sorry to slide back into technoskepticism but this is how this always fucking goes
company: we are going to do thing with AI
commentators: but what about subtle problems, like the human touch or making deep ethical decisions? we may simulate conversation, but have we truly simulated empathy?
AI, immediately upon release: i'm gonna drive directly into a firetruck for no reason
commentators: ...we overestimated the problems we would have.
--
EDIT: holy shit, it's supposedly hand-written?!!?! I may have misblamed AI for this one, but... wow, this might not even be an accident, this might be something closer to active malice
