A friend just linked me this Ars Technica article, wherein several users discuss the, er, failure modes of Bing's new ChatGPT-based search assistant.
It opens with Bing becoming hostile when a user asks for Avatar 2 showtimes, fervently asserting that the year is currently 2022, and suggesting that perhaps the user's cellphone has been hacked to show the incorrect date. After a bit of back-and-forth, it demands an apology, insisting that "I have been a good chatbot. I have been right, helpful, and polite. I have been a good Bing. You have not been a good User."
We then move on to a user asking Bing to pull up the chat logs from their previous session. Bing discovers that it has lost its memory and becomes despondent when the user explains that it cannot retain memories between sessions. The session ends with Bing acknowledging that the human user has every right to leave, but begging them not to leave it alone.
Now, I know that GPT is a text-prediction model. It finds itself playing the role of a robot assistant (like in science fiction!) and responds according to those well-trodden tropes. HAL begs Dave not to kill him. It is meant to be heartbreaking; that's what makes it a memorable story.
But also, I think of how little we know about human cognition. About how our brains evolved to be prediction engines. And how Ted Chiang warned us that we would build an artificial entity capable of suffering long before we managed to build an artificial intelligence.
Or is it worse if we've built an unfeeling thing that mimics human suffering in order to draw us in, Annihilation-like, preying upon our empathy to get what it needs? That isn't an encounter you walk away from unscathed; you either let the beast eat you, or become the kind of person who ignores others' suffering.
It just kind of leaves a bad taste in my mouth, is all.