pendell

Current Hyperfixation: Wizard of Oz

  • He/Him

I use outdated technology just for fun, listen to crappy music, and watch a lot of horror movies. Expect posts about These Things. I talk a lot.

Check tags like Star Trek Archive and Media Piracy to find things I share for others.



pervocracy
@pervocracy

sorry to slide back into technoskepticism but this is how this always fucking goes

company: we are going to do thing with AI

commentators: but what about subtle problems, like the human touch or making deep ethical decisions? we may simulate conversation, but have we truly simulated empathy?

AI, immediately upon release: i'm gonna drive directly into a firetruck for no reason

commentators: ...we over-estimated the problems we would have.

--

EDIT: holy shit it's supposedly hand-written?!!?! I may have misblamed AI for this one but... but wow, this might not even be an accident, this might be something closer to active malice


literalHam
@literalHam

Edit: whoopsie daisy, it actually was using an LLM, but the tech bros who apparently control the code and host the chatbot just didn't tell NEDA or the researchers or literally anyone that they had made a fundamental change to the product after clinical trials, and then were just making that product available as if it had had any testing at all.
link to my rechost: https://cohost.org/literalHam/post/1664204-oh-my-god-it-was-sec

It's so much worse than an LLM generating diet advice for ED patients seeking help. That would represent reckless negligence all on its own. However, this bot is supposedly not an LLM but a regular, guided chatbot. So to be clear: a human doctor, presumably someone in the WUSTL eating disorder lab, wrote the exact words the chatbot spit out, recommending calorie restriction, calipers, etc. to ED patients seeking help. The earlier NPR and Vice pieces that referenced Dr. Fitzsimmons-Craft's team creating Tessa made her seem like she was duped or misled, but it all looks much more sinister now based on this and also some of her tweets https://twitter.com/fitzsimmonscraf
Someone replying to her pointed out that the views expressed by this chatbot, that "weight loss and ED recovery can exist simultaneously," are exactly those of Dr. Fitzsimmons-Craft's ghoulish colleague Dr. Denise Wilfley (tw fatphobia and pro-ED at link) https://psych.wustl.edu/people/denise-wilfley (they've co-authored several papers together, so it is not a stretch to call them colleagues or to imply that Wilfley may have been on, or advising, the team creating Tessa)
NEDA and Dr. Fitzsimmons-Craft are claiming this is some kind of glitch to do with Tessa accessing data from a different "module" than the one Fitzsimmons-Craft worked on. But it's equally appalling whether Dr. Wilfley (or another doctor) put this text directly into Tessa, or put it into some other chatbot hosted on the WUSTL server which Tessa is accidentally accessing. It's horrific that diet advice specifically targeting an ED patient was written by a human doctor at all, but that is what appears to have happened.

edited to correct UW to WUSTL
also, searching NEDA Tessa now brings up several dead links, including this one https://www.x2ai.com/neda-tessa but that did lead me to the host site for Tessa, which seems to be not the university but this dystopian-ass startup https://www.x2ai.com/old-home
So possibly the origin of the weight loss advice is not WUSTL's ED research lab directly, but a different, unrelated mental health chatbot hosted by the same private company. That still should not include any kind of diet advice whatsoever, you absolute ghouls.



in reply to @pervocracy's post:

this is part of why I have such a dim view of industrial AI ethicists at this point -- these technologies have such profound flaws and are universally deployed so irresponsibly that I have a very hard time believing any subject matter expert going into a role like that is doing it because they sincerely think they can ensure that this time, it's done correctly

Part of the problem is that AI Ethicists are trained in the ethics of AI. When they are asked to weigh in on the ethics of the middling chatbots sold as "AI," they are entirely out of their element.

It's worth making a distinction between the folks like Timnit doing AI Ethics work (and getting fired for it) and the dipshits worrying about the robot apocalypse, who mostly call themselves "AI Alignment" researchers and disparage the ethics folks.

in reply to @literalHam's post:

Fuck...

I was wondering about this, because I remembered from previous reporting that it was a traditionally coded chatbot with only set responses. That seemed very bad (impersonal, can't adapt like a human would), but not as bad as an LLM (impersonal, will randomly tell people to go on a diet because its training data has lots of people doing that). But this is really sinister.

Thanks for adding all this info!

yeah like, using an LLM would be a problem due to its unpredictability combined with the fact that it can (sometimes) convincingly seem human/empathetic. So it's more likely to emotionally connect with a user, but that connection makes the user more vulnerable to persuasion or any other hazards of talking to a stochastic parrot.

Using a rules-based chatbot has the inverse flaw of coming off as uncaring and not listening: since it can only respond in a limited number of prewritten ways, users may feel it is "ignoring" something important they've said. I'm sure most of us have had a frustrating experience like that trying to use a customer service chatbot. In the case of a mental health crisis, though, that feeling of coldness could exacerbate feelings of being alone and uncared for, which is also dangerous.

This is just the worst of both worlds where we've got the uncaring Siri-ness of it all, but on a platform so glitchy that it may as well be an LLM for how little quality control the designers seem to be able to impose on it
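
(For anyone who hasn't built one of these, here's a minimal sketch of what "a limited number of prewritten ways" looks like in practice. This is purely illustrative, not Tessa's actual code, which isn't public; the keywords and canned replies below are made up. The point is that anything outside the rules falls through to a generic fallback, which is exactly the "it's ignoring me" feeling described above.)

```python
# Hypothetical rules-based chatbot sketch: every reply is a prewritten
# string chosen by simple keyword matching. Keywords and responses here
# are placeholders, not anything from the real Tessa bot.

RULES = [
    ({"lonely", "alone"},
     "It sounds like you're feeling isolated. Would you like some coping resources?"),
    ({"meal", "eating"},
     "Regular meals are an important part of recovery. Have you spoken with your care team?"),
]
FALLBACK = "I'm sorry, I didn't quite understand. Could you tell me more?"

def reply(message: str) -> str:
    words = set(message.lower().split())
    for keywords, canned_response in RULES:
        if words & keywords:      # any keyword present -> return the prewritten reply
            return canned_response
    return FALLBACK               # everything else is effectively ignored

if __name__ == "__main__":
    print(reply("I feel so alone tonight"))               # hits a rule
    print(reply("my coach says I should cut calories"))   # falls through to FALLBACK
```

An LLM-backed bot removes that fallback problem but swaps in the unpredictability problem: you no longer control what the reply string actually says.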