sorry to slide back into technoskepticism but this is how this always fucking goes
company: we are going to do thing with AI
commentators: but what about subtle problems, like the human touch or making deep ethical decisions? we may simulate conversation, but have we truly simulated empathy?
AI, immediately upon release: i'm gonna drive directly into a firetruck for no reason
commentators: ...we over-estimated the problems we would have.
--
EDIT: holy shit it's supposedly hand-written?!!?! I may have misblamed AI for this one but... wow, this might not even be an accident, this might be something closer to active malice
Edit: whoopsie daisy, it actually was using an LLM, but the tech bros who apparently control the code and host the chatbot just didn't tell NEDA or the researchers or literally anyone that they had made a fundamental change to the product after clinical trials, and then were just making that product available as if it had had any testing at all.
link to my rechost: https://cohost.org/literalHam/post/1664204-oh-my-god-it-was-sec
It's so much worse than an LLM generating diet advice for ED patients seeking help. That would be reckless negligence all on its own. However, this bot is supposedly not an LLM but a regular, guided chatbot. So to be clear: a human doctor, presumably someone in the WUSTL eating disorder lab, wrote the exact words the chatbot spit out recommending calorie restriction, calipers, etc. to ED patients seeking help. The earlier NPR and Vice pieces that referenced Dr. Fitzsimmons-Craft's team creating Tessa made it seem like she had been duped or misled, but it all looks much more sinister now based on this and also some of her tweets https://twitter.com/fitzsimmonscraf
Someone replying to her pointed out that the views expressed by this chatbot, that "weight loss and ED recovery can exist simultaneously," are exactly those of Dr. Fitzsimmons-Craft's ghoulish colleague Dr. Denise Wilfley (tw fatphobia and pro-ED at link) https://psych.wustl.edu/people/denise-wilfley (they've co-authored several papers together, so it's not a stretch to call them colleagues or to suggest that Wilfley may have been on, or advising, the team that created Tessa)
NEDA and Dr. Fitzsimmons-Craft are claiming this is some kind of glitch to do with Tessa accessing data from a different "module" than the one Fitzsimmons-Craft worked on. But it's equally appalling whether Dr. Wilfley (or another doctor) put this text directly into Tessa, or put it into some other chatbot hosted on the WUSTL server which Tessa is accidentally accessing. It's horrific that diet advice specifically targeting ED patients was written by a human doctor at all, but that is what appears to have happened.
edited to correct UW to WUSTL
also, searching NEDA Tessa now brings up several dead links, including this one https://www.x2ai.com/neda-tessa but that did lead me to the host site for Tessa, which seems to not be the university but this dystopian-ass startup https://www.x2ai.com/old-home So possibly the weight loss advice originated not directly from WUSTL's ED research lab but from a different, unrelated mental health chatbot hosted by the same private company, and that still should not include any kind of diet advice whatsoever, you absolute ghouls
