Talking about LLMs and other machine learning systems coughing up incorrect information or bad results as "hallucination" is really just trying to paper over the fact that they have no understanding or knowledge. A wrong answer is no more or less a "hallucination" than anything else the model spits out: it's doing the thing, it's executing its algorithm, and you just don't like the results or don't want to admit what they imply about your expensive Eliza program.
*pulls magnetic poetry words out of a hat and slaps them on the refrigerator* wow some of this doesn't make sense. the magnetic poetry is hallucinating
I have to admit it's a fascinating "tell": all these horrible tech executives think they're experts on human cognition because they've memorized the DSM and view literally everything through the lens of pathology. (if only they knew that they were themselves "hallucinating" by their own wretched definition, i.e. unable to tell valid information from confabulation) ~Chara