I have gotten into a lot of conversations recently where I keep coming back to the frustrating point that there is fundamentally no thought process involved in LLMs. They are not even imitating thought. They are using a statistical model to imitate the past output of creatures that verbalized their thoughts in writing.
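To be concrete about what I mean by "statistical model": even a toy bigram sampler does this trick at miniature scale, producing sentence-shaped output purely by replaying which words followed which in its training text. This is a minimal sketch of my own for illustration (the corpus and names are made up, and real LLMs use vastly larger models and fancier statistics), but the underlying move is the same: predict the next token from past text.

```python
import random
from collections import defaultdict

# Toy training text, purely illustrative.
corpus = "i think therefore i am . i think in words . i am not a machine .".split()

# "Training": count which word has followed which word. That is the whole model.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start="i", length=8):
    """Emit sentence-shaped output by replaying past co-occurrences."""
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate())  # e.g. "i think in words . i am not a"
```

No step in there involves anything like thought; it is bookkeeping over what was already written.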
The problem, though, is that society as a whole directly correlates being able to verbally communicate with exactly how intelligent a subject is. Less verbalization == less intelligent. Aside from the clear eugenics of this, it has a flip side where 99% of the population is wholly unable to conceive of a machine that could speak the way a thinking being does without any thought behind it. But that's all it is: a shadow box, a puppet show of how verbalization has communicated thought, generally, in the past. There is no ghost in this machine.
And it fucking sucks, because every time I see some asshole buy into thinking "AI" (by which they really mean LLMs, or even just ChatGPT) has even rudimentary thought, I know for a fact they would think I am fundamentally less "intelligent" because I go nonverbal sometimes. I know that the dehumanization of people comes just as easily as the anthropomorphization of the machines. It's a cruel double standard.

