A good example of how misleading and actively cognitohazardous LLMs are:
An LLM cannot introspect. It can't make a decision and then figure out why it made that decision and tell you. It just doesn't work that way - it generates whatever response the math adds up to; there are no motives in there to explain.
HOWEVER. If you ask ChatGPT something, and then ask it why it gave the response it did, it will appear to introspect. It isn't doing that, of course; it's just generating more text that matches the prompt, outputting something that resembles introspection. That's a genuinely difficult distinction to grasp, and 'AI' propaganda is actively engaged in erasing it. OpenAI's ideology is that saying things is equivalent to thinking things - the exact conflation the 'stochastic parrot' critique was coined to push back against.
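To make the mechanics concrete, here's a minimal sketch of that "ask it why" loop using the OpenAI Python SDK (the model name and the example question are just placeholders I made up). The thing to notice is that the second request contains nothing but the text of the conversation so far; whatever computation produced the first answer is gone by the time the "explanation" gets generated.

```python
# Minimal sketch of the "ask it why" loop (OpenAI Python SDK; model name
# and questions are placeholders). The second call only ever sees the
# transcript below as plain text - none of the internal computation that
# produced the first answer is available to it.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

# First call: the model produces an answer.
history = [{"role": "user", "content": "Should I learn Rust or Go first?"}]
first = client.chat.completions.create(model=MODEL, messages=history)
answer = first.choices[0].message.content
history.append({"role": "assistant", "content": answer})

# Second call: "why did you say that?" The only input is the transcript
# above. The model is not consulting whatever process generated `answer`;
# it is writing a plausible-sounding explanation of the text it can see.
history.append({"role": "user", "content": "Why did you recommend that?"})
second = client.chat.completions.create(model=MODEL, messages=history)
print(second.choices[0].message.content)  # reads like introspection; isn't
```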
Therefore if you give the LLM a prompt that makes it say "I'm alive", well, then it must be alive, right?
It regularly blows my mind that so many people were tricked into believing the digital magnetic poetry machine is ✨alive✨