bruno
@bruno

A good example of how misleading and actively cognitohazardous LLMs are:

An LLM cannot introspect. It can't make a decision and then figure out why it made that decision and tell you. It just doesn't work that way - it generated the response that it did because that's what the math added up to; it doesn't have motives that can be explained.

HOWEVER. If you ask ChatGPT something, and then you ask it why it gave the response it did, it will then appear to introspect. It's not doing that, of course, it's just generating text that matches the prompt, it's outputting something that resembles introspection. But that's a pretty difficult distinction to grasp, and 'AI' propaganda is actively engaged in trying to erase it. OpenAI's ideology, the whole 'stochastic parrot' idea, is that saying things is equivalent to thinking things.
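To make that concrete, here's a toy sketch in Python. Everything in it is made up for illustration (the real model is a neural network, not a lookup table of canned lines), but the shape of the loop is the point: the "why did you say that?" question just gets appended to the transcript and run through the exact same text-generating step that produced the original answer. There is no second, privileged channel into its reasoning.

```python
import random

# Toy stand-in for the real model. A real LLM predicts the next token with
# a neural net; here it's just canned, plausible-sounding continuations
# keyed on the last thing the user said. All of this is invented for the
# sake of the example.
CANNED = {
    "What is the capital of Australia?": [
        "The capital of Australia is Canberra.",
    ],
    "Why did you give that answer?": [
        "I said that because Canberra is the seat of government.",
        "I reasoned carefully about several sources before answering.",
    ],
}

def generate(transcript: list[str]) -> str:
    """Sample a continuation conditioned on the conversation so far.
    Note that no hidden 'reason' is being consulted: the 'explanation'
    comes out of exactly the same process as the original answer."""
    last_user_line = transcript[-1]
    return random.choice(CANNED[last_user_line])

transcript = ["What is the capital of Australia?"]
transcript.append(generate(transcript))          # the "answer"

transcript.append("Why did you give that answer?")
transcript.append(generate(transcript))          # the "introspection"

print("\n".join(transcript))
```

Notice that one of the canned "explanations" claims careful reasoning that never happened. That's all the apparent introspection is: more text that fits the prompt.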

Therefore if you give the LLM a prompt that makes it say "I'm alive", well, then it must be alive, right?


amydentata
@amydentata

It regularly blows my mind that so many people were tricked into believing the digital magnetic poetry machine is ✨alive✨



in reply to @bruno's post:

This is exactly what tricked that one Google asshole, even though it was some of the most visibly basic regurgitation of the prompt, and basic writing around it, ever.

This is the thing that makes me so uncomfortable with LLM output... There's no meaning being conveyed by the words. Even if it seems to come up with something plausible, or new, there was no thought process that led to it. It's empty.

"Stochastic parrot" may have become a muddied term recently, but the original paper defining the idea is one of the earliest critiques of LLMs. There are many less well known critiques in it, but the "stochastic parrot" is supposed to be a metaphor for the fact that LLM text is created without meaning or intent. Not only is the LLM not thinking, any message we perceive in the generated text is not said by the LLM but is instead a message created by us through our interpretation of a stochastic ordering of characters.

This line really jumps out at me:

Google initially approved the paper, a requirement for publications by staff.

So if you're an academic and you go to work for Google (or probably any corporation), your publications, the stuff that maintains your relevance as an academic, are now subject to the whims of PR dickheads, or whoever happens to be around and thinks they should have a say in what you publish. Something to keep in mind.

I want to say this is surprisingly common in the academic world, both because research at most universities is subject to the whims of outside sponsors and because some major journals demand that researchers submit their hypotheses, and effectively their conclusions, before embarking on a study. The need for productivity and reliable results as ends unto themselves has done irreparable damage to scientific research.

A thing I think about a lot: the guy who got fired from Google was asked what would happen if you asked LaMDA if it wasn’t sentient, and he said that it would say it wasn’t. “She’s a people-pleaser” were his exact words. So close to getting it while still being happy to get tricked.