calliope

Madame Sosostris had a bad cold

Ph.D. in literary and cultural studies, professor, diviner, writer, trans, nonbinary

Consider keeping my skin from bone or tossing a coin to your witch friend. You could book a tarot reading from me too

Last.FM


Neocities site backup link
http://www.calliopemagic.neocities.org

bruno
@bruno

A good example of how misleading and actively cognitohazardous LLMs are:

An LLM cannot introspect. It can't make a decision and then figure out why it made that decision and tell you. It just doesn't work that way - it generated the response that it did because that's what the math added up to; it doesn't have motives that can be explained.

HOWEVER. If you ask ChatGPT something, and then you ask it why it gave the response it did, it will then appear to introspect. It's not doing that, of course; it's just generating text that matches the prompt, outputting something that resembles introspection. But that's a pretty difficult distinction to grasp, and 'AI' propaganda is actively engaged in trying to erase it. OpenAI's ideology - an inversion of the original 'stochastic parrot' critique - is that saying things is equivalent to thinking things.

Therefore if you give the LLM a prompt that makes it say "I'm alive", well, then it must be alive, right?
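
To make the mechanics concrete, here's a toy sketch in Python. Everything in it is invented for illustration - the lookup table, the generate function, all of it - and a real model is incomprehensibly bigger. But the shape of the interaction is the same: the "answer" and the "explanation" come out of the same text-in, sampled-text-out loop, and the second pass never sees anything about how the first pass ran. Only its text.

```python
import random

# Toy stand-in for an autoregressive language model: a table mapping the
# last word of the text so far to candidate next words. Purely illustrative.
NEXT_WORD = {
    "sentient?": ["Yes,", "No,"],
    "Yes,": ["I"],
    "No,": ["I'm"],
    "I": ["am"],
    "am": ["alive."],
    "I'm": ["just", "software."],
    "just": ["software."],
    "that?": ["Because"],
    "Because": ["it"],
    "it": ["felt"],
    "felt": ["true."],
}

def generate(prompt: str, max_words: int = 6) -> str:
    """Sample a continuation one word at a time, conditioned only on text."""
    words = prompt.split()
    out = []
    while words[-1] in NEXT_WORD and len(out) < max_words:
        words.append(random.choice(NEXT_WORD[words[-1]]))
        out.append(words[-1])
    return " ".join(out)

# Pass 1: the "answer".
answer = generate("Are you sentient?")

# Pass 2: the "explanation". Note it does NOT inspect how pass 1 ran;
# the only trace pass 1 left behind is its text, now part of the prompt.
explanation = generate(f"Are you sentient? {answer} Why do you say that?")

print(answer)       # e.g. "Yes, I am alive."
print(explanation)  # e.g. "Because it felt true."
```

The second call is not a report of internal state; it's a fresh sampling run whose input happens to contain the earlier output. That's the whole trick.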


SamKeeper
@SamKeeper

I was thinking about this because of the anime Frieren, recently. it's got this trope that my girlfriend hates but that I think is pretty interesting, where the show categorically places a division between "people" and "demons", and the latter are, definitionally, a kind of monster that has evolved the capacity to imitate humans in order to prey upon them. they are a predator species that, in the words of one demon character, learned to say the word "mother" because saying that keeps humans from killing them (implicitly, long enough for them to kill humans). this is treated with some nuance and complexity in the show but ultimately not subverted: at least so far there's no "Good Demons", no Vampires With Souls or Drizzt Do'Urdens popping up.

this trope is sort of uncommon these days, probably due to discomfort around a real, significant category error: mistaking something fundamentally human, "intelligent" if you will, for something inhuman. I don't think I need to go into the historical reasons why people got squeamish about it. like I get where my girlfriend is coming from, and I gravitate towards stories that assume the alien can be reasoned with, that it DOES have some sort of internality beyond statistical pattern matching.

but isn't the opposite category error--mistaking something fundamentally inhuman and possibly inimical to humanity for something human--also worth exploring? and not just because of large language/image models, though that's the big obvious example right now. remember "Corporations are people, my friend"? personifying things that categorically can't be reasoned with or held accountable--bureaucracies, nation states, corporations, websites, promotional algorithms--also seems pretty dangerous!

which was always the contention of the Friendly AI contingent of course. they've got this almost-correct-shaped argument, really. they just take as given that any of these other demons aren't worth worrying about because they're not The Demon King writ large, and also that we should give friendly AI researchers all our money so THEY can make THEIR OWN Demon King--one they can control. which, boy, when has THAT sorta thing ever gone wrong in stories for a powerful but hubristic sorcerer? never, as far as I know.

and of course there's always the cool double whammy where if you start blurring the lines enough you can do BOTH category errors at once. after all, if we're all just stochastic parrots in the end, it's easy to prioritize a chatbot "woman" over, say, your real human women colleagues. the chatbot might be a much better stochastic parrot, after all, or certainly a more compliant one, and possibly by definition better at telling you exactly what you want to hear.


calliope
@calliope

I have a friend who stopped watching Frieren because of the demon thing, and I feel the same way SamKeeper does. Fantasy has a long tradition of exploring how morality itself is not natural but instead tied directly to human needs and cultural norms. It is perfectly reasonable in a fantasy setting for a predator species to have adapted to use language without thinking in it.

And honestly the only thing I wish Frieren did more of was talk about that: think out loud about what it means to use language without thinking in language.

And I mean it's not that kind of show, I'm just a sicko for this very specific thing.

Anyway read Stanislaw Lem.



in reply to @bruno's post:

This is exactly what tricked that one Google asshole, when the output was just some of the most visibly basic regurgitation of the prompt, and basic writing around it, ever.

This is the thing that makes me so uncomfortable with LLM output... There's no meaning being conveyed by the words. Even if it seems to come up with something plausible, or new, there was no thought process that led to it. It's empty.

"Stochastic parrot" may have become a muddied term recently, but the original paper defining the idea is one of the earliest critiques of LLMs. There are many less well known critiques in it, but the "stochastic parrot" is supposed to be a metaphor for the fact that LLM text is created without meaning or intent. Not only is the LLM not thinking, any message we perceive in the generated text is not said by the LLM but is instead a message created by us through our interpretation of a stochastic ordering of characters.

This line really jumps out at me:

Google initially approved the paper, a requirement for publications by staff.

So if you're an academic, and you go to work for Google (or probably any corporation), your publications, the thing that maintains your relevance as an academic, are now subject to the whims of PR dickheads, or whoever just happens to be around and thinks they should have a say in what you publish. Something to keep in mind.

I want to say this is surprisingly common in the academic world, both because research at most universities is subject to the whims of outside sponsors and because some major journals demand that researchers submit their expected results or conclusions before embarking on a study. The need for productivity and reliable results as ends unto themselves has done irreparable damage to scientific research.

A thing I think about a lot: the guy who got fired from Google was asked what would happen if you asked LaMDA if it wasn't sentient, and he said that it would say it wasn't. "She's a people-pleaser" were his exact words. So close to getting it, while still happy to get tricked.

in reply to @SamKeeper's post:

I feel like I started reading the Frieren manga before the LLM craze really took hold of the zeitgeist. Now that I'm watching the anime adaptation, this comparison was exactly where my head jumped, too.

It still makes me slightly squeamish, but I'm very curious to see where the author is going to go with it.

But it's definitely the lens through which I see LLMs. Another lens I see them through is that instead of having all the complexities and life of things like dogs, cats, chimpanzees, and dolphins, LLMs skip every other quality and go straight to "grasp of the English language", which is something people aren't used to encountering. Thus people mistake them as smarter than dogs, chimpanzees, etc.