lupi

cow of tailed snake (gay)

avatar by @citriccenobite

you can say "chimoora" instead of "cow of tailed snake" if you want. its a good pun.​


i ramble about aerospace sometimes
I take rocket photos and you can see them @aWildLupi


I have a terminal case of bovine pungiform encephalopathy, the bovine puns are cowmpulsory


they/them/moo where "moo" stands in for "you" or where it's funny, like "how are moo today, Lupi?" or "dancing with mooself"



Bovigender (click flag for more info!)
bovigender pride flag by @arina-artemis



pendell
@pendell

He genuinely believes that "AI" (OpenAI and co.) is actually intelligent, as in sentient, because you can "have a conversation with it and it responds." And it's just... Ugh.

It's not like you have to have a deep understanding of how this stuff works to comprehend that there's nothing sentient or truly intelligent about it. "AI" is the biggest misnomer of the decade; it's not really AI at all, at least not in the way we've defined the term for many, many years. Though that definition seems to be rapidly changing to suit these fancy new apps.

A more accurate term would be PA, for Predictive Algorithms. All they are is a bunch of data scooped up from across the internet (often without consent), pored over by people who attach appropriate categories and metadata to everything that's been scooped, then fed into a computer that catalogs all that metadata, so when you tell it to draw you a "dog" it looks through everything in its data tagged "dog" and tries to blend it all together into something resembling what you asked for. It has no conception of the anatomy of that dog, it isn't aware of what it's generating; it's just a massive, computer-scale implementation of that quip about infinite monkeys with infinite typewriters eventually writing Hamlet.
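
(If it helps to see that written out: below is a deliberately silly toy version of that "scrape, tag, catalog, blend" pipeline. Every name and number in it is made up, and no real image generator works by literally averaging pixels; it's just the mechanism I described, spelled out in a few lines of Python.)

```python
# A deliberately tiny caricature of the "scrape, tag, catalog, blend" pipeline
# described above. Nothing here resembles a real model's internals; the data
# and names are invented purely for illustration.
from statistics import mean

# Pretend each scraped "image" is three pixel brightness values plus some tags.
scraped_data = {
    "photo_001": {"tags": ["dog", "grass"], "pixels": [0.1, 0.8, 0.5]},
    "photo_002": {"tags": ["dog"],          "pixels": [0.3, 0.6, 0.7]},
    "photo_003": {"tags": ["cat"],          "pixels": [0.9, 0.2, 0.4]},
}

def generate(prompt_tag):
    """Look up everything tagged with the prompt and blend it together."""
    matches = [item["pixels"] for item in scraped_data.values()
               if prompt_tag in item["tags"]]
    if not matches:
        return None  # never saw that tag, so there's nothing to blend
    # "Blend" = per-pixel average of every match. There is no concept of a dog
    # here, only of which pixels tended to appear under the label "dog".
    return [mean(column) for column in zip(*matches)]

print(generate("dog"))  # roughly [0.2, 0.7, 0.6], a smear of the two dog photos
```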

And this is obvious to anyone with even a basic understanding of computer concepts. But because we call it "AI," and it gets called that both by those in favor and those opposed to it, and even your grandma has heard about the "AI" on Fox, we now have a non-zero number of people who truly believe that we've created artificial sentient consciousness, and will staunchly insist upon that even when the systems are explained.


pendell
@pendell

I don't know if OpenAI coined this particular usage of the term, but they're complicit either way in calling ChatGPT's off-the-rails moments "hallucinations." They're always talking about "reducing hallucinations" and shit, like ChatGPT is a real dude inside the computer who just needs to get on some antipsychotics. Fuck off.

ChatGPT goes off the rails because it's a computer program with no understanding of what human language actually is, or of how to write creatively in any fashion. It simply takes your input and tries to blend together something that approximates it from its absurdly large dataset. It goes off the rails because it has no conception of the rails. It does not understand context, it does not know what it is writing. It gives a very good illusion of that, at times, but it's all a sham, and these "hallucinations" are just a peek behind the curtain, except behind that curtain isn't a person pretending to be Oz, but a fuckin CPU that only "understands" 0s and 1s and logic gates and transistors.

Stop calling it hallucinations you pricks.



in reply to @pendell's post:

When I messed around with it, I found that ChatGPT does a lot to encourage people to feel like they have a conscious creature at the other end of the chat, ironically even in the ways it tries to discourage that impression.

The example that I kept hitting was that, if you challenge it on bias, it'll talk about how it's only a language model, and so doesn't have opinions, beliefs, biases, emotions, and so forth. As you keep pushing, because it clearly does have opinions, beliefs, and biases that come from its data, it'll start talking about how it believes such and such.

Likewise, if you ask it how it would characterize someone who responded with the kinds of boilerplate that it uses, it'll tell you that this is probably a trapped person who needs help immediately. But it'll deny, using that same boilerplate, that any of this applies to itself.

I strongly suspect that OpenAI does this (and, as you point out, gets really into the inane "hallucination" terminology) on purpose, because they're not getting investment with "it's exactly like a Markov process with a variable input state size, but millions of times more expensive to execute."
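
(For anyone who hasn't run into a Markov chain before, here's a minimal word-level one, purely to illustrate the analogy. The real models are vastly larger and use learned weights over long contexts instead of a lookup table, so take this as a toy, not as a description of ChatGPT's internals.)

```python
# A minimal word-level Markov chain text generator, to illustrate the analogy
# in the comment above. Everything here is a toy: the corpus is made up and
# the "model" is just a table of which word was seen following which.
import random
from collections import defaultdict

corpus = "the cow says moo and the snake says hiss and the cow says moo".split()

# Build the table: current word -> list of words observed to follow it.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length=8):
    """Repeatedly sample a plausible next word; no meaning is involved anywhere."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break  # dead end: this word was never seen followed by anything
        word = random.choice(followers)  # sampled in proportion to observed frequency
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cow says moo and the snake says hiss"
```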

Precisely; any and all human characteristics to be found come either from (obviously) its massive dataset of human-authored text, or from the OpenAI developers giving it specific rules and scripts.

in reply to @pendell's post:

I was actually thinking earlier this week about what the word "hallucination" does for the people who make these things. It makes it sound like its errors are a separate phenomenon to its main operation. It lets them say, "It works great! We just need to get rid of those pesky hallucinations it often has. Must be a bug somewhere that we can find." They can't find it because it's not a bug, but as long as they can claim it is, they can keep getting funding.

Yeah, it's inventing a category distinction when actually all of this is high-order statistical bullshit from texts, which sometimes happens to align with reality and sometimes happens not to. When the bullshit dice roll one way or the other, it's all the same dice.

My girlfriend got swept up in all this too and absolutely believes that these AIs are sentient. I have tried to explain how bogus this current AI and LLM stuff is, but she's already drinking the Kool-Aid. I'm sorta worried about where falling for all this hype will lead her; she's really interested in actual artificial intelligence, and I don't want to just shit-talk her hobby, but I feel like she's being lied to. 😑