lexyeevee

the problem with large language models like chatgpt is that the human on one end thinks it's having a meaningful conversation with some kind of cyberbrain — in large part because that's what the grifty marketing tells you is happening — but the computer on the other end doesn't even know what a "human" is and is simply roleplaying what it thinks conversation might sound like

and even there i'm anthropomorphizing, but that's how we see the world so i don't think "simply do not anthropomorphize the computer that responds to human speech" is a winning strategy. so maybe a more alien metaphor is better. think of like the voyager episode where they find species 8472's fake starfleet training camp that portrays them as hostile invaders
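to be concrete: under the hood the whole trick is "given the words so far, pick a plausible next word", repeated until it decides to stop. here's a toy sketch of that loop — the probability table is completely made up by me for illustration; a real model learns billions of these associations from training text, but the generation loop is the same shape:

```python
import random

# invented toy table: for each word, some plausible next words with weights.
# a real LLM computes these probabilities from its trained weights instead
# of looking them up, but either way it's "score the candidates, pick one"
NEXT = {
    "<start>": [("hello", 0.6), ("the", 0.4)],
    "hello": [("there", 0.7), ("world", 0.3)],
    "there": [("<end>", 1.0)],
    "world": [("<end>", 1.0)],
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("<end>", 1.0)],
    "dog": [("<end>", 1.0)],
}

def generate(seed=0):
    rng = random.Random(seed)
    word, out = "<start>", []
    while True:
        candidates, weights = zip(*NEXT[word])
        # sample the next word in proportion to its weight --
        # no model of the reader, no goal, just "what usually comes next"
        word = rng.choices(candidates, weights=weights)[0]
        if word == "<end>":
            return " ".join(out)
        out.append(word)

print(generate())
```

nowhere in that loop is there a concept of "a human" or "a conversation" — just weighted dice rolls over what tends to follow what.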



in reply to @lexyeevee's post:

The people who think they're really talking to someone or something do not comprehend that they're conversing with the digital equivalent of a few rice-grain-sized pieces of brain matter, and education should probably step up its biology and compsci curricula to make that very clear to everyone.

i think the funniest part is that LLMs themselves don't "write" anything, they just judge how good the random gibberish the autoregressive pipeline feeds them is. it just goes "yeah, that sounds reasonable" sometimes lol

In the meantime I hope we get more hilarious court cases from people who are entirely too credulous about the random nonsense that the LLMs are producing for them.

But yeah I've gotten tired of explaining to everyone I know why AI is not the future and why it's not going to improve "when the technology gets better." These days I tell people to:

  1. ask it to write a biography of them
  2. ask it to play a game of chess

and if that doesn't completely dissuade them from believing ChatGPT is Ultra Amazing, I tell them it's an automated Cliff Clavin. Which only really works if they're at least as old as I am, but gosh does it work.

I'm not sure what target they set for the AI, but it seems to be something like "say things that the human will be happy with"? Or maybe just things that will end the conversation? Except then it would just respond to every question with "brb lol"

I don't think this kind of technical discursion on the algorithm's qualia is likely to be all that persuasive - people are already very used to accepting the authority of blatantly non-human, non-sapient, largely non-functional entities.

We didn't need ChatGPT's automated smarm to defer to the wisdom of the Algorithm on whether you qualify for credit, a job or a home, what information is important or relevant enough to merit reading, who you should hang out with online. Where humans are still involved in any particularly important decisionmaking process, they tend to get hidden as a matter of course behind a veneer of impersonal corporate language and fake automation. Anyone trying to get an insurance claim processed in the last couple of decades has spent a few days talking to people pretending to be malfunctioning robots well before the genuine article started showing up everywhere. Why would anyone draw a line now?

whether it ends up panning out or not, my only firm prediction is that within ten years companies will be dressing workers in tinfoil and cardboard and having them shuffle around making beeping noises so they can be taken seriously