bruno
@bruno

Also like... you are not immune to propaganda, so I would caution you to consider whether your 'nuanced argument' isn't just an OpenAI talking point in disguise.

Specifically I'm talking about arguments about 'intelligence.' Now intelligence is a slippery and ill-defined term, but I've seen people who I'd hope are well-intentioned talk about the "emergent behavior" of LLMs and suggest that that's a kind of "intelligence."

This is very much the Sam Altman position – 'intelligence' is a kind of spectrum, with 'strong' 'general artificial intelligence' on one end, LLMs on the other, and humans somewhere in between.

But the thing you're meant to believe from hearing this argument is that if only they can pump even more precious resources into an LLM, it will eventually become 'more intelligent'. And that's... simply not true. What an LLM is doing is just qualitatively different from what a human mind is doing. You can call that 'intelligence' if you want, I guess, but that's extremely misleading.

Or, to put it another way: you can say that I can cook, and you can say that a microwave oven can cook, but those are wildly different definitions of what 'cooking' means; and no advance in microwave technology can make it capable of producing, say, a caesar salad.

Ultimately OpenAI wants 'intelligence' to be a singular, obviously measurable thing, and they want you to believe that their product can be said to possess that thing. This is both an inaccurate view of the world and a harmful one; people seem to very easily miss how bound up in colonialist and racist thinking the AI boosters' view of 'intelligence' is.

Above all, it's extremely easy to be credulous about LLMs because the technology is frankly intended to trick people. The tech itself is propaganda for the tech. LLMs generate text that's statistically likely, that resembles 'real' text written by someone; that's the trick. At a glance it's of course very difficult to tell a statistical average apart from the real thing; the differences show up in the ways that really matter, when the rubber meets the road.
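To make the 'statistically likely text' point concrete, here's a deliberately tiny sketch (mine, and a caricature: a real LLM is a huge neural net over subword tokens, not a bigram table, but the generation loop has the same shape):

```python
# Toy sketch: "given what came before, emit a statistically likely next
# word," repeated. This is the whole trick, just at a vastly smaller scale.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        options = following.get(word)
        if not options:
            break
        word = random.choice(options)  # sample proportionally to frequency
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug" -- plausible, meaningless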

Pumping more electricity into these data centers and munging more text is not going to produce something that has the qualitatively different capabilities that a human mind has. This is like saying "cars are getting faster all the time so soon they'll be able to fly." Or thinking that VR headsets getting better will lead, merely through incremental improvement, to full dive experiences that are indistinguishable from real life.



in reply to @bruno's post:

the ai industry promotes itself, and is a great client for the data selling industry, which is a great client for the data collection industry, which is a great client for the surveillance industry etc etc

I was shitposting about it earlier, but, recognizing that I am also not immune to propaganda... if any of these were going to have a chance, it would have been Bard, right? Google's (supposed) knowledge graph, with its tons of relationships between data and meaning, and they came out of the gate with... the same generated slop.

All they're doing is more text associations. But the reversal curse is real, and you can do as many additional associations as you like, the scale can be mind-boggling, and... it's still the same. It's still just Eliza. It's still just tapping the middle word of autocorrect, over and over.
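If you want the reversal curse made concrete, here's a hypothetical toy of my own (real LLMs are far more than a bigram table, but it shows the one-way-association problem in miniature):

```python
# Toy sketch of the "reversal curse": a model trained on "A is B" text
# picks up the forward association but gets nothing in reverse for free.
from collections import defaultdict

training = "tom cruise 's mother is mary lee pfeiffer".split()

next_word = defaultdict(list)
for prev, nxt in zip(training, training[1:]):
    next_word[prev].append(nxt)

# Forward: "tom cruise 's mother is ..." completes fine.
print(next_word["is"])   # ['mary']

# Reverse: "mary lee pfeiffer 's son is ..." -- the word "son" never
# appeared in training, so the table has nothing. The association only
# runs one way, no matter how much text you munge.
print(next_word["son"])  # []
```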

Moreover, their beliefs about AGI all rest on the belief that consciousness or self-awareness is a specific stage on the linear scale of intelligence; that a sufficiently intelligent system is inherently self-aware because those are definitionally the same thing. I have a whole-ass post I've been meaning to write for over a year on that topic, also covering why it's bound up in racism and colonialism. Suffice it to say, there's a whole body of work around a theory of mind where consciousness is well-defined and doesn't actually have very much to do with intelligence at all.

worth noting that the fearmongering from AI businessmen and other TESCREALers that AI is an Existential Danger To Humanity and will turn into Skynet or do a singularity or something is itself a form of marketing for them

(this is to be distinguished from the type of criticism in this post and basically everywhere i've seen on cohost, to be clear)

I love the "emergent behavior" thing, because since nobody has actually seen it happen, it's transparently just code for "we actually have no idea how any of this works, but we hope that we can skip the sloppy process of learning and inventing and just accidentally leapfrog into a world-changing advance." It doesn't even matter what technology they mean, at that point. Maybe snow shovels in a pile will cause intelligence to emerge. You don't know!