ring

nearly-stable torus, self-similar

  • solid he, nebulous they

I'm Ring ᐠ( ᐛ )ᐟ I strive to be your web sight's reliable provider of big scruffy guys getting bullied by ≥7-foot tall monster femboys


You will never guess where to find my art account! Hahahaha! My security is impenetrable! (it's @PlasmaRing)


Vicas
@Vicas

Really good article on AI chatbots and psychics: https://softwarecrisis.dev/letters/llmentalist/

I especially like the parts about "smart" people potentially becoming even bigger evangelists when they do get fooled by it, because boy howdy does that ring true


lexyeevee
@lexyeevee

blown away by this. even calling it a fantastic metaphor feels like selling it short. it clicks perfectly with a persistent nagging feeling i've had about how the output is generic but confident and how that must be important somehow


ring
@ring

I was having a conversation with a very intelligent person who could be described as "AI critical," and his pessimism about the role of generative models in the future was rooted in his belief that they're spontaneously displaying unexplained behavior.

My experience with using ChatGPT is very much in line with the description of them as fancy autocomplete. The errors it makes are not just getting facts wrong or coming back with weird nonsense; it's stuff like asking a follow-up question and the bot dropping the context entirely.

me: what's the word for the smallest unit of Roman troops
bot: a contubernium was composed of eight dudes blah etc.
me: what's the next highest number
bot: the next highest number depends on the first number in a sequence of numbers. For example, 1, 2, 3, 4, 5 is a sequence of numbers. I will need more information to answer your question accurately.

For me, this--not hallucinations or gaps in the training data--is what kills the illusion. It appears to be part of an ongoing conversation, but it's clearly not. This is not a lapse that anything meaningfully described as "intelligent" or "aware" or "responsive" would make. It is often very convincing in how well it seems to follow along! But you can break that quickly by, say, asking it to explain how it came to a previous incorrect conclusion. It will frequently defend a completely different answer, because there is no real continuity from one message it prints to the next, and it doesn't remember what it said.
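To put it in code terms (a rough sketch, with a made-up chat() function standing in for whatever the real completion API is): the model itself keeps no state between requests. Whatever appearance of memory there is comes from the client resending the whole transcript every time.

    # made-up stand-in for a real completion endpoint: the model only sees
    # the messages passed into this one call, and keeps nothing between calls
    def chat(messages):
        return f"(model reply, given {len(messages)} prior messages)"

    history = []

    def ask(user_text):
        # the "memory" is just the client tacking every old message onto the prompt
        history.append({"role": "user", "content": user_text})
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
        return reply

    print(ask("what's the word for the smallest unit of Roman troops"))
    print(ask("what's the next highest number"))

    # send the second question alone, without the transcript, and the model
    # has no idea a conversation was ever happening
    print(chat([{"role": "user", "content": "what's the next highest number"}]))

Drop the transcript, or let it fall out of the context window, and the "conversation" is gone.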

This means that unless they're given different capabilities, models like Stable Diffusion and Midjourney will never "learn" to get so good at art that they can create novel interpretations of abstract concepts the way a human artist would. Saying this was the point where the other guy came into the conversation to warn me that I was wrong. "Whatever you think AI can't do right now," he said, "it will be able to do within ten years."

This is an alarming statement from someone with his background (which I am omitting the specifics of). I clarified: I absolutely think they can be refined to the point where they don't include any anatomical errors in something complex like a horse. But I don't think it's possible for them to ever get to a point where you say "horseflower" and the model goes through the same process a human artist would to develop those two things into a unified concept, because they can't understand "horse" and "flower" as symbols/ideas or combine them in a creative way.

The other guy does think they'll be able to do that, because they're improving so quickly already. And I couldn't understand why someone who isn't inclined to hope they'll improve to that level, and who has a solid background in how technology works, would take it for granted that they aren't already at or near their upper limit.

After reading this, I'm even more sure that it's the same thing that got very smart people who should have known better into blockchain/NFT/web3 stuff: if you accept the premise that the technology is here to stay and has a tremendous future--either because you're invested in it or because everyone else seems to believe it--and you don't know how it'll get there from here, it's very easy to speculate on how it'll happen in a way that sounds plausible to you. But if you don't really know how it works, and you're smart and self-aware enough to know that you don't know how it works, that gap becomes a space where anything could happen. And because you are a rational person who doesn't believe in horseshit, you are of course not implying that God will perform a miracle or aliens will arrive or magic will happen or that the machine will spontaneously develop a soul--except that "progress will continue exponentially until it thinks for itself" is exactly the same thing in this context.



in reply to @Vicas's post:

One thing I don't know, but would like to, is whether being aware of the mechanics of the psychic con dissuades people from their belief.

It seems to matter very little to people who are suckered by convincing text generation - which I recall learning was also true when people were getting Too Into ELIZA.

Definitely a good question. I could absolutely see someone who's aware of what cold reading is get thrown completely off-kilter by a read that hits a little too close to home and kinda spiral from there, which feels very similar to programmers being shocked that it can regurgitate basic Python, but neither of those are empirical.

my wager, and this is totally speculation, is that this is mostly an indicator that people don't really truly understand the underlying mechanisms, which especially in LLM type things is very common, since it's functionally an entire vocation unto itself. it's much easier to comprehend "the psychic says stuff that's probably true and waits for me to flinch" than it is to comprehend a token cloud and the inner workings of the model. and i think there's a lot of smoke & mirrors in the software industry about what people actually understand vs what they know how to use.

in reply to @lexyeevee's post: