ring

nearly-stable torus, self-similar

  • solid he, nebulous they

I'm Ring ᐠ( ᐛ )ᐟ I strive to be your website's reliable provider of big scruffy guys getting bullied by ≥7-foot tall monster femboys


You will never guess where to find my art account! Hahahaha! My security is impenetrable! (it's @PlasmaRing)


pervocracy
@pervocracy

I really need every organization ever to understand that large language models cannot ever fact-check without a major qualitative change in how they work. This isn't like how a toddler can't read, where it's a matter of putting in more time and effort. This is like how a rock can't read.

It's in the name! Large language model! They are getting better at forming grammatical sentences, and cannot get better at understanding their meaning! There is simply no mechanism in there, as best I understand, for determining that one grammatically valid sentence is likely to be true and another is likely to be false. Not an underdeveloped beta-stage mechanism. Nothing. This car isn't slow, it has no wheels.

So it's really weird to see all sorts of organizations rolling out "AI" features like we don't all know this. I guess it's the hot thing and it's cheap, but it's such a frustrating Emperor's New Clothes situation and all I can hope for is that some high profile companies get sued when their LLM writes checks their customer service can't cash.

in reply to @pervocracy's post:

I agree that they're terrible for this and will probably be terrible at it forever, but I do think it's important to acknowledge that the argument is that a perfect language model trained on truthful text is effectively a model for knowledge. I don't think that's necessarily wrong (though it's also not obviously correct), but "perfect" and "truthful" are doing extremely heavy lifting there, and LLMs now and for the foreseeable future will be shitty at both.

Talking about this feels almost exactly like talking about NFTs in video games. Specifically the part where they'd say, "In just a few years you'll be able to take your Call of Duty gun into Mario!" and there was no way to explain that this wasn't going to happen. They'd think I was foolishly claiming an upper limit on progress, but they fundamentally did not understand that no amount of powerful technology will make one studio want to honor a potentially infinite number of purchases made in games owned by other people, even if for some reason all games were suddenly coded exactly the same way using exactly the same art assets in exactly the same engine. Yeah, sure, I guess unicorns could stampede out of my ass, too. Never say never.

I think even more so with this one, "the technology is advancing exponentially!" is doing a lot of work, and it's both terrifying and fascinating to watch people's preexisting ideas of what "AI" is work their way into it, even if they're not happy about its existence. Like, "Well, it could learn independently, if it has enough material!" as though it's a given that it'll exhibit emergent behavior. The fridge cannot spontaneously learn how to cook food! It doesn't have a heating element!