Nathan Sorenson:
I refuse to say that an LLM "makes things up" or "states things with confidence" - I get that it's convenient shorthand, but these product managers want you to anthropomorphise their garbage products. It's our job to go out of our way to resist. We have to be strategically pedantic on this one. They are hastily rushed out garbage products, not lil smol boys who are doing their best.
I have come to feel that this is a line worth holding, because even pieces like this one - which does a reasonable job of pointing out why the AI industry's broader pitches are harmful, like giving companies ways to evade responsibility for things like firing decisions by "delegating" (here, a verb of deliberate ambiguity) them to a piece of software - still get a bullshit headline like "AI is starting to pick who gets laid off" [1] that offers the framing most beneficial to said industry: don't blame us, say the customer-execs, the AI told us to! Public resistance so often begins with language, and I love "strategic pedantry" as a name for the attitude we need here.
I believe the AI industry's hardest push is going to come once the specific hype cyclone it created around LLMs subsides (as the initial wild overpromises of their potential inevitably fray beyond credibility), and that this push will be about policy as product. They will engage in unprecedented levels of "AGI theater" (because General Intelligence is almost impossible to define clearly on a philosophical level, they can always safely claim that the latest product is but a humble incremental step towards that forever-5-to-10-years-away bullshit, and credulous factions within the press will happily fill in the blanks) to frame software service offerings that purport to "make decisions": who to fire, who to arrest and convict, whose health insurance claims to deny, etc. The problem with these is, and will be, that they're functionally, politically, morally indistinguishable from Nixon and Kissinger circa 1973 flipping a lucky coin in the Oval Office to decide which village to napalm next [2] - when any given coin flip proves to have been a catastrophically bad decision, it won't matter; "the algorithm" is the product, and the product is a perpetual work-in-progress, always getting better, always to be judged on its future potential rather than its current performance. When a decision-making apparatus is non-human and your system's POSIWID (the purpose of a system is what it does) is to extract and dehumanize and murder for profit and power, let's call that what it is: humans with power deciding to harm the world, with some extra steps.
A clear alternative to "AI" is to focus on the people present in the system. If a program is able to distinguish cats from dogs, don't talk about how a machine is learning to see. Instead talk about how people contributed examples in order to define the visual qualities distinguishing "cats" from "dogs" in a rigorous way for the first time. There's always a second way to conceive of any situation in which AI is purported. This matters, because the AI way of thinking can distract from the responsibility of humans.
-- from "AI is an Ideology, Not a Technology", which I arrived at via Emily Bender's excellent talk "Resisting dehumanization in the age of AI"
"Policy as product" is my attempt to state the underlying strategic political goals of the AI industry as plainly as possible. Like most modern tech products, software that is sold as being able to set policies and make decisions will not really do what its creators claim. It'll operate in ignorance of anything that falls outside its data set, obviously, and any aspect of human experience that can't be flattened into legibility (ie as data) will simply not exist within its decision space. And thus it'll be used as a sock on the human hands making terrible, absurd, frequently inhumane decisions; yet it will also require constant supervision, creating new bureaucracies [3] to absorb its inefficiencies and flaws - to say nothing of the burdens borne by the actual human (and animal, and ecological) subjects of these policies. And it'll do this in service of its true purpose: to reduce (short term) costs and deflect accountability around any and all policy decisions. Your boss will want to do layoffs, so he'll use a program that is supposed to "carefully consider the massive amounts of data" [4] and "make a decision", and your name will come up. And you are several steps downstream from the executives who set the policies and goals of that software that was sold to your boss, you'll never meet them or see their faces but they will wield immense power over everyone you know.
The line we need to hold hardest is the one keeping this shit out of public institutions, eg government. Most of the AI industry's biggest investors make no secret of wanting a society where ordinary people have absolutely zero protections from the will of capital. They need government around mostly as a third-party scapegoat and shock absorber and blood mop for capital's harmful externalities. After decades of undermining and sabotaging the decision-making power of government and other public institutions, and draining their resources, what better coup de grâce than to put them into an irrevocably subservient role by capturing the mechanisms of policy itself? With the right framing, every decision made by a human who actually stands by it, eg an elected official who argues for a policy and the public servants who implement it, will be subject to the harshest worst-faith-possible criticism; whereas a decision made by a perfect (still in beta™) immortal machine intelligence will always have keepers with arguments close at hand for evading all accountability. Human governance will become just another underperforming employee, and surprise, "the algorithm" will offer as replacement a purportedly non-human mechanism with a rock-solid (trust us) "better than human" performance rating. And at that point, it'll probably be too late to protest.
So I think we need to speak very plainly about what's really going on here, and shine as bright a light as possible on the politics of the people pushing this shit, and the values they've encoded into these systems. Because I think the bloodiest, highest stakes battles are still to come.
[1] oddly, the article summary (visible on archive.is) is more tolerable: "Algorithms may increasingly help in making layoff decisions"
[2] to be clear, this is not something that actually happened during the Vietnam War, to my knowledge - I'm merely illustrating the essential absurdity of present + future systems.
[3] imagine health insurance companies recognizing, if they haven't already, generated claim-denial appeal letters as a threat to their profits, and countering with generated gunk in greater force and volume - assume capital will win any arms race it has itself initiated.
[4] it's telling how rhetorically effective pulling out increasingly big numbers has become in this public perception arms race - 175 billion, no wait, 100 trillion parameters!! 500TB of images! massive datasets of health records, sales reports, crime statistics... more! surveil more!