vectorpoem
@vectorpoem

Nathan Sorenson:

I refuse to say that an LLM "makes things up" or "states things with confidence" - I get that it's convenient shorthand, but these product managers want you to anthropomorphise their garbage products. It's our job to go out of our way to resist. We have to be strategically pedantic on this one. They are hastily rushed-out garbage products, not lil smol boys who are doing their best.

I have come to feel that this is a line worth holding. Even pieces like this one that do a reasonable job of pointing out why the AI industry's broader pitches are harmful - like giving companies ways to evade responsibility for firing decisions by "delegating" (here, a verb of deliberate ambiguity) them to a piece of software - still get a bullshit headline like "AI is starting to pick who gets laid off" [1], which offers the framing most beneficial to said industry: don't blame us, say the customer-execs, the AI told us to! Public resistance so often begins with language, and I love "strategic pedantry" as a name for the attitude we need here.

I believe the AI industry's hardest push will come once the specific hype cyclone it created around LLMs subsides (as the initial wild overpromises of their potential inevitably fray beyond credibility), and this push will be about policy as product. They will engage in unprecedented levels of "AGI theater" - because General Intelligence is almost impossible to define clearly on a philosophical level, they can always safely claim that the latest product is but a humble incremental step towards that forever-5-to-10-years-away bullshit, and credulous factions within the press will happily fill in the blanks - to frame software service offerings that purport to "make decisions": who to fire, who to arrest and convict, whose health insurance claims to deny, etc.

The problem with these is + will be that they're functionally, politically, morally indistinguishable from Nixon and Kissinger circa 1973 flipping a lucky coin in the Oval Office to decide which village to napalm next [2]. When any given coin flip proves to have been a catastrophically bad decision, it won't matter; "the algorithm" is the product, and the product is a perpetual work-in-progress, always getting better, always to be judged on its future potential rather than its current performance. When a decision-making apparatus is non-human and your system's POSIWID function ("the purpose of a system is what it does") is to extract and dehumanize and murder for profit and power, let's call that what it is: humans with power deciding to harm the world, with some extra steps.

A clear alternative to "AI" is to focus on the people present in the system. If a program is able to distinguish cats from dogs, don't talk about how a machine is learning to see. Instead talk about how people contributed examples in order to define the visual qualities distinguishing "cats" from "dogs" in a rigorous way for the first time. There's always a second way to conceive of any situation in which AI is purported. This matters, because the AI way of thinking can distract from the responsibility of humans.

-- from AI is an Ideology, Not a Technology, which I arrived at via Emily Bender's excellent talk Resisting dehumanization in the age of AI

"Policy as product" is my attempt to state the underlying strategic political goals of the AI industry as plainly as possible. Like most modern tech products, software that is sold as being able to set policies and make decisions will not really do what its creators claim. It'll operate in ignorance of anything that falls outside its data set, obviously, and any aspect of human experience that can't be flattened into legibility (ie as data) will simply not exist within its decision space. And thus it'll be used as a sock on the human hands making terrible, absurd, frequently inhumane decisions; yet it will also require constant supervision, creating new bureaucracies [3] to absorb its inefficiencies and flaws - to say nothing of the burdens borne by the actual human (and animal, and ecological) subjects of these policies. And it'll do this in service of its true purpose: to reduce (short term) costs and deflect accountability around any and all policy decisions. Your boss will want to do layoffs, so he'll use a program that is supposed to "carefully consider the massive amounts of data" [4] and "make a decision", and your name will come up. And you are several steps downstream from the executives who set the policies and goals of that software that was sold to your boss, you'll never meet them or see their faces but they will wield immense power over everyone you know.

The line we need to hold hardest is against letting this shit into public institutions, eg government. Most of the AI industry's biggest investors make no secret of wanting a society where ordinary people have absolutely zero protections from the will of capital. They need government around mostly as a third-party scapegoat and shock absorber and blood mop for capital's harmful externalities. After decades of undermining and sabotaging the decision-making power of government and other public institutions, and draining their resources, what better coup de grâce than to put them into an irrevocably subservient role by capturing the mechanisms of policy itself? With the right framing, every decision made by a human who actually stands by it, eg an elected official who argues for a policy and the public servants who implement it, will be subject to the harshest worst-faith-possible criticism; whereas a decision made by a perfect (still in beta™) immortal machine intelligence will always have keepers with arguments close at hand for evading all accountability. Human governance will become just another underperforming employee, and surprise, "the algorithm" will offer as replacement a purportedly non-human mechanism with a rock-solid (trust us) "better than human" performance rating. And at that point, it'll probably be too late to protest.

So I think we need to speak very plainly about what's really going on here, and shine as bright a light as possible on the politics of the people pushing this shit and the values they've encoded into these systems. Because I think the bloodiest, highest-stakes battles are still to come.


[1] oddly, the article summary (visible on archive.is) is more tolerable: "Algorithms may increasingly help in making layoff decisions"

[2] to be clear, this is not something that actually happened during the Vietnam War to my knowledge - I'm merely illustrating the essential absurdity of present + future systems.

[3] imagine health insurance companies recognizing generated claim-denial appeal letters as a threat to their profits, if they haven't already, and countering with generated gunk in greater force and volume - assume capital will win any arms race it has itself initiated.

[4] it's telling how rhetorically effective pulling out increasingly big numbers has become in this public perception arms race - 175 billion, no wait, 100 trillion parameters!! 500TB of images! massive datasets of health records, sales reports, crime statistics... more! surveil more!



in reply to @vectorpoem's post:

I have this theory that a major factor in how "AI" marketing works is exploiting the lay audience's instinct to fill in an excluded middle between the (actually true) "this doesn't work, and never will" and the (thinktank-funded scifi bullshit) "killer robots will replace humanity".

Then OpenAI can drop right in there and sound like the perfectly reasonable option by going "sure the tech isn't all there yet, but look how impressive this carefully selected demo is!" and even pretty intelligent people assume there must be SOMETHING there, especially if they've got capital ...

I get the impression that "AI" specifically is soon to go the way of blockchain, and metaverse, and NFTs, leaving a bad smell and some ruined lives in its wake. The core of the Silicon Valley business model is imbuing a bunch of nothing with a veneer of newness, of the never-before-precedented, and once the gimmick no longer sounds like nothing you've ever seen before, its utility evaporates and it's on to the next thing the rubes haven't realized sucks yet.

Because really, what you're talking about is the same bureaucratic obfuscation people in power have been using to deny responsibility for the things it's their nominal fucking job to be responsible for, going back, as you mention, many generations. Gesturing to all the many small actions that added up to that village getting incinerated and piously crying out "It wasn't me, it was the System" never required a fuckin' robot in the hustle; when serfs got massacred they'd shake their fists to heaven and blame the Tsar's nameless wicked advisors, who had misled their clearly virtuous sovereign. Anyone who's had to deal with healthcare in the US is already intimately familiar with the Algorithm deciding their tumor will go untreated, sorry, nothing we can do about it, and it didn't take some jumped-up Eliza clone to get there. The only thing LLMs are actually any good for, spam, isn't really novel in any meaningful sense either; it's something we'd know how to deal with if we really wanted to, you just can't do it on an atomized individual level except by dropping out of the system, and the people in charge of our social infrastructure don't wanna be bothered with maintenance.

What is newer and more specific to our present moment is the institutional decay that lets this shit run unchecked, with only the most snide and off-the-cuff rationalization. A Norfolk Southern spill into the commons every week with a shrug and a "you're welcome", because there will be no consequences; consequences are for individuals who don't have the moxie to just fuckin' move out of the toxic waste dump the System has installed in your home. And yeah, for a while the bots will be a part of that, another scam in the mix, and then they will go out of fashion, and the decay will remain.

like @secrets, I was struck by how "It'll operate in ignorance of anything that falls outside its data set, obviously, and any aspect of human experience that can't be flattened into legibility (ie as data) will simply not exist within its decision space." also describes the functioning of bureaucracy prior to the emergence of this obfuscated-responsibility technology.

one thing that occurred to me is that in the age of Twitter (the bird site is dead, long live the bird site), bureaucratic functionaries became more subject to call-outs (that they could actually hear from their lofty heights) for the outcomes of their systems, be it a startling lack of diversity in hiring or folks talking about how much (potential) grant money is wasted reviewing grant applications.

this is a real pain point experienced by people with the power to decide to spend huge amounts of money to 'modernize' processes.

this framing doesn't make the issue less bad or less urgent, but acknowledges the way that the struggle is continuous with struggles that have come before.

tbh the closest thing I can recall to Twitter callouts making any actual mark outside of Twitter is journalists cherry-picking anonymous comments to cite as proof of a silent majority that agrees with them, or of the universal idiocy of their opposition - which is a lot closer to just running a really inefficient ChatGPT prompt to get the text you want than anything that actually empowered the people doing the tweets.

I'm not saying Twitter complaints changed policy, I'm saying they were experienced as discomfort. all of the rabid decrying of "cancel culture", which even in successful instances largely left wealthy people still wealthy, is the clearest mark of this discomfort. all of the big tech diversity language and PR pieces are another.

it's not enough to wield power, they want to do so while not hearing about the repercussions.

on the theme of POSIWID, the lack of debuggability is the value of the model, compared to the simpler, inspectable models previously used for things like loan applications.
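
to sketch what I mean (a toy illustration - the feature names, weights, and threshold here are made up, not from any real underwriting system): an old-style linear scoring model lets you point at the exact factor that sank an application, which is precisely the property the newer opaque models are sold without.

# toy linear loan scorer: every denial traces to a specific, contestable factor
# (hypothetical weights/features, purely illustrative)
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score(applicant):
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explain_denial(applicant):
    # find the single largest negative contribution - the thing a
    # regulator, or the applicant, can actually argue with
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    worst = min(contributions, key=contributions.get)
    return f"denied: {worst} contributed {contributions[worst]:+.2f} to a score below {THRESHOLD}"

applicant = {"income": 1.2, "debt_ratio": 1.5, "years_employed": 0.5}
if score(applicant) < THRESHOLD:
    print(explain_denial(applicant))
# a billion-parameter model offers no analogous line to point at -
# the "reason" is the whole opaque weight blob, which is the point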