I think I'm taking a risk writing this post, because unfortunately it's going to involve nuance. I want you to know up front, I'm probably on the same side of the issue as you. I think generative AI in the 2020s is destroying the public sphere and we need to do a lot of different things to stop it.
If you end up reading this post and thinking that I'm making a slippery-slope argument that says generative AI is inevitable and you should give up and let it take over our culture, please back up and read this. Holy shit no I am not saying anything like that. I do not think we should give up. I think we should target it, say what's bad about it, and stop it.
The thing that's been bouncing around my mind is that, if you take such a hard-line position that you want to "ban AI" without being able to say what you mean by that, you are not going to be effective enough at opposing it. I understand taking a nuance-free position as a tactic, especially if your position is that "well I don't understand this and I shouldn't have to understand this, I just know it's bad", but I don't think that works as the only tactic.
Here's the outline of what I mean:
- The definition of AI is constantly changing
- Many AI techniques in the past have been normalized to the point where it sounds really silly to call them "AI"
- If today's generative AI follows that trend and gets normalized, that would be a problem (even though singling out today's AI sounds like recency bias)
- Something changed in the 2020s that made generative AI more dangerous and insidious, and if we pin down what changed, we will be able to oppose it better. There were warning signs before, but the call to action is now.
I don't want this to be a cohost meta post. I know things about AI and its history. I don't know things about how to run a web site. Even though the cohost meta is why I've been thinking about it, please don't ask me to turn this into a strong recommendation about how to run a web site.
Some background on me: I wrote an often-quoted post in 2017 warning that the default state of AI training, the thing you get when you implement it exactly like the textbook says, is to be racist. Before that, many people thought "whoops, AI turned racist" was just a wacky thing that happened to Microsoft Tay and wouldn't happen to them. I also wrote a follow-up post about how Google's "Perspective API", which they promoted as the default way to moderate a web site instead of having humans do it, was exactly the kind of racist-by-default implementation I described. I used to run an AI system called ConceptNet. I don't think it changed the world, and my proudest thing about ConceptNet is the silly bots and games that people made with it. I don't work in the AI industry anymore.
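(If you want to see what "racist by default" means mechanically, here is a minimal sketch of the textbook recipe, not the code from my original post: train a sentiment classifier on a small positive/negative word list over pretrained word embeddings. It assumes you've downloaded GloVe vectors to glove.6B.300d.txt, and the word lists are toy stand-ins for a real sentiment lexicon.)

```python
# Minimal sketch of textbook sentiment analysis over pretrained embeddings.
# Assumes glove.6B.300d.txt is downloaded; the word lists are toy examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def load_glove(path):
    vectors = {}
    with open(path, encoding='utf-8') as file:
        for line in file:
            parts = line.rstrip().split(' ')
            vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vectors

glove = load_glove('glove.6B.300d.txt')
positive = ['good', 'great', 'excellent', 'happy', 'love', 'wonderful']
negative = ['bad', 'terrible', 'awful', 'sad', 'hate', 'horrible']

X = np.array([glove[w] for w in positive + negative])
y = [1] * len(positive) + [0] * len(negative)
model = LogisticRegression(max_iter=1000).fit(X, y)

def sentiment(text):
    words = [w for w in text.lower().split() if w in glove]
    return model.predict_proba([np.mean([glove[w] for w in words], axis=0)])[0, 1]

# The trouble: the embeddings carry the prejudices of the text they were
# trained on, so sentences that differ only by a name or a cuisine
# come out with different sentiment scores.
print(sentiment("let's go get italian food"))
print(sentiment("let's go get mexican food"))
```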
Things that used to be AI
Anyway. Perhaps you want AI to go away. You want there to be no more AI-generated posts and you want to shun, or possibly ban from a community, anyone who posts them. Does this mean you want to disallow:
- Commentary on a chess game with the strategic value of moves annotated
- Photos taken with a phone camera
- Posts written with assistive speech-to-text
- Google Maps directions
- Posts written using spell check
- Posts typed or swiped on a phone's on-screen keyboard
I don't think you mean those. And now there's a chance you're upset, because what the hell, those clearly aren't AI, why would I even bring them up unless I'm trying to obfuscate an important issue?
They used to be AI!
Phone cameras use AI image recognition to make photos sharper than the lens can actually support, and to make human faces prettier. The iPhone popularized this first. A blogger who wrote about the statistics of online dating noticed that iPhone users got laid more. Android phones had to make a big push to catch up, around the time of the Pixel 3.
In grad school, one of the pitches my research group made for the technology we were working on was that you'd be able to do better T9 texting on your phone, because the AI system could understand which words you were likely to mean, not just which words are in the dictionary. It's amusingly short-sighted in retrospect that we were just talking about T9.
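(To make that concrete, here's a toy version of the idea, nothing like our actual system: a T9 decoder that ranks every word a digit sequence could spell by how frequent the word is. The little frequency table is made up for illustration.)

```python
# Toy T9 decoder: rank the words a digit sequence could spell by frequency.
# The tiny frequency table is made up; a real system would use corpus counts.
T9_KEYS = {'a': '2', 'b': '2', 'c': '2', 'd': '3', 'e': '3', 'f': '3',
           'g': '4', 'h': '4', 'i': '4', 'j': '5', 'k': '5', 'l': '5',
           'm': '6', 'n': '6', 'o': '6', 'p': '7', 'q': '7', 'r': '7',
           's': '7', 't': '8', 'u': '8', 'v': '8', 'w': '9', 'x': '9',
           'y': '9', 'z': '9'}

WORD_FREQ = {'home': 120, 'good': 90, 'gone': 40, 'hoof': 2, 'of': 500}

def digits(word):
    return ''.join(T9_KEYS[letter] for letter in word)

def candidates(keys):
    matches = [w for w in WORD_FREQ if digits(w) == keys]
    return sorted(matches, key=WORD_FREQ.get, reverse=True)

# 'home', 'good', 'gone' and 'hoof' are all typed as 4663; frequency
# (and, in a smarter system, context) decides which one you meant.
print(candidates('4663'))  # ['home', 'good', 'gone', 'hoof']
```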
Giving directions on a map was the first major assignment in my undergraduate AI class in the '00s.
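(For anyone who never took that class: the assignment is classically solved with graph search. Here's a minimal sketch of Dijkstra's algorithm over an invented road map; the places and distances are made up.)

```python
# Dijkstra's algorithm over a toy road graph: the classic "give directions
# on a map" assignment. The places and distances are invented.
import heapq

ROADS = {
    'home':    [('library', 2), ('cafe', 5)],
    'library': [('home', 2), ('park', 1)],
    'cafe':    [('home', 5), ('park', 1)],
    'park':    [('library', 1), ('cafe', 1)],
}

def directions(start, goal):
    # Priority queue of (distance so far, current place, route taken).
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        distance, place, route = heapq.heappop(queue)
        if place == goal:
            return distance, route
        if place in visited:
            continue
        visited.add(place)
        for neighbor, step in ROADS[place]:
            heapq.heappush(queue, (distance + step, neighbor, route + [neighbor]))
    return None

print(directions('home', 'park'))  # (3, ['home', 'library', 'park'])
```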
Spell check was considered AI when it was first developed. It appears as an example in the 2nd and 3rd editions of the most definitive AI textbook, Russell and Norvig.
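(The textbook version is small enough to sketch: generate every string within one edit of the typo and pick the most frequent real word, which is essentially how Peter Norvig's famous toy spelling corrector works. The tiny dictionary below stands in for real corpus counts.)

```python
# Textbook spell check: candidates within one edit, ranked by word frequency.
# The tiny dictionary is a stand-in for real corpus counts.
LETTERS = 'abcdefghijklmnopqrstuvwxyz'
WORD_FREQ = {'the': 1000, 'they': 200, 'then': 150, 'than': 120, 'hen': 5}

def one_edit(word):
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in LETTERS]
    inserts = [a + c + b for a, b in splits for c in LETTERS]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    if word in WORD_FREQ:
        return word
    candidates = [w for w in one_edit(word) if w in WORD_FREQ]
    return max(candidates, key=WORD_FREQ.get, default=word)

print(correct('thn'))  # 'the': the most frequent word within one edit
```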
Playing chess well was absolutely AI, and I don't just mean in the non-overlapping magisteria of "game AI". In the '70s, people believed that a computer that could win at chess would have proved its intelligence. It was a topic of mainstream AI discourse in the '90s when Deep Blue beat Kasparov (not through any new techniques, just by computing more things faster). The current chess engines include techniques that came from generative AI, even though the most they generate is strategic lines of chess moves.
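(And "computing more things faster" refers to a specific, decades-old technique: minimax search over a game tree. Here's a toy sketch, with an explicit little tree standing in for actual chess positions.)

```python
# Toy minimax: the decades-old search idea behind Deep Blue, shrunk to an
# explicit game tree. Integer leaves are position scores for the maximizer.
def minimax(node, maximizing):
    if isinstance(node, int):  # a leaf: how good this position is for me
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies: I pick a branch, then my opponent picks the leaf that's worst
# for me. Deep Blue did exactly this, just much deeper and much faster.
tree = [[3, 12], [2, 8], [14, 1]]
print(minimax(tree, maximizing=True))  # 3: the best outcome I can guarantee
```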
And fairly recently, the thing people thought of when they thought of AI was a pattern-matching voice assistant like Siri or Alexa. (Amazingly, these things that used to kind of work have been invaded by modern generative AI to the point where they don't work anymore; they just fake it the way any generative model does.)
The easy position to take would be to say "well, everyone who called those things AI was wrong. ChatGPT and Stable Diffusion are AI, and those older things weren't." I'll caution you: if you take that position, you are taking the same position as AI-promoting techbros. They don't think old AI was AI either! They think they invented AI in the 2020s! "None of that previous junk was AI, we're finally making real AI" is part of their bullshit sales pitch!
By denying that anything that happened in the 2010s or earlier is "real AI", the AI promoters can dismiss all the previous concerns like the ones I wrote about. Moreover, they can dismiss Timnit Gebru, Emily Bender and Margaret Mitchell. They were just writing about silly computer toys in the bad old days, they say, and nothing they say has any relevance to "real AI", which happens to have all been created after three influential women were run out of the industry for raising the alarm.
"Stop quibbling over definitions, I know it when I see it"
Another position you might take on harmful generative AI is the old canard about pornography: "I know it when I see it."
In 5-10 years, there is a chance that you won't know it when you see it. I'm not saying this is inevitable, and in fact it's dystopian as fuck, but I'm saying it's possible enough to be worried about. Our cultural memory about technology such as AI is so short.
Warning: dystopian speculation
In 2030 the generative AI discourse may still be going on, and you'll say: look, the problem is clearly with this generative AI app that generates VR porn of your crush, not with mere text or image generation. Generating text and images isn't AI, everyone does that, how could you even write anything without it.
In 2035 it may be: look, the problem is with the robots that brutalize protesters and striking workers; clearly the problem isn't the VR porn app that everyone has, why would you waste your breath on a silly old thing like that.
If AI stops being "AI" the moment it's half a decade old, then all the concerns with current AI can be dismissed and swept under the rug just by the passage of time. Nonspecific calls to ban generative AI can be completely neutered by making a technique so common that it's not called AI anymore. You do not want to let them run out the clock, and you really don't want to reward making the bad thing common.
This dystopia is not inevitable. I think defining the problem is part of stopping it.
2020s AI is not just the natural progression of technology
I'm glad you've made it here, because this is the point I'm risking all this nuance to make. There is a noticeable change in the generative AI technologies that are currently ruining the world, and if we can describe what changed, we can describe how to oppose them.
Here are things that changed:
- Training data was collected with the clear non-consent of its subjects. In 2010, if you heard an AI was being trained to understand language based on something you wrote, your reaction might be "huh, neat", or maybe "what?". In 2024, your reaction is probably "FUCK NO, I did not agree to this, make it stop", and there's a good reason for it, because we've seen what they intend to do with that trained AI.
- Users do not want the technology. When AI was a phone camera that made you sexier, people wanted it. When it let people accomplish tasks with their voice, they wanted it. For the most part, people do not want the generative AI slop that's being shoved into everything. Most people do not want to read authoritative-sounding lies instead of search results, see art with no artist, listen to pop music sludge made by nobody, or talk to a fake phone agent that is not capable of understanding anything. Companies keep putting AI into products even though it makes customers distrust the product. They are fighting a battle to normalize this decade's generative AI that they are not certain to win.
(As an exception, a sizable minority of people like and trust ChatGPT. I have no idea why. I hope we can pop that bubble.)
- They are fighting against a resurgent labor movement. Bosses are so determined to put AI into everything, at all costs, because they need to undermine the power that workers have. They are scared and they need to be more scared.
- They are fighting against the joys of being human. The message of AI has largely shifted from "this will improve your life and give you more time for leisure and creativity" to "you'll have to use this no matter what, and your leisure and creativity are obsolete". And we can hear them. Unlike in the prior history of AI, ordinary people can see how much it sucks, how they want to put our interests in a blender and make a gray sludge out of them, how they want to make a machine draw for them because they won't even put in the effort to learn to draw. We are now all Hayao Miyazaki, who was shown an early generative AI and said "I strongly feel this is an insult to life itself".
- The theft and plagiarism are obvious now. Large-scale scraping of training data can no longer be written off as "oh, it's just an AI experiment, it doesn't matter." We know it does matter. This has put us, unusually, on the same side as massive copyright holders like Getty Images and music labels. A strong interpretation of copyright is an uncomfortable tool to fight AI, but it's a tool we can use.
- The energy use is unsustainable. These large generative AIs don't run on normal computers using normal amounts of power. They're unsustainable for the environment, and financially unsustainable for the companies running them. OpenAI loses money on every interaction, even though they don't have to pay for their externalities. They shouldn't be able to do this forever.
What I am advocating for
I'm sure many of these suggestions are going to be obvious. I still want to suggest them. It's important to have targeted ways to resist generative AI.
- Unionize. This is the best tool we have. Capital owners want to replace creative workers with AI slop using the work those workers did, and the workers who are able to stop this are the ones who are unionized. A line we can draw against the AI boosters is: "You can't use our work to do this. Go do the work yourself." And they won't do the work themselves because they don't know how to do work.
- Make modern generative AI unwelcome in your communities, for these reasons. Not just because it's "AI", which has always been a squishy word for "things computers can't quite do yet", but because it's fucking fake, stolen and dehumanizing. Messing around and making computers do new things isn't the problem. The huge capitalist pressure to make all available computing power do one thing that most people don't want is the problem.
- Support and celebrate the imperfections of being human. Every time you correct someone's grammar, there's a risk that they turn to ChatGPT to write for them next time. Support weird unprofessional writing. Support art that is poorly drawn in a human way. Support imperfect music made by real people with a tune in their heart.
- Demand sources. Demand credit. Even if someone labels something as generative AI, they ought to be challenged on what their sources are, whose words are being used without credit, whose art is being collaged together. Of course they'll give stupid answers and credit products as if they were people, but stay firm and don't accept those answers. "Stable Diffusion didn't make the thing, Stable Diffusion is the machine you used to steal the thing, so who is it stolen from? Who are the recording artists who made the sounds that Suno is making? Oh, you don't know, is it because you plagiarized it?" Do not let up on this. The problem is not something as banal as copyright, it's that they are taking credit for something they didn't make, or assigning that credit to their imaginary corporate friend.
- Make the companies pay for environmental harm. Just like cryptocurrency before them, the large generative AI companies are benefiting from unnaturally cheap electrical power, and political action can change this. Environmental action is slow and frustrating, but it's at least one avenue of attack. They should be paying for their externalities.
- Destroy Google. Okay, I don't have a specific plan for how to do this, but it would sure help.
- Oh and of course mercilessly mock the people who distract from the real problems by making up a sci-fi AGI god. But we've been able to mock them for a long time, since long before generative AI was the present danger it is now.
Keep AI weird?
As a minor point, I think it can be good to celebrate weird, low-power, niche experiments that cannot imitate a human and are not trying to, like in the '10s when we used things like Deep Dream and GPT-2 to generate uncanny absurdism and posted curated examples of it. "AI" could be taken in a different direction. That kind of stuff is still possible and a single home computer can do it. Turn down the model size, turn up the weird. Put some effort into doing something different so we can skip the racism and sexism and stuff this time.
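(If you want to try it, here's a sketch using the Hugging Face transformers library, which can run the original small GPT-2 on a single home computer. Raising the sampling temperature is one knob for turning up the weird; the prompt is just an example.)

```python
# Small-model absurdism at home: sample from the original GPT-2 with high
# temperature, then curate by hand. Requires `pip install transformers`.
from transformers import pipeline

generate = pipeline('text-generation', model='gpt2')  # the small 124M model
outputs = generate(
    'The rules of the haunted bowling league are as follows:',
    max_new_tokens=80,
    do_sample=True,          # sample instead of taking the likeliest token
    temperature=1.4,         # higher temperature, weirder output
    num_return_sequences=3,
)
for result in outputs:
    print(result['generated_text'], '\n---')
```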
Even better is the bespoke templated stuff whose model is entirely transparent, like "Cheap Bots Done Quick", Inspirobot, and friends.
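(The entire "model" of a bot like that fits in a screenful of text. Here's a sketch of Tracery-style grammar expansion, the technique Cheap Bots Done Quick was built on, with a grammar I made up.)

```python
# Tracery-style template expansion: the whole transparent "model" behind
# bots like the ones on Cheap Bots Done Quick. The grammar is made up.
import random

GRAMMAR = {
    'origin': ['The #creature# of #place# demands #offering#.'],
    'creature': ['moth', 'bureaucrat', 'haunted vending machine'],
    'place': ['the old mall', 'Zone 5', 'your heart'],
    'offering': ['three coupons', 'a sincere apology', 'more #offering#'],
}

def expand(symbol='origin'):
    # Pick a template for this symbol, then expand any #symbol# inside it.
    text = random.choice(GRAMMAR[symbol])
    while '#' in text:
        start = text.index('#')
        end = text.index('#', start + 1)
        text = text[:start] + expand(text[start + 1:end]) + text[end + 1:]
    return text

print(expand())  # e.g. "The bureaucrat of Zone 5 demands three coupons."
```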
Maybe the time of absurdist computer-generated memes is over anyway. Maybe generative AI made them terminally uncool. If so, I won't miss them too much. But they shouldn't be the target, I think.
