from Alex Hern's tech newsletter:
We’re at the crossroads of two very different AI futures. In one, the companies that invest billions in training and improving these models act as gatekeepers, creaming off a portion of the economic activity they enable. If you want to build a business on top of ChatGPT, for instance, you can – for a price. It’s not extortionate, a mere $2 for every 700,000 words processed. But it’s easy to see how that could one day result in OpenAI being paid a tiny sliver of a cent for every single word typed into a computer.
You might think that no company would give up such an advantage, but there’s a weakness to that world: it’s an unstable one. Being a gatekeeper only works while there is a fence around your product, and it only takes one company deciding (willingly or not) to make something almost as good available for free to blow a hole in that fence for good.
The other world is one where the AI models that define the next decade of the technology sector are available for anyone to build on top of. In that world, some of the benefit still accrues to their developers, who are in a position to sell their expertise and services, while some more gets creamed off by the infrastructure providers. But with fewer gatekeepers in play, the economic benefits of the upheaval are spread much further.
There is, of course, a downside. Gatekeepers don’t just extract a toll – they also keep guard. OpenAI’s API fees aren’t a pure profit centre, because the company has committed to ensuring its tools are used responsibly. It says it will do the work required to ensure spammers and hackers are kicked off promptly, and has the ability to impose restrictions on ChatGPT that aren’t purely part of the model itself – to filter queries and responses, for instance.
No such limits exist for Stable Diffusion, nor will they for the pirate instances of LLaMA spinning up around the world this week. In the world of image generation, that has so far meant little more than a flood of AI-generated porn that the sanitised world of Dall-E won’t produce. But it won’t be long, I think, before we see the value of those guardrails in practice. And then it might not just be Meta trying to jam the genie back in the bottle.
This is excellently put, imo, and gets to the biggest question I have around the future of AI. And I think that... I don't trust private companies to guard against abuse. They're terrible at it! And they're more concerned with weird apocalyptic "alignment risk" fantasies than everyday harms. So given that... fuck a platform, fuck structuring a marketplace so you get a little rake any time something happens, fuck, I guess I am saying, profiting due to your use of capital rather than of labour.
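And about that rake: it’s worth seeing just how small the per-word toll already is. A quick back-of-the-envelope check, using nothing but the numbers Hern quotes:

```python
# Hern's quoted rate: $2 for every 700,000 words processed.
price_usd = 2.00
words = 700_000

cents_per_word = price_usd * 100 / words
print(f"{cents_per_word:.5f} cents per word")  # ~0.00029 cents per word
```

A sliver of a cent indeed. But multiply it by every word typed into a computer and it’s a hell of a rake.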
Anyway. That's why I'm glad that Stable Diffusion exists, despite the harms it's also causing, and why I'm glad for this LLaMA leak. Here's to software you can run on your own computer! Here's to making a weird tool built on a weird tool someone else has made.
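And in case “run on your own computer” sounds abstract, here’s roughly what it looks like once the leaked LLaMA weights have been converted into the Hugging Face format (transformers added LLaMA support in v4.28). The local path is hypothetical; this is a sketch, not a turnkey recipe:

```python
# A minimal sketch: assumes the LLaMA-7B weights have already been
# converted to the Hugging Face format (transformers >= 4.28).
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "./llama-7b-hf"  # hypothetical: wherever your converted weights live
tokenizer = LlamaTokenizer.from_pretrained(model_path)
# ~28 GB of RAM in fp32; pass torch_dtype=torch.float16 to halve that
model = LlamaForCausalLM.from_pretrained(model_path)

prompt = "Here's to software you can run on your own computer:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

No API key, no per-word toll, no filter sitting between you and the weights.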