mtrc
@mtrc

While it pains me to admit it, a lot of the discussion about AI - on all sides - is pretty flawed. By that I mean that, yes, the people claiming it will automate the entirety of human cultural output three months from now are obviously deluded and/or selling snake oil. Equally, many of the criticisms of AI are off base - the sharpest claims being that AI systems only exist to steal content, that they cannot create anything new, and so forth. These criticisms are not exactly wrong, but they are exaggerated or imprecise, and that makes it harder to have meaningful discussions and offer critiques, because it becomes too easy to dismiss the whole line of questioning as "AI doomerism", AKA my new least favourite phrase.


The reason we've ended up here is quite simple: the public have not been given a straight answer about artificial intelligence since the day AlphaGo squared off against Lee Sedol in 2016. Every news article, every interview, every press release and paper has been spun, misinterpreted, exaggerated or played down, and it's led us to where we are today, with the public surrounded by a technology they do not understand and cannot control. The problem is only getting worse: adoption and deployment of some of the most popular AI products are accelerating, while education and outreach about these technologies continue to lag behind.

This process was already in place back in 2019, when GPT-2 was released. But in the last six to twelve months I've seen it accelerate for a few key reasons. The first is that a major company broke the seal on integrating its technology directly into a paid product. Once this happened, it was possible to talk about AI tools in terms of market share, and that provides a metric to compete on. If you've been spending investor money for years and operating at a loss, and someone else is starting to charge money for their product or integrating it into tools that already have a huge reach, like Photoshop and PowerPoint, you have a lot fewer excuses for not doing the same. So the major tech companies all rushed to move with their own products, whether or not they were ready (spoiler alert: none of them are ready).

The other reason is that major governmental organisations are starting to talk about regulation. Regulation is potentially the biggest threat to big tech's AI rush so far. Even localised, clunky regulations like GDPR can have enormous impacts on how sectors operate and seriously disrupt plans. One really important part of Big Tech's strategy is to lean into the regulation skid. Once it became obvious that regulation was inevitable, the best thing major companies could do was become part of the conversation around regulation in the hope that they might end up involved in drafting, monitoring and shaping it. That's why we've seen performances from people like Sam Altman recently, who is positioning himself as a tech guy who "gets it" and is keen on regulation.

But another really important response to the oncoming threat of regulation is simply to deploy so much AI, so fast, and get it so integrated into every aspect of our lives that it is almost impossible to disentangle or undo. This is a general strategy that big tech has used for a long time - see, for example, Google's deals with the NHS, where the aim was to provide a service which in turn gave the company privileged access to systems and avenues to spread its influence throughout the UK. Palantir spent a lot of time and money positioning itself very carefully around the EU and the UK in a similar fashion. So this isn't a new strategy, but the recent acceleration feels like a response to imminent legislation. If laws restricting these systems are passed tomorrow, they will most likely apply first to new systems rather than to ones already deployed. Deploying as much as you can today gives you a head start on everyone starting tomorrow, and if society becomes dependent on your tools, it becomes all the more difficult for a government to turn around and punish you for noncompliance.

This is reminiscent of the approach that companies like Uber and Deliveroo took in establishing themselves. By burning money and other resources, they were able to secure a market share that made no sense for a business that actually had to make a profit (to date, to the best of my knowledge, Uber has never made an annual net profit since it began operating). If you aggressively grow that market share until your competitors start going bankrupt, it doesn't matter what happens next: you're the only game in town. While a system like ChatGPT can't force out a 'competitor' in quite the same way, it can become so culturally ingrained that its presence in the world is hard to undo. In one recent survey, around 90% of students said they had used ChatGPT to help them study. Systems like it are being integrated into search, Gmail and Photoshop. Given another six months, what would it really matter if we legislated against these systems? How on earth would anyone untangle them from the rest of our digital lives?

This is why the current strategy for companies like Google, Microsoft and Facebook is not to charge anyone serious money for this technology. They are not looking to package it up for expensive licensing deals yet, or really trying to cash in. They don't much care that these systems don't really work, are misunderstood, or are hard to control. All they want is for this technology to become such an integral part of your day-to-day life that living without it seems impossible (and for that to happen with their product specifically, not their competitors'). That is also one of the reasons they have tried so hard to control the messaging around this technology for the last eight years. Widespread public understanding of this technology would, in my opinion, be pretty catastrophic for a lot of these companies. The confused state of the discourse about AI is a symptom of a very specific strategy to make it hard to make sense of what is happening. The people who are afraid get scorned. The people who are excited act as cheerleaders. Almost no-one is identifying the real issues, and as a result politicians and lawmakers face more noise and distraction when trying to make sense of what is coming next.

The UK's proposed regulations are really an absence of regulation, while the EU's appear to have been watered down considerably (and there's some debate over whether they are even still relevant, given how fast the landscape has changed). My prediction is that the US's won't be any more useful than the UK's, as the government advisory boards seem fairly compromised by big tech and the US is very anxious about losing international competitiveness. It seems unlikely that a major GDPR-like law is going to sweep in and shake things up for us this time.

So what can we do instead? This doesn't mean the end of the world but it bodes poorly for the next couple of years. And it's a recurring reminder that, again, no-one is coming to save us. There are no grown-ups in charge. If we want to change the future, we have to do it ourselves. We can talk about that in a future post, though. πŸ’œ



in reply to @mtrc's post:

thanks for this thoughtful post, mike. imo your analysis is dead-on and i really appreciate how clearly you've articulated it here β€” definitely gonna be sharing this with the people in my life who are less mired in the tech world, but still want to understand what's going on with AI.

also β€” glad to see you here on cohost! :eggbug: