What is the purpose of automation? I've been working with and researching generative algorithms for almost fifteen years now, and I've used all sorts of techniques in all sorts of ways. I've also gotten to watch as procedural generation changed from a relatively rarely-seen technique to a core part of several game genres and a major point of research and development for some of the biggest game developers in the world. Yesterday, Unity announced several new tools powered by some of the latest trends in AI - you'll have heard lots of different names for them, but let's call them 'generative AI'. I'm skeptical about a lot of it, and I shared some of the reasons why on Twitter. I wanted to expand on one point in particular in this blog post: what is automation actually for?

Who Automates Who?
Automation in a lot of industries is controversial, because it's imposed from above onto the people who do the work. In the UK, a common reason for transport union strikes is the threat of automation replacing, for example, staff selling tickets or helping people in stations. This week the UK government claimed it intends to replace receptionists in the health service with AI systems. This kind of automation is explicitly applied as a business solution that saves money by replacing a person with a machine, either in part or in whole. In his book Forces of Production, author David Noble points out that this often isn't even about cost - it can be a way of more tightly controlling employees (thanks to Aleena Chia for that reference).
In the games industry, automation is more complicated. It is imposed from the top down as a way of making more money, but this typically doesn't come through replacing humans. Because the games industry is in a constant arms race to create bigger, flashier games, automation is instead used to scale up production. If my staff normally have the time to design five levels, and I take away 20% of the tasks involved in making a level, they might have time to make a sixth level. That makes my game bigger than the previous game.
However, automation is also used from the other side, to achieve creative visions. For example, the action roguelite Spelunky uses procedural generation to create new levels every time the player plays the game. Designer Derek Yu built a level generation system to make the game unpredictable and improvisational. It wasn't applied as a business decision; it was a creative decision, an artistic one. Classifying this situation exactly is complicated because Derek was a solo developer - he was both the line worker and the manager. But Derek's own writing about Spelunky makes it clear why he chose to use automated techniques to create content for his game.
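Spelunky's real generator is more sophisticated than this - it stitches together hand-authored room templates along a guaranteed solution path - but the core idea can be sketched in a few lines. The function below is an illustrative toy, not Derek Yu's actual algorithm; the 4x4 grid, the room symbols and the random-walk rule are my assumptions:

```python
import random

def generate_layout(width=4, height=4, seed=None):
    """Random-walk a guaranteed path of rooms from the top row to the
    bottom row. Rooms on the path are marked 'P'; everything else is a
    filler room '.' that the player never needs to pass through."""
    rng = random.Random(seed)
    grid = [["." for _ in range(width)] for _ in range(height)]
    x, y = rng.randrange(width), 0  # entrance somewhere in the top row
    grid[y][x] = "P"
    while y < height - 1:
        move = rng.choice(["left", "right", "down"])
        if move == "left" and x > 0:
            x -= 1
        elif move == "right" and x < width - 1:
            x += 1
        elif move == "down":
            y += 1
        grid[y][x] = "P"  # every visited room joins the solution path
    return grid

# Each seed produces a different layout, but every layout is completable.
for row in generate_layout(seed=42):
    print(" ".join(row))
```

The detail worth noticing is the guarantee: the walk always reaches the bottom row, so the level is always solvable no matter what the dice say. That constraint is what lets randomness feel improvisational rather than broken.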

Does Automation Work?
In the field of game AI research, where I work, it used to be very common to justify research into procedural generation by explaining that it saves money and time. In their book on Procedural Content Generation, Shaker, Togelius and Nelson claim that "a game development company that could replace some of the artists and designers with algorithms would have a competitive advantage, as games could be produced faster and cheaper while preserving quality." In our earlier example of automation in railway stations and hospitals, we can see how this argument is made directly. Before, we were paying people to do a task. Now, we have replaced it with a machine. The argument may not actually hold water, but the logic is sound.
The problem with making the same claim about the games industry is that we don't actually have any evidence for it. This is not to say it isn't true! It's just that in our example above about adding an extra level to a game, it's not obvious whether we can say this has saved money or time. We made a bigger game for the same amount of money and person-hours, and that sort of sounds like a good thing, although it's really only good if you benefit from the profit a game makes. If you're working in a salaried job then your job probably didn't change all that much - and in fact it may have gotten worse, because unstable automation technology can fail and create extra work for you.
Which brings us to the second issue related to automation: even if we accept that automation, in principle, saves us time, we don't actually know if it works that way in practice. I'm fairly certain that autocorrect on my phone has cost me more time than it has saved. Sometimes it helpfully fixes a typo, and sometimes it changes correctly-spelled technical terms, place names or non-English words into complete garbage, which I then have to delete and retype, sometimes multiple times if my thumb misses the tiny cross that tells it not to do that. That doesn't mean autocorrect is bad, and it doesn't even mean it's not saving me time. But if people were telling me this technology was going to revolutionise my business, my industry and my working life, I might expect some data to back it up.

New Experiences
What about Spelunky, though? We might not be able to claim that procedural generation saves the time of people working on big games, but Spelunky seems impossible to make as a solo developer without technological assistance. This is a really great point, and it's absolutely correct - games like this couldn't have been made otherwise. In Unity's big announcement post yesterday, one of the motivations they gave for integrating new AI systems with their game engine was that it would enable "new experiences".
This is the best argument for using generative technology, and it's why I personally have always been obsessed with generative algorithms. Spelunky is magical to me, fifteen years after launch, because its design is not possible without a generative algorithm. It uses procedural generation as a tool, to enable something unique, instead of replacing something that was already possible. The generative communities I've been a part of are interested in exploring these ideas, and seeing what other new experiences we can make using generative techniques.
However, it's worth noting that despite companies like Unity talking a lot about new experiences, mostly what they show are old ones. Most of the examples of generative AI being used today are replacing things people already do - you can make a 3D model move without an animator, you can make concept art without an artist, you can write code without a programmer. These are not new experiences; these are economic arguments for replacing people. The likely reason is that the people doing the sales pitch aren't really designers or artists - they're technologists and salespeople, so they sell the technology using the only ideas they have, which is normal when an idea is new. Real innovation might come later, when other people get their hands on this technology.
It also might not. This makes the sales pitch a little circular - Unity is certain this will deliver new experiences, but doesn't offer any examples or evidence of why we should believe it. Hype and optimism are infectious, and they drive a lot of press, investment and enthusiasm for new technology. I obviously hope we see lots of new and exciting ideas in the future! But I'm not going to assume it's going to happen, either.

The Problem Is
What about the midpoint between the Spelunky argument and the everyday game development argument? Someone pointed out, correctly, that big AAA open world games like Watch Dogs 2 would have been much harder to make without automated tools. Isn't this evidence that it saved time and money?
It is pretty good evidence. The scale of games we see from major development houses today isn't possible without enormous investment in tools, huge teams that sprawl across the whole globe and, often, a lot of crunch and worker exploitation. All of this has an impact on the games we see released. Remove any part of this equation, including automation through tools, and these games aren't possible to make profitably any more. I hinted at it earlier, but automation is sold through two simultaneous sales pitches: to the people who enjoy profit, and to the people who generate it.
You might be both of these people. If you work on your own games, or in a small company or co-operative with your friends, then you tangibly benefit from any change to your ability to make games. Using automation to make bigger games - games that perhaps reach a new audience, attract better investment, or simply achieve more complex personal creative goals - is a good thing (assuming the technology you choose to use isn't hurting someone else, but that's another blog post). But a lot of people are not both of these things. If you work for a game developer as a junior programmer, artist or designer, your job is your job. Your company's bottom line doesn't mean much to you unless it dips low enough that they need to fire you.
One of the most egregious claims I've ever seen about procedural generation research is that it could be used as a solution to crunch in the games industry. But our example above of a AAA developer making an open world game is the perfect illustration of why this will never happen. As automation reduces the work someone has to do to complete their tasks, one of two things will inevitably happen: either they will be assigned more work (a bigger game) or the same work will be spread across fewer people (a smaller workforce). Neither of these situations results in a better life for the developer. Crunch doesn't happen because people decided to make a game and, magically, it was exactly 20% bigger than they had the budget and planning for every single time. Crunch happens because it is the way to legally extract the most work from a group of people and still produce a game that you can sell. Crunch is a failure of management, yes, but it's also a failure of incentives, processes and laws.

Endings
I really, really love generative algorithms. I'm personally not that excited about generative machine learning systems like Midjourney and ChatGPT, but I totally get why other people are. It seems magical and cool, and the joy people get from making silly or beautiful things with these systems mirrors the joy I had ten years ago writing code to randomly generate games. I am critical of a lot of things in the AI space today - companies, opportunists, grifters, regulators, a total lack of ethical awareness - but I understand the optimism and excitement about a lot of this technology, and I don't want to suggest it's wrong to feel excited or that none of this is real. I don't say this enough, so let me please double underline it here. There is a future in which generative technology is built and deployed well, to the benefit of everyone, but it is not a future we are building towards right now.
I grew up in the beginnings of STEM mania, where science and technology were held above everything, and the idea that science and engineering were truth was a big part of that. It was never really true, but today the world of AI is driven almost solely by hype and potential; facts do not enter into it. We are faced every day with enormous claims that have absolutely no substance behind them, and these claims often come from people we think we can trust - governments, community leaders, 'non-profits' and friendly, shiny technology companies. It is absolutely imperative that we become more critical of these claims, and ask for clearer explanations of what we are being sold, and where we are being led.
What is the purpose of automation? It depends on who you are. It might be to save money and time, it might be to create beautiful new experiences, or there might be no real reward at all. When someone tells you a technology is going to bring about a particular benefit, ask yourself: are they talking to you? Or someone else?

