mtrc
@mtrc

What is the purpose of automation? I've been working with and researching generative algorithms for almost fifteen years now, and I've used all sorts of techniques in all sorts of ways. I've also gotten to watch as procedural generation changed from a relatively rarely-seen technique to a core part of several game genres and a major point of research and development for some of the biggest game developers in the world. Yesterday, Unity announced several new tools powered by some of the latest trends in AI - you'll have heard lots of different names for them, but let's call them 'generative AI'. I'm skeptical about a lot of it, and I shared some of the reasons why on Twitter. I wanted to expand on one point in particular in this blog post: what is automation actually for?


Who Automates Who?

Automation in a lot of industries is controversial, because it's imposed from above onto the people who do the work. In the UK, a common reason for transport union strikes is the threat of automation replacing, for example, staff selling tickets or helping people in stations. This week the UK government claimed it intends to replace receptionists in the health service with AI systems. This kind of automation is explicitly applied as a business solution that saves money by replacing a person with a machine, either in part or in whole. In his book Forces of Production, David Noble points out that this often isn't even done for cost reasons - it can be a way of more tightly controlling employees (thanks to Aleena Chia for that reference).

In the games industry, automation is more complicated. It is imposed from the top down as a way of making more money, but this typically doesn't come through replacing humans. Because the games industry is in a constant arms race to create bigger, flashier games, automation is instead used to scale up production. If my staff normally have the time to design five levels, and I take away 20% of the tasks involved in making a level, they might have time to make a sixth level. That makes my game bigger than the previous game.

However, automation is also used from the other side, to achieve creative visions. For example, the action roguelite Spelunky uses procedural generation to create new levels every time the player plays the game. Designer Derek Yu built a level generation system to make the game unpredictable and improvisational. It wasn't applied as a business decision; it was a creative decision, an artistic one. Classifying this situation exactly is complicated because Derek was a solo developer - he was both the line worker and the manager. But Derek's own writing about Spelunky makes it clear why he chose to use automated techniques to create content for his game.
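
To make the idea concrete, here's a minimal sketch of the kind of grid-based, guaranteed-route generation Derek has written about: random-walk a path of rooms from the top of a small grid to the bottom, then fill the remaining cells with optional rooms. This is my own illustration in Python, not Derek's actual code - the room labels, movement weights and grid size are all invented.

```python
import random

WIDTH, HEIGHT = 4, 4
SIDE, PATH, DROP, LAND = "side", "path", "drop", "land"

def generate_layout(rng=random):
    # Fill the grid with optional "side" rooms, then carve a route of
    # connected rooms from a random cell on the top row to the bottom row.
    grid = [[SIDE] * WIDTH for _ in range(HEIGHT)]
    x, y = rng.randrange(WIDTH), 0
    grid[y][x] = PATH
    while y < HEIGHT - 1:
        step = rng.choice(["left", "left", "right", "right", "down"])
        if step == "left" and x > 0:
            x -= 1
        elif step == "right" and x < WIDTH - 1:
            x += 1
        else:
            # Chose "down" (or was blocked sideways): this room needs a
            # hole in its floor, and the room below an open ceiling. A
            # fuller version would combine types when a cell needs both.
            grid[y][x] = DROP
            y += 1
            grid[y][x] = LAND
            continue
        if grid[y][x] == SIDE:
            grid[y][x] = PATH
    return grid

for row in generate_layout():
    print(" ".join(f"{room:>4}" for room in row))
```

Every layout this produces is different, but every layout is finishable - and that guarantee is the designed, authored part of the system.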

Does Automation Work?

In the field of game AI research, where I work, it used to be very common to justify research into procedural generation by explaining that it saves money and time. In their book on Procedural Content Generation, Shaker, Togelius and Nelson claim that "a game development company that could replace some of the artists and designers with algorithms would have a competitive advantage, as games could be produced faster and cheaper while preserving quality." In our earlier example of automation in railway stations and hospitals, we can see how this argument is made directly. Before, we were paying people to do a task. Now, we have replaced it with a machine. The argument may not actually hold water, but the logic is sound.

The problem with making the same claim about the games industry is that we don't actually have any evidence for it. This is not to say it isn't true! It's just that in our example above about adding an extra level to a game, it's not obvious whether we can say this has saved money or time. We made a bigger game for the same amount of money and person-hours, and that sort of sounds like a good thing, although it's really only good if you benefit from the profit a game makes. If you're working in a salaried job then your job probably didn't change all that much - and in fact it may have gotten worse, because unstable automation technology can fail and create extra work for you.

Which brings us to the second issue related to automation: even if we accept that automation, in principle, saves us time, we don't actually know if it works that way in practice. I'm fairly certain that autocorrect on my phone has cost me more time than it has saved. Sometimes it helpfully fixes a typo, and sometimes it changes correctly-spelled technical terms, place names or non-English words into complete garbage, which I then have to delete and retype, sometimes multiple times if my thumb misses the tiny cross that tells it not to do that. That doesn't mean autocorrect is bad, and it doesn't even mean it's not saving me time. But if people were telling me this technology was going to revolutionise my business, my industry and my working life, I might expect some data to back it up.

New Experiences

What about Spelunky, though? We might not be able to claim that procedural generation saves the time of people working on big games, but Spelunky seems impossible for a solo developer to make without technological assistance. This is a great point, and it's absolutely correct - the game couldn't have been made otherwise. In Unity's big announcement post yesterday, one of the motivations they gave for integrating new AI systems with their game engine was that it would enable "new experiences".

This is the best argument for using generative technology, and it's why I personally have always been obsessed with generative algorithms. Spelunky is magical to me, fifteen years after launch, because its design is not possible without a generative algorithm. It uses procedural generation as a tool, to enable something unique, instead of replacing something that was already possible. The generative communities I've been a part of are interested in exploring these ideas, and seeing what other new experiences we can make using generative techniques.

However, it's worth noting that despite companies like Unity talking a lot about new experiences, mostly what they show are old ones. Most of the examples of generative AI in use today replace things people already do - you can make a 3D model move without an animator, you can make concept art without an artist, you can write code without a programmer. These are not new experiences; these are economic arguments for replacing people. The reason is probably that the people doing the sales pitch aren't designers or artists - they're technologists and salespeople, so they're selling the technology using the only ideas they have, which is normal when an idea is new. Real innovation might come later, when other people get their hands on this technology.

It also might not. It sort of makes the sales pitch a little circular - Unity is certain this will deliver new experiences, but doesn't have any examples or evidence of why we should believe it. Hype and optimism are infectious, and they drive a lot of press, investment and enthusiasm for new technology. I obviously hope we see lots of new and exciting ideas in the future! But I'm not going to assume it's going to happen, either.

The Problem Is

What about the midpoint between the Spelunky argument and the everyday game development argument? Someone pointed out, correctly, that big AAA open world games like Watch Dogs 2 would have been much harder to make without automated tools. Isn't this evidence that it saved time and money?

It is pretty good evidence. The scale of games we see from major development houses today isn't possible without enormous investment in tools, huge teams that sprawl across the whole globe and, often, a lot of crunch and worker exploitation. All of this has an impact on the games we see released. Remove any part of this equation, including automation through tools, and these games aren't possible to make profitably any more. I hinted at it earlier, but automation is sold through two simultaneous sales pitches: to the people who enjoy profit, and to the people who generate it.

You might be both of these people. If you work on your own games, or in a small company or co-operative with your friends, then you tangibly benefit from any change to your ability to make games. Using automation to make bigger games - games that reach a new audience, attract better investment, or simply achieve more complex personal creative goals - is a good thing (assuming the technology you choose to use isn't hurting someone else, but that's another blog post). But a lot of people are not both of these things. If you work for a game developer as a junior programmer or artist or designer, your job is your job. Your company's bottom line doesn't mean much to you unless it dips low enough that they need to fire you.

One of the most egregious claims I've ever seen about procedural generation research is that it could be used as a solution to crunch in the games industry. But our example above of a AAA developer making an open world game is the perfect illustration of why this will never happen. As automation reduces the work someone has to do to complete their tasks, one of two things will inevitably happen: either they will be assigned more work (a bigger game) or paid less (a smaller workforce). Neither of these two situations results in a better life for the developer. Crunch doesn't happen because people decided to make a game and, magically, it was exactly 20% bigger than they had the budget and planning for every single time. Crunch happens because it is the way to legally extract the most work from a group of people and still produce a game that you can sell. Crunch is a failure of management, yes, but it's also a failure of incentives, processes and laws.

Endings

I really, really love generative algorithms. I'm personally not that excited about generative machine learning systems like Midjourney and ChatGPT, but I totally get why other people are. It seems magical and cool, and the joy people get from making silly or beautiful things with these systems mimics the same joy I had ten years ago writing code to randomly generate games. I am critical of a lot of things in the AI space today - companies, opportunists, grifters, regulators, a total lack of ethical awareness - but I understand the optimism and excitement about a lot of this technology, and I don't want to suggest it's wrong to feel excited or that none of this is real. I don't say this enough, so let me please double underline this here. There is a future in which generative technology is built and deployed well, to the benefit of everyone, but it is not a future we are building towards right now.

I grew up in the beginnings of STEM mania, when science and technology were held above everything else, and a big part of that was the idea that science and engineering were truth. That was never really true, and today the world of AI is driven almost solely by hype and potential; facts do not enter into it. We are faced every day with enormous claims that have absolutely no substance behind them, and these claims often come from people we think we can trust - governments, community leaders, 'non-profits' and friendly, shiny technology companies. It is absolutely imperative that we become more critical of these claims, and ask for clearer explanations of what we are being sold, and where we are being led.

What is the purpose of automation? It depends on who you are. It might be to save money and time, it might be to create beautiful new experiences, or there might be no real reward at all. When someone tells you a technology is going to bring about a particular benefit, ask yourself: are they talking to you? Or someone else?



in reply to @mtrc's post:

one thing about procedural generation that is almost never addressed, for obvious reasons, is that "infinite" procedurally generated worlds are often... very small. and samey. after not very long, it becomes recognizably the same exact experience, just a bunch of times. for roguelikes this can be interesting because the whole point is refining your ability to adapt to breakthrough challenges. for exploration games, or open worlds? it actually kind of sucks! you can usually very quickly suss out the bounds of what you might encounter, after which there will be no new surprises.

Ahh, thank you for making this point! I have a blog post I want to write next on the concept of "infinity" - I totally agree with you. Infinity is an odd thing. It never quite behaves how we imagine it will. :)

Thanks Mike, I enjoyed reading this.

There's a big issue I have with generative AI, aside from business ethics and politics and labour rights and IP law and all of that, and it's one that I don't hear people talk about much.

Humans sometimes have to do certain things to survive comfortably, and sometimes we make technology to assist with that. Like safety equipment in a dangerous factory, or an electric tin opener for someone with limited manual dexterity, or a screen-reader, a wheelchair, a smoke alarm, video conferencing, etc etc. All things invented to make life better or safer for some or all people. We invented factories and meetings and tin cans because we felt we needed to, and then we had to invent labour-saving or assistive technology to save ourselves from them.

Now, art is different. (By which I mean, any and all expressive media created by a human.) Humans don't make art because we have to so that we can survive comfortably. We do it because it's part of the human condition to create and express and share art. It's also a job, for a lot of people, but it's aspirational. We celebrate the art we love, whether that's Mozart or Marvel, or Dark Souls or Donkey Kong, a shitpost or a crappy poem. We revere people who we see as skilled creators. And we create art to enjoy ourselves, to express our thoughts and feelings, for the joy of practice and of sharing. It's in everyone to some extent, whether they know it or not. It's good.

So like, why do we need to be saved from that? Why do we, as a society, need technology to save us from the awful, dangerous, drudgery of... making something cool? I want our machine overlords to take care of the factories and databases and tin cans, and leave us humans free to be creative. But what they're trying to sell us is the other way round. I guess I object to that on a philosophical level.

This all applies more to things like GANs and LLMs than it does to proc-gen level creation and suchlike, but even so, I like an authored game that's telling me something. I'm glad things like Spelunky exist as good examples of designed, controlled automation, but I still feel, on a fundamental level, that game design is art, and art is a thing humans do.

AI gives us a lot of tools to expand what's possible in creative and other areas and we'll keep getting more and more as the technology develops. We just need to get rid of tying survival to labor so that the "replacing people with machines" thing doesn't destroy society. Full unemployment should be the goal, not the problem!

Thanks for the thoughtful post!

I think another important difference between procgen in older games, vs current generative machine learning systems, is comprehension and intentionality.

My mental image of procgen in old roguelikes/etc – I'm assuming Spelunky falls into a similar category – is that game devs have some particular sense of what they want their game to look like or do, and based on that, they build up algorithms and constraints that lay out their specific ideas and goals in code, and put together input data for those algorithms to operate on. A big complex program can always surprise us with unexpected consequences of our instructions, but I'd argue there's a real sense in which the devs, individually or collectively, understand what the procgen is doing and why.

[image: my header image, a procedurally generated circuitboard-like pattern]

I see the crafting of these procgen algorithms as an art form in and of itself. For instance, my header image on cohost was generated by 128 lines of Asymptote code that I wrote myself; I thought about what sort of grid I wanted those circuitboard traces to follow, decided in what ways I wanted them to change direction with what probabilities, wrote code to keep them from colliding with each other, etc. Obviously Asymptote itself is a big complicated program that's doing a lot of things I don't necessarily fully understand as it executes my code and outputs SVG files, and the graphics libraries that render those SVG files as pixels on a screen are also doing all sorts of weird and complicated stuff. But above the level of abstraction where I can just think about points and lines and circles and paths and colors on a coordinate plane, every decision my code is making about what lines to draw where is a direct reflection of my artistic intent.
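
For the curious, the shape of that algorithm looks roughly like this - not my actual Asymptote code, but a stripped-down Python analogue with made-up constants. The point is that every knob in it is an authorial choice:

```python
import random

GRID = 40               # size of the underlying grid
TURN_PROB = 0.25        # chance a trace changes direction at each step
DIRECTIONS = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def grow_trace(start, occupied, rng, max_len=30):
    # Walk one trace across the grid, turning at random and stopping
    # rather than colliding with anything already drawn.
    path = [start]
    occupied.add(start)
    direction = rng.choice(DIRECTIONS)
    for _ in range(max_len):
        if rng.random() < TURN_PROB:
            direction = rng.choice(DIRECTIONS)
        x, y = path[-1][0] + direction[0], path[-1][1] + direction[1]
        if not (0 <= x < GRID and 0 <= y < GRID) or (x, y) in occupied:
            break
        path.append((x, y))
        occupied.add((x, y))
    return path

def circuitboard(n_traces=60, seed=None):
    rng = random.Random(seed)
    occupied, traces = set(), []
    for _ in range(n_traces):
        start = (rng.randrange(GRID), rng.randrange(GRID))
        if start not in occupied:
            traces.append(grow_trace(start, occupied, rng))
    return traces  # each trace is a list of grid points; draw as polylines
```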

I've never actually used any of the machine-learning-based generative models that are getting hyped up today, but my sense is that using these models is much less of a comprehensible and intentional process. The models are trained on gigantic datasets that creators didn't choose, don't need to think about, and can't even understand or describe except in very broad abstract terms, like "every single comment on Reddit with a score of at least +3". Based on that input data, the algorithm generates a massive set of weights and connections for a neural network, which then generates output data in response to prompts. At best, I'd accept that the process for choosing those weights and connections is a direct reflection of ML researchers' ideas and intent (although the same can't really be said for the massive corpus of input data that process is working from); but the choices the resulting network makes in generating output data aren't at all meaningful or comprehensible to these tools' users. I'm not gonna say generative ML systems are invalid as artistic tools (although I do think that the large-scale profit-driven use of these systems to displace human artists is unconscionable), but I really do think it's substantively different from older procgen algorithms in important ways, and I'd be extremely hesitant to lump the two together.

This was a really good read, but something that I kept thinking was that there's an equivalence being drawn between "Generative AI" and "Procedural Generation" that I'm not 100% sure should be there (with a small disclaimer that I'm coming from the Data Science perspective here, and less a Game Design one).

Generative AI - tools like ChatGPT, DALL-E, and so on - works differently from Procedural Generation on a fundamental level: Generative AI aggregates a massive data set to generate new things, whereas Procedural Generation techniques are more often hand-written molds from which new items can be created.

The example I keep coming back to when thinking about this for myself is Minecraft's world generation. Minecraft uses a specific set of hand-tuned parameters to create the landscapes that make up the game (i.e., Procedural Generation). But way back during its early development, Mojang didn't analyze geographical structures on Earth, compile large amounts of geological data, and generate Minecraft worlds in their image (i.e., Generative AI).
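
To make that concrete, here's a toy sketch of the "hand-written mold" style in Python: terrain heights built from layered value noise, vaguely in the spirit of (and far simpler than) Minecraft's real generator. Every constant below is a made-up parameter of the kind a designer tunes by hand, and no dataset is involved anywhere:

```python
import math, random

def lattice_value(ix, seed):
    # Deterministic pseudo-random value in [0, 1) at each integer point.
    return random.Random(hash((ix, seed))).random()

def smooth_noise(x, seed):
    # Linearly interpolate between the values at the two surrounding
    # integers, giving a continuous 1D noise function.
    ix = math.floor(x)
    t = x - ix
    return lattice_value(ix, seed) * (1 - t) + lattice_value(ix + 1, seed) * t

def terrain_height(x, seed=0):
    # Sum several octaves of noise: low frequencies make the hills,
    # high frequencies add roughness. Every number here is a knob you
    # turn by hand until the terrain "looks right".
    height = 64.0  # hand-picked base ground level
    for octave, (freq, amp) in enumerate([(0.01, 24), (0.05, 8), (0.2, 2)]):
        height += (smooth_noise(x * freq, seed + octave) - 0.5) * 2 * amp
    return int(height)

column_heights = [terrain_height(x, seed=42) for x in range(128)]
```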

This difference matters, I think, because the ways these techniques are used have Very Different Material Impacts. The concerns that people bring up with respect to Generative AI often just don't apply to Procedural Generation. So when we're critiquing the role of AI and automation in the world, being specific about the processes we're criticizing, and why, is important. I don't mean any of this to take away from the points you're making, but drawing a line of separation between these two techniques still feels useful.