Writer, game developer, queer artist of failure. Half of @fpg: Future Proof Games.



My talk is titled, "Dada, Animal Crossing, and ChatGPT: Can We Ever Write Our Own Stories With the Master's Tools?" If all goes well, I'll be teaching an audience of mostly educators how ChatGPT works at a high level (it's a fancy Markov generator) and why they shouldn't use it (it serves hegemony).
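
For readers wondering what "fancy Markov generator" means: a plain Markov text generator just records which words follow each short run of words in its training text, then replays those statistics at random. Here's a minimal Python sketch (the corpus and function names are my own, purely for illustration). An LLM is doing something conceptually similar, but with learned weights over enormous contexts instead of a literal lookup table.

```python
import random
from collections import defaultdict

def build_markov_chain(text, order=2):
    """Map each run of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, length=12, seed=None):
    """Walk the chain, picking each next word at random from observed continuations."""
    rng = random.Random(seed)
    state = rng.choice(list(chain.keys()))
    output = list(state)
    for _ in range(length):
        options = chain.get(state)
        if not options:
            break  # dead end: this state never appeared mid-text
        output.append(rng.choice(options))
        state = tuple(output[-len(state):])
    return " ".join(output)

corpus = (
    "the cat sat on the mat and the cat saw the dog "
    "and the dog sat on the mat and the cat ran"
)
chain = build_markov_chain(corpus, order=2)
print(generate(chain, length=10, seed=1))
```

The output is locally plausible (every two-word window occurred in the training text) but globally incoherent, which is the toy version of the "looks fine region by region, doesn't hang together" failure mode.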

If you can't make it to the conference, I'll be sharing my slides afterward and am open to doing similar talks elsewhere!

You can get more info on the conference here.



Deep-learning images like those made by GANs, GPTs, and diffusion models show up unexpectedly, and they tend to have a certain "look." Skepticism of images you see is healthy, and I think it's good to pay attention so you're not tricked into thinking that a piece was done by a real human artist instead of extruded from the conceptual slurry of a generative AI. Deep-learning generative AI is primarily bad for labor and aesthetic reasons: it lets big corporations profit off of the labor of artists without compensation, and it looks generic and ugly[1]. But you (and I) are really bad at identifying AI by vibes.

I see plenty of people accusing real artists and graphic designers of using AI tools, which is both shitty for the artist and also makes the important conversations around AI less clear. An example that's sticking in my craw right now is the response to a recent episode of the comedy slideshow program Smartypants from the vocally pro-labor streaming service Dropout. The Dropout subreddit has multiple threads alleging that a series of mecha images from the episode are AI-generated[2].

We've learned various red flags that can identify AI art, most of which have to do with either details or logic. Due to how it works under the hood, AI is good at making an image that's laid out like a piece of artwork at thumbnail size, where any given region looks pretty good, but where the pieces don't fit together right. This is how you get fucked-up fingers and teeth, melting faces, and illegible text. AI's also bad at actually understanding how stuff works, which is why you'll see keyboards with too many keys, sightlines that don't match the events, or photo-real objects that would collapse under their own weight. Human artists also struggle with all of these things (see Rob Liefeld for a notorious example), but we tend to make different mistakes from AI.

Sometimes it's obvious that AI generated an image: it's a smooth, colorful, intricate image with inexplicable inconsistencies. But sometimes it's not clear if you're dealing with an AI image, an imperfect human artist who's bad at hands, a photorealistic but surreal human-made image, or a picture slapped together quickly by art staff because it'll only be on screen for a few seconds and it's funnier if it looks a little off.

We need to be skeptical about people using AI art to trick us or pass it off as human-made. But we can't fall into conspiracist logic. If an established artist or creative team that is otherwise trustworthy produces an image that triggers your AI radar, consider that your ability to spot generated images is just as imperfect as generative AIs' capability to make them. Give artists as much benefit of the doubt as they deserve. Big faceless corporations might prefer to have computers dispense slop rather than pay workers, but working artists and creative teams are unlikely to risk sullying their reputation by secretly using AI.


  1. The moral and aesthetic considerations are fuzzier for certain generative-AI-powered tools, like Photoshop's original content-aware fill. Right now, I'm writing about software that produces entire images from text descriptions or does a "style transfer" to make an existing image look as if it were produced using a different process than it was.

  2. None of the mecha images show any actual signs of being AI-generated, and basically all of them can be traced to demonstrably human-made stock images. There is one image of a Godzilla-like kaiju that is questionable; it's based on a stock image that is not labeled as AI-generated, but is from an artist who mostly does AI stuff. It's very convincing and was probably used in good faith by mistake.