pendell

Current Hyperfixation: Wizard of Oz

  • He/Him

I use outdated technology just for fun, listen to crappy music, and watch a lot of horror movies. Expect posts about These Things. I talk a lot.

Check tags like Star Trek Archive and Media Piracy to find things I share for others.



Pixl
@Pixl

i was mistaken and now my notifications regret it


twilight-sparkle
@twilight-sparkle
This page's posts are visible only to users who are logged in.

vogon
@vogon

people who create AI models don't actually want to train on other AI art because it creates a feedback loop

a small note to expand upon this: yesterday I found out that Stable Diffusion (another AI art generator) silently watermarks all of its output, invisibly to the human eye, so that it can detect images it created and prune them from its future training input. it's likely that other model developers have at least considered doing this as well, and one would assume that at some point they'd settle on a standardized way of doing it, since it's in all of their best interests to be able to recognize all of their output.


woob
@woob

Is there anything to stop people from making a generic filter that artists can use to apply that very same watermark to their own real art?


vogon
@vogon

in one sense, no, nothing at all stops you from adding the watermark to the image. the technique is completely AI-free and just depends on a little bit of math that's easily comprehensible to anyone with experience in computer graphics.

in another sense, there are three basic methods I can think of for culture-jamming using this watermark:

  1. adding the watermark to images which were human-generated, in order to opt yourself out of training data and prevent it from doing harm directly to you (at least in the sense of infringing your intellectual property; it can still exert an opportunity cost on you by denying you work);
  2. removing the watermark from images which were AI-generated, in order to worsen the training data to the point where people stop using AI generators;
  3. creating a tool that renders the watermark human-readable, to e.g. enable people to name and shame people presenting art as human-generated when it was AI-generated.

both methods 1 and 2 suffer from the problem that datasets predating the invention of these strategies already exist, so at best you're going to fight the status quo to a standstill.

a unique problem with method 1 is that its impact is limited unless it's so easy and popular that a critical mass of digital artists apply it to all of their output.

a unique problem with method 3 is that if anyone is doing method 1 or method 2, you're going to get false positives and false negatives, and even one notable instance of people being able to repudiate the judgment of that tool is going to discredit it substantially.

unfortunately, the hard part is not the computer science.
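to make the "little bit of math" concrete, here's a toy sketch of a frequency-domain watermark: nudge a band of mid-frequency DCT coefficients by a key-derived ±1 pattern, then detect by correlating that band against the same pattern. to be clear, this is illustrative only — the band, strength, and key below are made up, and Stable Diffusion's actual scheme differs in its details.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis; forward transform is d @ x, inverse is d.T @ X
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    d = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    d[0] /= np.sqrt(2.0)
    return d

def band_mask(n, lo=8, hi=16):
    # mid-frequency band: low frequencies are visibly changed, high ones are fragile
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return (i + j >= lo) & (i + j < hi)

def embed(img, key=1234, strength=2.0):
    # add a key-derived +/-1 pattern to the band, invisible at small strength
    n = img.shape[0]
    d = dct_matrix(n)
    coef = d @ img @ d.T
    mask = band_mask(n)
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=int(mask.sum()))
    coef[mask] += strength * pattern
    return d.T @ coef @ d

def detect(img, key=1234):
    # correlation score: ~strength if the mark is present, ~0 otherwise
    n = img.shape[0]
    d = dct_matrix(n)
    coef = d @ img @ d.T
    mask = band_mask(n)
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=int(mask.sum()))
    return float(coef[mask] @ pattern) / int(mask.sum())

# smooth synthetic test image (pure low-frequency content, so the band starts empty)
n = 64
low = np.cos(np.pi * (2 * np.arange(n) + 1) / (2 * n))
img = 128.0 + 40.0 * np.outer(low, low)
marked = embed(img)
print(detect(img), detect(marked))  # score near 0 for clean, near strength for marked
```

a real scheme has to survive JPEG compression, resizing, and the like, which is why practical watermarks spread the pattern redundantly instead of this single fragile band — but the core math really is just this small.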




in reply to @twilight-sparkle's post:

i kinda had a feeling the stuff spreading was fake but this is a really nice breakdown of some of the process behind these generators that really puts my wonders to rest, tyvm 🙇‍♀️🙇‍♀️🙇‍♀️

in reply to @vogon's post:

yeah, the watermark is embedded in the frequency domain so you can pretty easily remove the watermark the same way it was added, and with only minor impacts to the output image
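sketching that out: since the mark lives in a known frequency band, a stripper doesn't even need the key — it can zero the whole band and accept a tiny quality hit. again a toy illustration with made-up parameters (the real scheme's band and strength differ):

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis; forward transform is d @ x, inverse is d.T @ X
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    d = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    d[0] /= np.sqrt(2.0)
    return d

def band_mask(n, lo=8, hi=16):
    # the mid-frequency band the watermark is assumed to live in
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return (i + j >= lo) & (i + j < hi)

def embed(img, key=1234, strength=2.0):
    # add a key-derived +/-1 pattern to the band
    n = img.shape[0]
    d = dct_matrix(n)
    coef = d @ img @ d.T
    mask = band_mask(n)
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=int(mask.sum()))
    coef[mask] += strength * pattern
    return d.T @ coef @ d

def detect(img, key=1234):
    # correlation score: ~strength if marked, ~0 otherwise
    n = img.shape[0]
    d = dct_matrix(n)
    coef = d @ img @ d.T
    mask = band_mask(n)
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=int(mask.sum()))
    return float(coef[mask] @ pattern) / int(mask.sum())

def strip(img):
    # keyless removal: zero the entire band, watermark and all
    n = img.shape[0]
    d = dct_matrix(n)
    coef = d @ img @ d.T
    coef[band_mask(n)] = 0.0
    return d.T @ coef @ d

n = 64
low = np.cos(np.pi * (2 * np.arange(n) + 1) / (2 * n))
img = 128.0 + 40.0 * np.outer(low, low)
marked = embed(img)
cleaned = strip(marked)
# detection collapses, while the pixel change from stripping stays tiny
rms = float(np.sqrt(np.mean((cleaned - marked) ** 2)))
print(detect(marked), detect(cleaned), rms)
```

the rms pixel change here works out to strength × √(band size / pixel count), i.e. well under one grey level out of 255 — which is the "only minor impacts to the output image" part.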
