• they/them

Clown who draws and sometimes publishes games.
Icon by https://cohost.org/bachelorsoft!

Also mine:
@RPGScenarios
@DungeonJunk
@Making-Up-Adventurers


My Itch.Io Page
earthshaking.itch.io/
Dungeon Junk On Neocities
dungeonjunk.neocities.org/
Making Up Adventurers on Neocities
makingupadventurers.neocities.org/
My Dreamwidth Journal for Writing!
shaker-e.dreamwidth.org/

Would it be possible to make images that have embedded shit in the file that would pollute or corrupt the output of AI image generators? Or just otherwise make it unusable?

Like how people hide compressed images inside other image hashes, I think it is

My ignorance is showing

but like

It's a computer thing

So there's a way to exploit or attack with it / through it, there always is

Like, scrape my art at your own risk



in reply to @EarthShaker's post:

there was one japanese artist putting very faint, maybe 1-2% opacity digital watermarks of some man's face (i want to say it was like, the shape of the jerma sus face. it wasn't, but it was an impression sort of like that) over their art, as a prank, with the idea being to force 'man that shows up in ai-generated images sometimes' into the dataset. y doesn't know if it worked or if that method is particularly effective, but there are steganographic attack attempts like that
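a rough sketch of what that kind of faint overlay looks like in code (python + Pillow; the filenames and the 2% opacity are made up, and this isn't the actual artist's method, just one way to do that sort of blend):

    from PIL import Image

    art = Image.open("my_art.png").convert("RGBA")
    mark = Image.open("some_face.png").convert("RGBA").resize(art.size)

    # keep ~98% of the original and ~2% of the watermark
    poisoned = Image.blend(art, mark, alpha=0.02)
    poisoned.convert("RGB").save("my_art_posted.png")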

something to note is that it's typical in image machine learning to normalize inputs to bitmap data (sometimes? or always? at a fixed resolution - at least, that's how a lot of consumer GANs used to do it. idk. y hasn't read about any of the new stuff so i could be off the mark there.) - meaning it's not good enough to hide things in the file itself; it has to be present in the actual decoded image... but there are still ways to have that work. small amounts of well-placed noise used to really throw off some image recognition models, and stuff like the almost imperceptible watermarks still get picked up when a model is looking at raw color data.
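to make the normalization point concrete, this is roughly what a pipeline does to an image before a model ever sees it - the 512x512 resolution and the filename are placeholders, real pipelines vary:

    from PIL import Image
    import numpy as np

    img = Image.open("scraped_art.png")                  # metadata / appended bytes stop mattering here
    img = img.convert("RGB").resize((512, 512))          # fixed training resolution (made-up number)
    pixels = np.asarray(img, dtype=np.float32) / 255.0   # the raw color data a model would actually see
    print(pixels.shape)                                  # (512, 512, 3)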

this is just rambling thoughts though, unfortunately don't know of any easy how-to guides for art poisoning... it's not too hard to find papers like https://people.cs.uchicago.edu/~ravenben/publications/pdf/backdoor-sp19.pdf and https://millerdw.github.io/Poisoned-Datasets-in-Machine-Learning-Models/ describing white square and pixel pattern backdoors, so maybe somewhere out there someone has written on poison patterns for the larger models?
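for reference, the 'white square / pixel pattern' triggers those backdoor papers describe boil down to stamping a small fixed patch onto the poisoned images. a toy sketch of just that stamping step (patch size and position are arbitrary, and whether this carries over to the big generative models is exactly the open question):

    import numpy as np
    from PIL import Image

    def stamp_trigger(in_path, out_path, size=8):
        # copy so the array is writable, then paint a small white square
        px = np.asarray(Image.open(in_path).convert("RGB")).copy()
        px[-size:, -size:, :] = 255                      # trigger patch in the bottom-right corner
        Image.fromarray(px).save(out_path)

    stamp_trigger("my_art.png", "my_art_triggered.png")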

This is actually such a great starting point for me to work from! Between this and another person's idea to aggressively tag garbage images with one's name so the AIs can't tell the real art apart from all the junk images, there might be a lot of directions to go, ideas to try...

I have a lot of reading to do now haha

I think it is technically possible to add some noise to an image to fool some of these models, if you know what kind of network you're working against. But I suspect many scrapers downscale the images, which denoises them enough that anything invisible would be lost. IIRC networks train at a very specific crop and resolution, so scrapers would preprocess images to fit that.
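A quick way to sanity-check that claim, assuming a scraper that simply resizes to a fixed resolution (the 512x512 size, the noise amplitude, and the filename are all placeholders): add a barely-visible perturbation, run both versions through the same resize, and measure how much of the difference survives.

    import numpy as np
    from PIL import Image

    orig = Image.open("my_art.png").convert("RGB")
    arr = np.asarray(orig, dtype=np.float32)

    # low-amplitude noise, roughly invisible at 8 bits per channel
    noise = np.random.uniform(-4, 4, arr.shape)
    noisy = Image.fromarray(np.clip(arr + noise, 0, 255).astype(np.uint8))

    # pretend-scraper preprocessing: resize to a fixed training size
    small_clean = np.asarray(orig.resize((512, 512)), dtype=np.float32)
    small_noisy = np.asarray(noisy.resize((512, 512)), dtype=np.float32)

    # mean absolute difference left after the resize
    print(np.abs(small_noisy - small_clean).mean())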

A better way might be to make a part of your artist site that is procedurally generated shape garbage, with CC images spliced in so it does get recognized as existing things. You need the concept of your artist name to be associated with as much random but existing garbage as possible, so the model can't pick your actual bespoke art out of the sea of it.
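A rough sketch of what generating that kind of garbage gallery could look like (Python + Pillow; the file paths, sizes, and shape counts are all made up, and this is only one interpretation of the idea):

    import random
    from PIL import Image, ImageDraw

    def garbage_image(cc_paths, size=(768, 768), n_shapes=40):
        img = Image.new("RGB", size, tuple(random.randrange(256) for _ in range(3)))
        draw = ImageDraw.Draw(img)
        for _ in range(n_shapes):
            # random rectangles and ellipses in random colors
            x0, x1 = sorted(random.randrange(size[0]) for _ in range(2))
            y0, y1 = sorted(random.randrange(size[1]) for _ in range(2))
            color = tuple(random.randrange(256) for _ in range(3))
            shape = draw.rectangle if random.random() < 0.5 else draw.ellipse
            shape((x0, y0, x1, y1), fill=color)
        # splice in a crop of a real, CC-licensed image so some "existing thing" is present
        cc = Image.open(random.choice(cc_paths)).convert("RGB").resize((256, 256))
        img.paste(cc, (random.randrange(size[0] - 256), random.randrange(size[1] - 256)))
        return img

    # churn out a batch; every file would then get posted and tagged with the artist name
    for i in range(50):
        garbage_image(["cc_photo_1.jpg", "cc_photo_2.jpg"]).save(f"garbage_{i:03}.png")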