Hönig told Ars that breaking Glaze was "simple." His team found that "low-effort and 'off-the-shelf' techniques"—such as image upscaling, "using a different finetuning script" when training AI on new data, or "adding Gaussian noise to the images before training"—"are sufficient to create robust mimicry methods that significantly degrade existing protections."
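To give a feel for how "off-the-shelf" these techniques are, here is a minimal sketch of what upscaling an image or adding Gaussian noise before training looks like in Python with Pillow and NumPy. This is not the researchers' code, and the file names and parameter values are hypothetical; it only illustrates the kind of simple preprocessing the quote describes.

```python
# Illustrative sketch only -- not the researchers' code.
# Shows simple "off-the-shelf" preprocessing: Gaussian noise and upscaling.
import numpy as np
from PIL import Image

def add_gaussian_noise(img: Image.Image, sigma: float = 8.0) -> Image.Image:
    """Add zero-mean Gaussian noise to every pixel (sigma is an assumed value)."""
    arr = np.asarray(img).astype(np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

def upscale(img: Image.Image, factor: int = 2) -> Image.Image:
    """Naive bicubic upscaling; even simple resampling can disturb
    a pixel-level adversarial perturbation."""
    w, h = img.size
    return img.resize((w * factor, h * factor), Image.BICUBIC)

# Hypothetical usage on a protected ("glazed") image:
# img = Image.open("glazed_artwork.png")
# preprocessed = upscale(add_gaussian_noise(img))
# preprocessed.save("preprocessed.png")
```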
Sometimes, these attack techniques must be combined, but Hönig's team warned that a motivated, well-resourced art forger might try a variety of methods to break protections like Glaze. Hönig said that thieves could also just download glazed art and wait for new techniques to come along, then quietly break protections while leaving no way for the artist to intervene, even if an attack is widely known. This is why his team discourages uploading any art you want protected online.
Ultimately, Hönig's team's attack works by simply removing the adversarial noise that Glaze adds to images, making it once again possible to train an AI model on the art. They described four methods of attack that they claim worked to remove mimicry protections provided by popular tools, including Glaze, Mist, and Anti-DreamBooth. Three were considered "more accessible" because they don't require technical expertise. The fourth was more complex, leveraging algorithms to detect protections and purify the image so that AI can train on it.
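For context on what "purifying" an image can mean in practice, the sketch below shows a common, generic baseline for washing out small adversarial perturbations: lossy JPEG re-encoding followed by a light blur. It is not the paper's detection-and-purification algorithm, just an assumed illustration of the general idea.

```python
# Generic purification baseline, NOT the paper's method: strip small
# high-frequency perturbations via lossy re-encoding plus a light blur.
import io
from PIL import Image, ImageFilter

def purify(img: Image.Image, jpeg_quality: int = 75, blur_radius: float = 1.0) -> Image.Image:
    """Re-encode as JPEG, then apply a mild Gaussian blur (assumed parameters)."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    return recompressed.filter(ImageFilter.GaussianBlur(blur_radius))

# Hypothetical usage:
# cleaned = purify(Image.open("glazed_artwork.png"))
# cleaned.save("purified.png")
```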
Don't turn this thread into a debate about AI art or those tools. I'm just sharing because I know people here have used these tools on their work, but sadly, like all "protection," it was only a matter of time until flaws were found, so I think it's useful to be aware of them.
