• they/them

@daboross-pottery

For future reference: https://daboross.net/. Blog & RSS feed not yet built as of 2024-09-13.


post-cohost newsletter
buttondown.com/daboross/

MOOMANiBE
@MOOMANiBE
  • Dev insisting it would have been infeasible to remake their game by actually paying people to do work so they used AI instead
  • Polygon completely uncritically talking about how great it looks and sounds

abso fucking lutely not


egotists-club
@egotists-club

My guess is that they did something like my 2023 paper with the UBC crew:

https://www.cs.ubc.ca/labs/imager/tr/2022/SubpixelDeblurring/

We looked at upsampling old pixel art in the context of de-blurring it. Our approach combines an initial upsampling based on pix2pix (see https://phillipi.github.io/pix2pix/ if you want the gritty details) with a classical, non-AI algorithm that then cleans up the results. The claim is that "a few hundred hand-drawn sprites", along with remastered versions created by rescanning the original assets, were used. We got good results with 145 pieces of clip-art in the training set; 300 is more than enough. No extra random data sets from the Internet required.

"AI", to my mind, encompasses a lot of things. In the case of Broken Sword, and my paper, it's really not "AI" in the sense of an LLM trained on scraping the internet; instead, it should be read as "a very complex non-linear upsampling and convolution kernel filter" that is specifically optimized based on the target domain you're trying to work from. In that sense, it's much more about numerical optimization. It's not "shove everything into a tuned stable diffusion model and then fine tune it on your 300 inputs/outputs", mainly because despite what the stable diffusion model people want you to believe, this doesn't actually work.
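To make the "complex non-linear upsampling and convolution kernel filter" framing concrete, here is a toy sketch of my own (not code from the paper, and much simpler than pix2pix): a fixed 2x upsampler expressed as four small kernels, one per output subpixel position. A trained model plays the same structural role, except its weights are learned from example pairs and applied non-linearly; the bilinear weights below are illustrative placeholders, not anything a real model would learn.

```python
# Toy 2x upsampler: each output subpixel is a weighted sum over the
# input pixel's 3x3 neighbourhood. A learned model is the same shape of
# operation with learned, non-linear weights.

def upsample_2x(img, kernels):
    """Upsample a grayscale image (list of rows of floats) by 2x.

    kernels maps each subpixel offset (dy, dx) in {0,1}^2 to a 3x3
    weight matrix applied over the input pixel's neighbourhood
    (edges are clamped).
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            for (dy, dx), k in kernels.items():
                acc = 0.0
                for ky in range(3):
                    for kx in range(3):
                        sy = min(max(y + ky - 1, 0), h - 1)  # clamp rows
                        sx = min(max(x + kx - 1, 0), w - 1)  # clamp cols
                        acc += k[ky][kx] * img[sy][sx]
                out[2 * y + dy][2 * x + dx] = acc
    return out

def bilinear_kernels():
    """Placeholder weights: a 2x bilinear filter (9/16, 3/16, 3/16, 1/16)."""
    ks = {}
    for dy in (0, 1):
        for dx in (0, 1):
            k = [[0.0] * 3 for _ in range(3)]
            ny, nx = 2 * dy - 1, 2 * dx - 1  # direction of nearest neighbour
            k[1][1] = 9 / 16
            k[1 + ny][1] = 3 / 16
            k[1][1 + nx] = 3 / 16
            k[1 + ny][1 + nx] = 1 / 16
            ks[(dy, dx)] = k
    return ks
```

Swap in a different set of kernels and you get a different "style" of upsampling; the numerical-optimization view is just that training finds the (non-linear) version of these weights that best maps your low-res sprites to your rescanned originals.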

So: what are the ethics of this?

No random data is being scraped from the Internet (presumably; I can only speak for myself here.) I am not a fan of papers or academic work that do this; I have loudly complained about a number of "data set" papers that just scrape vast quantities of models and publish the lot in a package with the idea that this is now a 'cleansed' training set. However, let us assume this is not the case here, because technically speaking I don't think it is. I may be wrong, in which case all bets are off: data scraping is contemptible. But I don't think I'm wrong.

Should artists have been hired, and were they put out of business by a machine? Maybe. I think we lost this fight some time in the Victorian era, and energy spent relitigating it is better spent overthrowing capitalism. Similarly, presumably the Broken Sword work was done either under an employment contract or as work-for-hire, at which point the original animator has no legal rights to the work, in the sense of copyright.

The question, therefore, is whether or not the animator has moral rights, and there are two standards for this in Canadian law. First, the integrity rights portion of moral rights means that the author has the ability to preserve the intended meaning of the work and to protect it from destruction or defamation. Second, and importantly, the alteration of a work in good faith to preserve its intended meaning or nature is not considered an infringement of moral rights under Canadian copyright law. There is a good summary of this over at the Heer Law blog: https://www.heerlaw.com/moral-rights-copyright-law - and I quote:

Canadian courts first examined the issue of moral rights infringement in Snow v. Eaton Centre Ltd., (1982) 70 C.P.R. (2d) 105 (Ont. H.C.J.). In this case, the Toronto Eaton Centre was found liable for infringing the plaintiff’s moral rights for putting festive ribbons on the plaintiff’s sculpture depicting sixty geese. The plaintiff argued that this modification was prejudicial to his reputation and compared it to “putting a wristwatch on Michelangelo’s David or earrings on the Venus de Milo.” The Ontario High Court of Justice—weighing the plaintiff’s opinion together with the opinions of other artists who were knowledgeable in the field—found that the plaintiff’s concern for his reputation was reasonable. The Court granted an injunction to compel the removal of the ribbons from the necks of the geese.

In 2003, the Ontario Superior Court of Justice considered the topic of moral rights again in a case between a photographer and a golf club, Ritchie v. Sawmill Creek Golf & Country Club Ltd. The photographer alleged his moral rights in his photographs displayed on the golf club’s website were infringed by the photos being enlarged beyond the size in which he provided them, and by his name having been removed from them. The court found the first argument unconvincing as the photos were not so enlarged as to be of markedly reduced quality and damaging to the plaintiff’s honor and reputation. As for the photographer’s moral rights of association “where reasonable in the circumstances” with his work, the court considered that, following a complaint to the RCMP that the golf club had infringed his copyright, it was no longer reasonable for the plaintiff to believe the club would continue to associate him with the photographs on their website.

So the question is: is this geese, or is this photographic enlargement?

I think ultimately this is going to be a question for the courts, which will be really interesting once we get there. Consider, if you will: we threw a couple of upsampling modes into Dungeons of Dredmor, which let you upsample the original sprites (drawn at far too small a resolution, because when we started game development I had a 1024x768 CRT monitor). One of those modes runs them through HQX, an upsampling filter developed by Maxim Stepin; the other mode is nearest-neighbour. Does HQX destroy the artistic integrity of the work? And assuming your ML model is trained entirely on works you own or created, it is essentially applying a set of fixed transforms or rules to the input pixel art - is that any different from HQX, and if so, how?
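The "fixed set of rules" character of this family of filters is easy to see in Scale2x (also known as EPX), a much simpler cousin of HQX; HQX does the same kind of thing with a far larger lookup table of neighbourhood patterns. This is my own pure-Python sketch of the standard Scale2x rules, not Dredmor's actual code:

```python
def scale2x(img):
    """Scale2x / EPX: a fixed-rule 2x pixel-art upscaler.

    Input is a list of rows of comparable pixel values. Each pixel P
    expands to a 2x2 block; a corner copies an adjacent neighbour only
    when the classic Scale2x conditions hold, otherwise it stays P.
    """
    h, w = len(img), len(img[0])
    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            p = img[y][x]
            a = img[y - 1][x] if y > 0 else p      # above
            b = img[y][x + 1] if x < w - 1 else p  # right
            c = img[y][x - 1] if x > 0 else p      # left
            d = img[y + 1][x] if y < h - 1 else p  # below
            e0 = a if (c == a and c != d and a != b) else p  # top-left
            e1 = b if (a == b and a != c and b != d) else p  # top-right
            e2 = c if (d == c and d != b and c != a) else p  # bottom-left
            e3 = d if (b == d and b != a and d != c) else p  # bottom-right
            out[2 * y][2 * x], out[2 * y][2 * x + 1] = e0, e1
            out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = e2, e3
    return out
```

Note that every output pixel is copied from the input palette by a deterministic rule over a tiny neighbourhood - which is exactly the sense in which a trained upsampler, once frozen, is also "just" a (much bigger, learned) set of fixed transforms.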

(I don't claim to have the right answers to this, by the way, and I do agree with Aura's original point that Polygon should not report this sort of thing uncritically. Poor media reporting around AI means that people can get away with "bad practices". But that doesn't mean there aren't "good practices", and achieving some sort of sensible ML praxis means we should ideate what the "good practices" are.)



in reply to @MOOMANiBE's post:

I should clarify my stance here:

I feel like if you're calling just upscaling or keyframe interpolation AI, you've got some form of agenda, even if it's just the media piece doing some fucking laundering of bad actors.

i also feel a type of way that charles cecil was just casually like 'yeah so we had a lot of the original assets and just trained the models on those'. are any artists that worked on those still alive? did they consent to it being used like that? (probably not.)

i'm so unimpressed.

I always wonder about these kinds of claims. Are a few hundred hand-drawn sprites and the original assets enough to train a generative model? I'd have guessed not, and that really the model is trained on those things on top of every bit of stolen artwork they could find on the internet; but I don't understand it well enough to know whether they're actually lying or if I just don't understand this stuff.

I'm not shocked, cus this is the guy who keeps writing pro-Microsoft articles about the Microsoft ABK deal, all about how they'll not do anything anti-competitive if they merge. His name on an article is an instant pass for me, but I still hate the pro-AI stance.

in reply to @egotists-club's post:

For what it's worth, my views on AI ethics fall on that third rail as well. I can understand why many people feel so strongly about it, though, and I don't disagree with a lot of the criticisms; I just think a lot of the issues are more with capitalism than with AI.

I think "AI" as a term has been diluted to near meaninglessness (largely by the grifters shilling LLMs and image generators) and it's obnoxious bc it's also used for perfectly valid tools like intelligent upscaling and AstroRes

aside from the legality and morality side of things which are certainly grey, i personally think there’s an artistic argument to be made.

i think this algorithmic approach to upscaling definitely left a lot of stuff that a pixel artist wouldn’t leave in a final piece. at the end of the day i don’t think the average person seeing the finished thing is going to notice or care, so it makes sense practically speaking if you’re looking at it as a product or something. but approaching it from an art standpoint, as a pixel artist who loves the medium and its intricacies, there’s a ton of stuff i think the average pixel artist would maybe have made slightly better that a computer isn’t able to do on its own. the nuances of the medium, and representing complex shapes in tiny little colored boxes, kinda fall apart when a computer is just trying its best to do math and doesn’t have the intuition an artist has that would override the most logical or probable pixel placement.

and i think that can be pretty directly extrapolated out to other mediums, like the digital drawings in the broken sword instance, when using “ai” as well. it makes sense to me if you’re strained for resources and can’t afford to hire an artist to rework it, but it’s not going to have the same degree of actual artistic nuance that someone would be able to put in doing it manually.

Oh, no argument here. (I love the medium and its intricacies too!) If I were a lawyer, I would make the argument that whether the work passes the "goose" test or the "photograph enlargement" test turns on how well it actually manages to perform the upscaling in a manner that captures those intricacies; if the algorithm fails, that's a pretty good vote for "goose".

I didn't write this bit as I didn't think of it until afterwards (and the post was too long anyways and I already feel too much like the main character on Cohost, a softer and kinder experience than being the main character on Twitter) but the parallel that comes to mind is the William Morris-led "arts and crafts" movement in Britain, post-Industrial Revolution.

Arts and Crafts started as a design response to what was perceived as vast quantities of industrialized goods displacing traditional crafts-based manufacturing processes (especially for textiles), much of it very gaudy and over-ornamented, which in turn made decoration both meaningless and tasteless. Morris and his crew advocated for the creation of beautiful, well-made objects that could be used in everyday life, objects which showcased a connection between the object and its maker, and which allowed makers to remain connected with both their product and the people using it; in doing so, he advocated a return to smaller, medieval-style workshops for smaller-scale production.

One of his claims was that it was necessary to have a people-centered view of goods production, which sure seems relevant to the problem of AI in this day and age. Notable polymath, writer, and art critic John Ruskin wrote at the time that "fine art is that in which the hand, the head, and the heart of man go together", which was later espoused as a philosophy by Morris's group. Really, there's nothing new in the world, is there?
