I just saw a post saying that the capitalist class is the enemy, not AI tools. And while my initial response is, "Yeah, sure, I guess," it made me think of the recent lawsuit against the Internet Archive over its "National Emergency Library," in which the Archive used COVID-19 as justification for lending e-book copies to online borrowers in excess of the number of physical copies it owned, thus violating the copyright of writers and publishers.

Why do I so dislike everyday folks' use of plagiaristic AI models like ChatGPT and DALL-E, while I have no problem with what the Internet Archive is doing? It's partly the difference between plagiarism and piracy: plagiarism produces work that benefits from the work of others without giving them credit, while piracy gives you access to a work without compensating its creators. Those are two different moral calculations, and in a vacuum I feel like piracy is a pretty minor sin... and only a sin at all when a living person is genuinely being deprived of compensation they'd have gotten if you'd bought a copy of the work instead. Steal from Disney all you like. Plagiarism, on the other hand, always feels like a bad thing to me, and one that's hard to justify.

There's a more fundamental distinction here, though: most of the most spectacular and popular uses of AI models run through a few narrow access points that feed money to the rich. Deep learning actually isn't that hard to set up; someone with enough software knowledge (like a CS college student or an experienced sysadmin) can get a model running on a desktop computer with freely available software. However, deep learning on the level of Stable Diffusion or DeepL Translate requires a whole lot of processing power to train and operate. This means that a lot of the AI work you see people sharing is the result of someone directly or indirectly paying companies like Microsoft, Google, and Amazon for access to their captive means of production. AI companies, for the most part, have harvested human labor without credit or compensation, and now they charge people for access to the resulting models and the processing power to put them to use.
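To make the "desktop computer" claim concrete: here's a minimal sketch of running an image model locally with the freely available Hugging Face diffusers library. The model ID and hardware assumptions are mine, chosen for illustration, not an endorsement; the point is that inference is cheap, and it's the training that required the harvested data and the rented datacenters.

```python
# A minimal local setup, assuming a consumer GPU with ~8 GB of VRAM
# and `pip install torch diffusers transformers` already done.
# Weights download automatically on first run.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # openly published weights
    torch_dtype=torch.float16,         # half precision to fit in VRAM
).to("cuda")

# Generating an image takes seconds on a desktop; training the model
# took racks of GPUs that only a handful of companies can provide.
image = pipe("a lighthouse at dusk, oil painting").images[0]
image.save("lighthouse.png")
```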

This is the way in which AI tools are a mask (of sorts) for the wealthy. There's a version of AI deep learning that's moral: one trained on open or donated material, and used not to replace labor but to enhance it. I have no real issue with AI denoising or upscaling of old anime, for example, as long as we recognize that it's a reinterpretation and not some sort of digital resurrection (see Siskel and Ebert's "Hollywood's New Vandalism" for these sorts of concerns about the colorization of old films). But the current AI trend is, in a way, much like the thankfully dying crypto trend: it's a way for the rich to transform capital (and a lot of energy) into more capital by telling us they're giving us access to something that used to be expensive, while actually selling us a bill of goods. Crypto was a scam, and AI art is nearly universally mediocre.

The Internet Archive, on the other hand, makes money for very few rich people. I'm sure that folks like the rich weirdo who started it are doing just fine, but there doesn't seem to be much of a way to extract large amounts of money from it, especially since they seem to run their own servers. Do I wish authors got money when their books were accessed? Yes. Do I think patrons should be charged to use libraries or the Internet Archive? No.



It's fucking wild that the way my doctor (well, clinician; she's a physician assistant [affectionate]) measures the efficacy of my brain meds is by making me take a totally subjective survey. But I guess it actually makes sense.

The medicalization of neurodivergence and late-capitalist malaise is, to be diplomatic, fraught. But currently, my access to certain chemicals that make my life more joyful is gatekept by the medical system, for understandable (if questionable) reasons. I trust my doctor's expertise, and she absolutely should have some sort of guidelines for when she recommends a medication to me.

Surveys that ask a respondent to evaluate themself are inherently problematic. I want my brain pills. I'm answering the questions, consciously or subconsciously, informed by that understanding plus a passing familiarity with survey design. The PHQ-ADS, the standard quick screen for depression and anxiety (it's just the PHQ-9 and GAD-7 questionnaires stapled together), is vague and arbitrary and vulnerable to all sorts of flaws.
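For a sense of just how blunt this instrument is: as far as I can tell from the published scoring, the whole measurement is sixteen 0-to-3 answers added together. The function below is my own sketch of that arithmetic, not anything official.

```python
# A sketch of PHQ-ADS scoring as I understand it: sixteen Likert
# items (nine from the PHQ-9, seven from the GAD-7), each 0-3,
# and the "measurement" is simply their sum.
def phq_ads_score(responses: list[int]) -> str:
    assert len(responses) == 16 and all(0 <= r <= 3 for r in responses)
    total = sum(responses)  # possible range: 0-48
    # Published cut-points: 10 = mild, 20 = moderate, 30 = severe.
    if total >= 30:
        severity = "severe"
    elif total >= 20:
        severity = "moderate"
    elif total >= 10:
        severity = "mild"
    else:
        severity = "minimal"
    return f"{total}/48 ({severity})"

print(phq_ads_score([1] * 16))  # "16/48 (mild)"
```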

But... like... the test is also being administered by a human. And my doc, as much as I trust her, is subject to any number of biases, too. And I don't think that things like brain problems can be quantified like this! So, in a way, this fundamentally flawed survey serves as a buffer or check on other biases that might come between me and my care.

So by asking me arbitrary, subjective questions that can't accurately capture my particular situation, the PHQ-ADS gives my doc a way to ask, with her own biases somewhat held in check, "Hey. How are you doing? Are the pills working?"

I'm doing okay. I think my pills are working. I just ordered my refill.