

shel
@shel

I feel like sometimes people get into this sense of like “team based philosophy” or “team based values” where there’s an agreed upon stance to take or thing to oppose and if you’re on the same team then any argument you make against the thing that is opposed is good and valid because it’s a shot fired against the enemy. So there is very little room to be internally critical of the arguments being made by your own team because then you’re not being a team player. It’s not really about making compelling and correct arguments against something so much as coming up with every possible point against it in order to overwhelm someone into conceding without having the energy to interrogate every single point made.

I see this a lot with AI discourse, where there are very real criticisms to make of the technology as it is being implemented and marketed by the bourgeoisie: the imperialist, exploitative labor practices powering the development of the technology, the literal fueling of the technology, which is very energy intensive, and the ends the technology is being used to achieve.

All of these are very condemnatory on their own let alone together.

But what I see a lot more of in the discourse is these very abstract and ephemeral arguments that are very focused on how the machine lacks a soul, or how the present state of the AI output isn’t very high quality, or how the training datasets are somehow a form of stealing or plagiarism (an argument that is particularly absurd with visual generative AI like DALL-E and Midjourney where it is technologically impossible to recreate a given piece of art in the training set and the thing being copied or stolen is so abstract and effervescent that it basically becomes an argument against being inspired by other artists or becomes an argument that only art produced in pure isolation of the world is ethical).

But anyone who tries to disentangle the very serious economic, environmental, and political critiques of AI from the very abstract philosophical objections to AI gets labeled as not a team player and “pro-AI” which gets a lot of horrible vitriol directed at them and a lot of lumping in with the NFT crowd.

Confirmation bias can be a dangerous thing. Agreement with any argument for your team can lead to some pretty dangerous implications. Laws protecting “artistic style” or “writing style” as private intellectual property would make the world objectively worse in a way that could only benefit the same corporations seeking to utilize AI technology to extract value. Conceding that art generated by AI is “fake art produced by a soulless machine” concedes to those corporations that “the AI made it” and not a human using a computer program as a tool, which allows them to circumvent union contracts, devalue human labor, bolster IP claims for corporations, and removes culpability for actions taken via algorithms.

There isn’t really a huge philosophical difference that can be drawn between Midjourney and Photoshop, especially since Adobe has been a major developer of algorithm-driven “smart” creative tools since long before the AI marketing craze. It’s always still a human being using a machine to achieve an end. That the algorithm is seemingly being “creative” seems to evoke a moral outrage in people due to the sanctity of “artistry” but it’s still fundamentally the same technological process described by Karl Marx in Capital. The development of a machine which makes human labor more efficient and thus allows for a greater extraction of surplus labor value from the proletariat over the same length of time and thus the same wages.

The proletarianization of artisan classes was a major topic of Capital. Making shoes was once something you commissioned an artisan to do. Now shoes are mass produced by factories. The problem here is capitalism, not that the mass produced shoes “lack soul” or were “made by a machine.” The machine is still run by human labor. The machine is still worthless without the proletariat. As more and more of the labor is mechanized and automated, the amount of labor that is done by the human just becomes even more efficient. Even more surplus labor value is produced over the same amount of time. The value still comes from the labor of a human being. The Bourgeoisie want to claim that the machine made the shoes and the shoes belong to the company and they owe the worker almost nothing. But the machine couldn’t run without the workers. Workers are still involved at every step.

Fossil fuel powered mass production machines have also been absolutely catastrophic for the environment and are incredibly energy intensive. So many unsold shoes end up in landfills. The colonization of Southeast Asia for rubber and latex was and has been violent and gruesome and devastating. These factories are often sweatshops.

The parallels are striking between AI and factories because they ultimately are the same economic phenomenon under capitalism and require the same political reforms. The means of production must be controlled by labor, and not capital. The machines must be turned toward serving the interests of human wellness and the living ecosystem of the planet—not the interests of Capital. Unbounded infinite growth remains unsustainable. Degrowth becomes more and more of a harsh necessity the more we see unsustainable and destructive growth.

But were sustainable ecosocialist global equilibrium achieved, there is nothing about the technology of generative algorithms which is inherently morally evil or philosophically objectionable, from what I can see. Only insofar as the running of these computer programs must not throw off the ecological equilibrium with humanity, and must not be utilized for dangerous ends such as exploiting labor and facilitating the generation and spread of misinformation.

But of course the world we live in at present is not an ecosocialist equilibrium but global neoliberal capitalism. So algorithmic intelligence has only thus far been 95% a negative force for society and the environment and like 5% a source of neutral amusement.

I just think it’s important to retain a focus on the specifics of what is dangerous about this technology rather than to allow ourselves to fall into discursive traps which are either easily dismissed and argued against by AI salesmen, making all the legitimate concerns look unserious, or which, in trying to argue against the use of AI, actually reinforce the exact philosophical justifications used by the bourgeoisie to extract more labor value from workers.

H/t txttletale on tumblr who has made some of these same points before in various blog posts.



in reply to @shel's post:

the problem is the part where art is treated as a pipeline, not that the pipeline doesn't understand the art inside of it. it's another form of factory, you're completely right. efficiency at the cost of individual consideration

people seem to take criticisms of AI regarding "soullessness" and extrapolate that to be the problem rather than the agreeable emotional thruline of the problem. fuck, i'm even guilty of this looking back. thinking "because then you don't have to pay anyone" was the flat, boring, obvious answer that most people who needed convincing knew and ignored. but i was wrong!

because it's arguing in the language that might make change: copyright

Yeah like, I think it's 100% a legitimate thing to say "The art produced by DALL-E feels soulless." That's not moral philosophy or economic analysis; it's art criticism that is genuinely engaging with the illustration on its own merits and saying "This is poorly made and feels devoid of emotion and intent."

Could DALL-E produce Rothko? Where the exact brush-strokes and choices of color shades used in painting something so formless and abstract as shadows on a canvas still manage to evoke incredibly strong emotions? If someone tried to use DALL-E to make a Rothko, an art critic would be 100% in the right to say "this feels like a pale imitation of Rothko which lacks the soul and precision that makes Rothko so compelling."

That said,

  1. The [actual] pro-AI salespeople will just say "Yeah well, it's a new technology and it'll only get better" which so far has proven to be true. All of my criticisms of GPT-3 from my standpoint as a librarian have been addressed in GPT-4. Someone with a ton of GPT-4 tokens ran a ton of tests for me where it correctly challenged loaded questions which presumed information that is not true. So while my criticisms of Bing spreading misinformation are still 100% valid right now, it won't necessarily be something that will have staying power as an objection as the technology improves. I am still very doubtful that DALL-E could ever produce a Rothko, but I could also be proven wrong.

  2. Corporations using generative AI aren't really trying to produce a Rothko. They're trying to produce an MCU tie-in comic book, a Harlequin romance novel cover, an episode of Family Guy, an SEO-bait listicle about the best skin care products, and other low-brow gruel already produced by underpaid, often nameless artists scrounging by on these corporate projects as a day job while they work on their passion projects. So telling a corporation "You'll never have a Rothko or an Eraserhead from this technology" doesn't really convince them that the technology isn't valuable to them. They're not competing in the world of fine art. Besides, it doesn't even need to produce finished materials, it just needs to speed up the production process and then they can pay the same workers the same amount to edit and touch up the outputs, and that still produces a greater surplus labor value they can extract. Even without paying writers and illustrators less to touch up bad art, it still results in greater profits for corporations. (Though they will try to pay workers less, because of course they will)

Which is, again, why I think the world of art criticism, while valid within its own world here, isn't really a useful political tool in countering the present and imminent economic exploitation enabled by this technology.

this feels like a major leap in logic, to compare art to shoe production. folks don't make things when typing words into an art or word generator, except like "girl boobs gun boat" is technically a starting point? and the generator could only exist because of a mass abundance of similar things made by real folks, with their (nonconsensual) contributions to its functionality obscured.
like, the way it's like plagiarism is similar to what folks might find wrong with plagiarists: passing off the work of others as their own. this is a machine that does extraordinary plagiarism for a creature. like if sentences from a hundred thousand books were cut apart and randomly combined, after they said "novel gun war science boat"

also wait aint txttletale on tumblr a tankie. we'd be hesitant about going along with logic produced by folks who can't even be anti-imperialist correctly

  1. Historically, people like my great grandfather who made shoes for a living were considered "artisans" and shoes were custom-made to your feet and considered a work of art. When machines began to make shoes, one might say "But you're not actually molding leather to a custom-carved wooden mold of the customer's feet and stitching together the fabric! You're just putting a piece of leather on the machine and having the machine do all the stitching for you according to a premade template!" This argument doesn't really work as a legitimate compelling argument against machine-generated art unless we take as an axiom some special place that 21st century Art holds that makes it unique among all other commodities. Art is already very nebulous to define. What makes a pop song art but not the Liberty Mutual jingle? What makes a 3-minute claymation short film on youtube Art but not a commercial that plays during the Superbowl? Ultimately the AI salesman can just say to this "Well, our software will allow companies to produce their commercial jingles and television ads, but you can still make your Real Art! Nobody is stopping you from doing that any more than commercials stop you from making short films!" This isn't a good argument in favor of AI either, in my opinion, but it would probably convince the customer, which is the only thing the AI salesman cares about. What is our battle in that interaction? Who are we trying to convince here? It just feels like a dead-end to me.

  2. Redaction poetry and collages are already considered Real Art by modern standards. Sampling in music is also considered legitimate art. Those use much more of the source materials than Midjourney or DALL-E does from its learning set. If someone took sentences from one hundred thousand books, cut them out, and pasted them together into a new work of literature, that would be seen by art critics as an impressive feat of literary art. The Glosa is a commonly accepted poetic form explicitly based around taking lines from existing poetry, often very recognizable lines, and creating new poems.

Ultimately this argument fails to be compelling to anyone who has engaged seriously with the world of art. Plagiarism is passing off someone else's labor as your own, but DALL-E is not doing that. Not a single recognizable element from any of the learning set is present in the generated output. If the source material is so heavily transformed that it is beyond recognition, then that is not plagiarism, that is an original work that has taken inspiration from others. Nobody produces art or literature in a vacuum. It is just as impossible for me to have written my poetry books without having read every poem I have ever read. Every novel exists having come after the author read and learned from reading the writing of others. All works of illustration exist after the artist has seen other works of illustration, and most often, has used reference images to learn how to draw. There are entire books of nothing but reference images for artists to use to illustrate.

Again, what does this argument actually achieve? If someone in power fully agrees that taking stylistic inspiration from a large set of existing works is plagiarism, then the resulting intellectual property laws would make it illegal to make an original superhero comic, because you could not have had that idea without reading existing cape comics. There was already a famous case of big comic book publishers suing each other for "copying Superman," and the legal precedent is that they lost that fight. So you cannot use this argument in court effectively, because it has already failed.

  1. I wrote an essay heavily citing Das Kapital by Karl Marx. Do you think I care that someone has been labeled something as nebulous a word as "tankie?" Even if I disagree with her on how to interpret specific historical events, that doesn't change that she made some compelling points about how Marxists should be analyzing AI as a technology, and so by your own logic, it is plagiarism if I don't cite her. I'm giving her credit that she is owed as a thinker, yes? Does a trans woman's right to be credited for her ideas change if you don't like her?

dall-e isn't a real creature? it tastes weird for that to be the big refutation about it being a similar creative process if the blurriest vision is used. but it seems sufficient here.

but like, about the shoe thing. it'd be like if the raw materials in the shoe factories were... other shoes, crafted by those artisans, who still exist and are vital for the factories to not become totally meaningless noise rather than just uncanny noise. doesn't that feel off? like. it's comparing the process of shoes being made, to the process of shoes being designed, and creating a weird equivalence there.

figure most folks who read about marx would care about tankies! there's no issue in citing her, just, if there are a lot of words they have more opportunities to find a flaw in critical thought? tankies must be good at exploiting their own flaws to speak fondly of things like china while holding up one of marx's books. this is being wary about whether sources are being tricky. and the specific historical events that tankies often have to misinterpret are like, atrocities and genocides and stuff. like it's not impossible for good points to come from anywhere but like... (also am trans too hi)

  1. "Tastes weird" is an instinct, but not a conclusion. That is your cue to dig deeper and look into why it tastes weird, what sets that off, and how to best articulate it with words (or perhaps through art?). But "tastes weird" is the beginning, not the end. "Living creature" as a choice of words is interesting! When an elephant "paints a canvas" do you consider that true art, even if the elephant does not understand the concept of art? What about being alive infuses the art with meaning? These are interesting philosophical questions and it's worth digging deeper into them. I just don't think they're effective as critiques that can bolster workers against deeper exploitation of their labor from the capitalist class.

  2. Do you think that generative AI requires a constant stream of new learning sets in order to keep working? DALL-E 3 doesn't require being fed new drawings or it runs out of material. If all humans in the world died tomorrow, but there was a script running on a computer somewhere generating random words to spit into DALL-E 3, then it would continue to produce infinitely variable outputs until the material infrastructure required to power the computer breaks down because all the humans are dead. It only needs new training sets to improve the technology, but sometimes those training sets include the algorithm's own creations (there's a whole thing about this that's a big tangent where there's like a whole other algorithm that is trained to guess what algorithm-produced content is good enough to train the original algorithm? It's wild.) and sure, you can make an argument that people whose work is included in the learning set should be compensated, but I'm not sure if that's a winning battle because 1. eventually they won't need new training sets if they consider it good enough to train itself 2. it slows down development but doesn't halt it 3. there are very compelling legal arguments against the necessity to need to compensate someone for "learning from their work" with established legal precedent and even if you overturned that precedent, it's very difficult to draw a legal line that allows humans to learn and take inspiration from each other but doesn't allow machines to do so, given that ultimately it is a human feeding things into the machine and you can't arrest a machine (yet).
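The "filter model decides which synthetic outputs are good enough to train on" loop described above can be sketched very roughly. This is a toy illustration only: the scoring heuristic here is made up and stands in for what would, in a real pipeline, be a separately trained classifier or reward model.

```python
def quality_score(sample: str) -> float:
    """Stand-in for a learned filter model.

    Real systems would use a trained classifier here; this toy version
    just scores a string by its character diversity.
    """
    return len(set(sample)) / max(len(sample), 1)


def filter_synthetic(samples: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only generated samples the filter judges good enough to feed
    back into training (the self-training / synthetic-data-curation idea)."""
    return [s for s in samples if quality_score(s) >= threshold]


# Toy "generated outputs": the degenerate one gets filtered out.
generated = ["aaaaaa", "abcdef", "aabbcc"]
kept = filter_synthetic(generated)
```

The point of the sketch is just the shape of the loop: one model generates, a second model gates what re-enters the training set.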

Also, shoe factories do require human beings to design new shoes in order to keep producing new shoes. I think the main thing that's different here between shoes and digital art is that if all the shoes are identical you still get something you can sell, but with art you need everything you produce to be unique in some manner or you've not produced anything new. That's actually a really interesting distinction that art has as a commodity compared to other products produced and sold under capitalism. "Multiple digital copies of this same illustration" isn't really valuable as a commodity. So you raise a good point there that art does hold something different from shoes. But I'm not yet entirely sure what the implications of that difference are. I'll have to think about it. Shoes used to always be unique, and AI-generated content at-present does tend to feel very samey and homogenous. Would the bourgeoisie settle for just shoving a million extremely similar pop songs and movies at us if it saves them enough money? would consumers still buy it all? That's an interesting area for inquiry and exploration.

  1. This is called an ad hominem attack. It's a logical fallacy. In this instance, I didn't cite her as a source for a historical fact, just as an influential voice in forming a subjective opinion. You're welcome to think lesser of me, but I don't think her opinions on, say, China really mean that much on this particular topic.

Ultimately, I think I am on the side of "pro-critical thinking" and "pro-allowing nuanced disagreements with each other" which is much more likely to prevent Stalinesque political scenarios than refusing to follow the blogs of "tankies." Especially given that it is, at present, pretty highly unlikely you and I will be actually in a position of going "Should we allow Txttletale the tankie tumblr blogger to be the new people's commissar for foreign policy?"

I'm also a known frequent defender of Cuba, Vietnam, and East Germany. You might consider me a tankie depending on how you define that word. I don't support the USSR sending tanks into Czechoslovakia in the 1950s if that's what you mean, and I'm highly critical of American (Neo)Trotskyist, Marxist-Leninist, and Maoist groups which tend to just be cults LARPing as revolutionaries who treat historical analysis as more of an identity marker than an actually productive exercise at learning from the past.

absolutely would consider an elephant painting art! if anything, it feels like folks are too willing to pretend they're wholly separate from other creatures! but then like, could the creations of ants be said to have any creative intent behind them? or, like, aren't their death spirals weirdly related in flavor to this whole content generation thing, signals being emitted but without any real intent behind them? and isn't that in itself neat, that creatures with vastly more complicated minds than ants are, as a broad system, kinda falling to the same vulnerability? you could easily view llm's and stuff as beautiful facets of reality, like that, and if somepony would accept a picture of a sunset as art, well...
...and yeah everything eventually runs into how categories aren't real and there can't be easy lines. but, hehe

ghjkfsa did not, think about how vague the term "tankie" could get. have literally never heard it pointed at some of those places. like, sure, all states are bad because such-and-such incentives rewarding so-and-so behaviors and junk, but it seems weird to... go after, like, cuba? there are a lot of things to learn from cuba? why would someone say "tankie" about that? sorry, just, meant folks who like, say they're anti-imperialist (usually MLs?) and are even usually very very extremely good at listing every atrocity the american empire has committed, but then deny the atrocities of other empires (like specifically the tanks thing) in order to... well.. have a team to support x3! everything's connected~

anyway thanks for words <3

See you're hitting the issue. Arguing about the semantic category of what constitutes art is, in my opinion, an endless discussion that while often interesting distracts from the more urgent issue of how to protect workers from the impacts of the technology. I think there needs to be more organizing akin to the WGA strike's demands around this and less arguing about if a computer can make art or not.

Also yeah I hate the word "tankie" because people will just slap it on your back and then that's the end of the discussion when it really doesn't have a coherent meaning. A lot of people who get labeled "tankies" are people who do not 'support' the bad things done by the USSR but who also refuse to accept cold war narratives about the profound oppression and misery of 20th century socialism. People in East Germany preferred living under socialism to life after reunification, that is an important piece of history to learn from, it doesn't mean we have to approve of the KGB. The USSR had an amazing mass-literacy campaign and incredible gender equality, and that's worth learning from, it doesn't mean that the whole moral panic around Kulaks was good or that the mismanagement of agricultural collectivization didn't result in the Ukrainian famine. Separating historical truths from cold war propaganda is incredibly useful for learning from the past.

I'm also just skeptical of how meaningful it is to "support" or "not support" a historic action taken by a defunct country? Like what would it actually mean to "support" the USSR invasion of Czechoslovakia. It already happened. You're not exactly sending resources to support the war effort. It's at worst holding a stupid opinion that reveals having not put a lot of thought into things.

Whoa there's a coherent origin point for the term "tankie"? I've heard it used almost exclusively to denigrate people who argue that 20th century communism had significant positive externalities despite fundamental flaws and should therefore be a valid field of study for those interested in developing future systems. It's always felt like a term exclusively used to shut down learning, like something in there would inevitably infect you and you'd start doing purges and shit. To the point that the people I saw using that term didn't even want to explain what it originated from because even that would be too far.

I do recognize that those very interested in studying certain fields often contain those who are very...enthusiastic about those fields. Which sounds fine at a glance. Until you see what manifests from it. Like how Egyptologists occasionally turn into "Egypt was superior to Mesopotamia in this tiny metric and that's why Europeans are better than Middle Easterners" which has so many logical leaps I can't even... so it doesn't surprise me that people studying much more recent regimes have even more incentive to get problematic about them. Doesn't mean they aren't right sometimes though. Just be careful about marching orders IMO.

Sorry about the notif Shel I love this post, and I kinda can't help but be amused at the microcosm of another group being identified as "them" in the comments

"Tankie" originates with the 1968 (not 50s, my bad) invasion of Czechoslovakia by the other Warsaw Pact countries after the Prague Spring, when Alexander Dubček was elected and started implementing reforms which decentralized the government and economy, and also made motions towards separating Czechoslovakia into two or three separate countries. The Warsaw Pact basically invaded and forced them to stick to hard-line policies aligned with how the USSR operated at the time.

In Western countries like the US, left-wing groups were divided on how to interpret the event. Some people supported the Prague Spring and opposed the invasion, seeing it as overruling the democracy of the local proletariat who wanted the reforms. Others saw it as necessary for preventing the dismantling of socialism through liberalization and movements towards capitalism. People who supported the invasion were labeled "tankies" because they supported "sending the tanks into Czechoslovakia."

At the time, "support" here was very literal, as communists and socialists within capitalist countries often actively provided aid and material support to socialist countries, and organized around overthrowing their local governments to unite with the International. American Communists would create multilingual pro-USSR propaganda, sabotage war efforts, assist the USSR through espionage of US government secrets, spycraft, etc. Being a USSR sympathizer wasn't just having an opinion about them. So when people split over the issue of the tanks in Czechoslovakia, it was very literally a question of whether your organization would continue to collaborate with the USSR or instead act independently and unaffiliated with the International.

Fascinating stuff actually. I kinda can't believe that this is what caused a significant split that's still talked about today and not like, the Makhnovists and the Kronstadt Rebellion, though I guess Comintern mostly formed from those who wanted such movements to fail in the first place

oh but! didn't! actually say! the initial point about there being high priority issues that don't have any bickering potential, and therefore should be a focus, is a good one :3 bickering is fun tho, as all creatures agree, with it there in your post, too, hehe

THANK YOU. I can't stand attempting to moralize the existence of tools when their use is what matters. IMO the question is always less "is this a good tool?" (which, admittedly, in the case of most AI tools the answer is "not very," given their environmental cost) and more "if this tool proliferates, who stands most to benefit?". Hence I think your factory analogy is spot-on, because that's what it'll be used for.

Same idea as those "this foxy grandma loves greyhounds and staying up past 4 AM grinding loot drops on valorant" t-shirts -- shirts are obviously morally neutral but how on earth can such production be sustainable? It's demonstrably not -- those shirts are sustained by similar content-scraping routines, just lacking any sort of transformative process beyond transfer to a shirt, and I can guarantee you that no part of any of the manufacturing labor is fairly compensated.

Like nobody* says that "AI is the unsustainable shitty T-shirt factory for companies that would otherwise hire or contract a graphic design department" but that's obviously a better argument than just "it's ugly" because if the technology proliferates and is refined then many aesthetic-technical criticisms might be addressed but none of the power relations will be. Using aesthetics as proxy for politics is not going to be a winning argument for leftists because it requires buying into a thought process that justifies reactionary opinions.

  • EDIT: I mean this is probably an unfair generalization but it's a significantly less popular argument to make, probably because the reaction to something "ugly" is much more instinctive than a systemic process analysis...hence why it's such a popular line of recruitment for reactionaries

Quickly leaving a technical note that this clause in paragraph 4 is not true:

(an argument that is particularly absurd with visual generative AI like DALL-E and Midjourney where it is technologically impossible to recreate a given piece of art in the training set [...]

Diffusion models (which include Stable Diffusion, DALL-E, and Midjourney) often have issues with memorizing and regurgitating images from their data set. Carlini et al. 2023's Extracting Training Data from Diffusion Models (more readable summary here, or see Figure 3 in the paper) shows how to create prompts that produce near-copies of a variety of dataset images on Stable Diffusion. (They focus on Stable Diffusion in this paper because its weights are available to the public. DALL-E and Midjourney's weights are not available, but similar "memorization and regurgitation" has been observed with these models before — see e.g. https://nitter.net/kortizart/status/1588915427018559490).

Ted Chiang's also talked about this for generative text models rather than diffusion models in his "ChatGPT is a Blurry JPEG of the Web" article in the New Yorker.

This might not have an effect on the larger point you're trying to make — I don't think this clause was load-bearing for the argument — but it means that concerns about copyright and the legality of these models' datasets are valid in several cases.
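To make the "near-copy" notion concrete, here is a toy sketch of the kind of near-duplicate check involved. This is my own illustration, not Carlini et al.'s actual procedure (they define near-copies more carefully, over image patches), and the threshold here is arbitrary:

```python
import numpy as np


def l2_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square per-pixel distance between two images in [0, 1]."""
    return float(np.sqrt(np.mean((a - b) ** 2)))


def is_near_copy(generated: np.ndarray, training_set: list[np.ndarray],
                 threshold: float = 0.1) -> bool:
    """Flag a generation as a memorized near-duplicate if it falls within
    `threshold` of any training image. Threshold is illustrative only."""
    return any(l2_distance(generated, t) < threshold for t in training_set)


# Toy data: two "training images" and a generation that's a near-copy
# of one of them.
train = [np.zeros((4, 4)), np.ones((4, 4))]
gen = np.ones((4, 4)) * 0.98
```

The paper's extraction attack then amounts to prompting many times and flagging generations that land inside such a distance threshold of a training image.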

Oh!! This is very interesting new evidence! The "it's not possible to regurgitate the source" thing came from an explanation written by AI_Curio a couple years ago so it seems like that might be outdated, since this new paper is from 2023.

I'll have to look into this further, because it does deal a pretty big hit to some of the points I make about the viability of using copyright as a bulwark against the exploitation of workers through AI. Tho my overall point of "we should be thinking critically about what arguments we use and whether they achieve what we want them to achieve" of course isn't really affected.

ok i also knew that gpt4 sometimes plagiarizes stuff verbatim, so i feel like this is common enough knowledge that you shouldn't have had 100% certainty that it was false?

yeah, multiple large "ai" models have been caught (or their makers have admitted to) explicitly downweighting direct reproductions of their training data, merely making it less likely that they output excerpts of their inputs verbatim. github copilot does this, as if that somehow makes its output (based on ?licensed? and copyrighted code) better or more legal

So I read that first article, and what I learned is that the "it's impossible to replicate the source image" thing I read in the past was about GANs, not diffusion models. Because I'm not someone who is super interested in the development of AI technology outside of its political and economic implications, I had not realized that the newer image-generation algorithms using diffusion are built on a fundamentally different technology from the GANs I had read about in the past, which were the ones that couldn't really replicate training images.

That the algorithm is seemingly being “creative” seems to evoke a moral outrage in people due to the sanctity of “artistry,” but it's still fundamentally the same technological process described by Karl Marx in Capital: the development of a machine that makes human labor more efficient and thus allows a greater extraction of surplus labor value from the proletariat over the same length of time, and thus for the same wages.

i think the thing people feel reluctant to let go of in the separation of aspects here, including myself, is that this reads as a concession that the technology actually functions to increase artistic output, when there are any number of ways in which it really fundamentally doesn't. it's marketed like it does, and it's almost only ever talked about like it very obviously does these things, but in many people's view it's missing something fundamental that's required to actually achieve those things today. that idea is presented as unquestionable truth so flagrantly, and tramples over all arguments and reason so conspicuously, that a lot of people instinctively attack it whenever they see it.

framing the argument as: it sucks and doesn't really work creatively for 99% of scenarios, and even if it did (which, again, it does not), it would still have all the inherent problems of a labor-amplifying technology... this may provoke less contention. it fails at multiple levels, and each level's failure is a totally separate, often intractable problem preventing it from ever being a good thing even if all the other problems were solved. whether anyone concedes that it actually saves labor, as i'm personally reluctant to do, is irrelevant to the strength of the second argument. maybe i'm just a hater of its aesthetics; it doesn't actually matter!

also hard agree that trying to address this by changing copyright law, as if that will somehow stem the tide of damage, is nonsense. copyright is still one of the master's tools; it's merely one of the few remaining scraps of law that still nominally purport to protect the right of the individual to the fruits of their own labor. use it when you can, but believing that it's really a solution here seems nauseatingly lib-brained to me

I generally avoid talking about discourse stuff, but since you're making a post about how discourse about strategies of discourse is good to do, it feels like just this once is fine here... I think I agree with you, but this is kinda poorly worded. It's easy to read this as stating that people shouldn't have ethical, moral, and effectiveness concerns with AI, and it's phrased as if it assumes every single statement someone makes has political impact (blatantly false). I definitely agree that arguments with bad conclusions can exist even if you agree with their intent and direction, but I also don't think the correct way of dealing with them is to discard the concept of bringing that topic up. People can make nuanced arguments on topics that avoid the downsides of the results of that argument, and I think pointing people in the direction of the added nuance that fixes their statements is more productive and less provocative. Like, "the tools are unethical" is something people are saying, which might be wrong, but as far as I'm aware they're saying it as a shortening of "the tools are created unethically, and for all practical concerns have to be created unethically to function," which is like. A different thing than what you're arguing against. Whether or not a hammer is ethical doesn't matter if the hammer is made unethically.

"It took both time and experience before the workers learnt to distinguish between machinery and its employment by capital, and therefore to transfer their attacks from the material instruments of production to the form of society which utilizes those instruments."
(capital vol. 1, ch. 15, section 5)

guess we gotta learn that one again!

Largely agree and I share so many of the same frustrations as you around critical thinking about the arguments and actions we make.

The only thing I would toss into the pile about AI specifically is something I really haven't seen a lot of: what is the social cost of automating away critical thinking? (Or even just automating away the modeling of critical thinking.) I'm specifically thinking about LLMs here rather than the image generators. This sort of disregards class-based power dynamics at the small scale of the point of production in favor of examining the wider-scope downstream effects.

Gut instinct that I still have to examine has me thinking this particular class of automation is actually different, and a social harm specifically because of the goal of what it does, not just because of a current iteration or specific technical issues around its implementation. Automating away critical/creative thought seems inherently dangerous on a social level, to me (maybe that's an argument for making the tools inaccessible to the wider public rather than not making them at all?)

Right, they don't, but people use them in order to automate away the processes that involve that critical thinking, e.g. generating news articles, or students having it write essays, or IIRC a few research papers that slipped through.

People use LLMs to automate away generating what would be the result of critical thinking, by giving them the prompts that would normally lead a person to do research and critical thinking and then letting the machine spit out the result.

They sidestep critical thinking to focus on the result, and that is generally what their purpose is. Results without the process on the human's part, the same as any other automation. The issue is that the process in this case is just... Thinking, and putting those thoughts down.