bruno
@bruno

Yes, the output of generative networks like ChatGPT and Dall-E is plagiaristic. It is, ultimately, reusing someone's work without so much as attribution, and without any original contribution. Maybe you can't make that case for all possible outputs, but certainly for a lot of typical output.

Reminder: These devices are statistical models that produce outputs that are probabilistically likely (according to the model) as a match to a given prompt. They frankly barely rise to the level of 'machine' let alone 'intelligence', and it's only through gross abuse of processing power and a healthy dose of apophenia that their output seems to mean anything at all.

If you ask ChatGPT a factual question, on those occasions where it does regurgitate an actual real answer, it's getting that information from somewhere. It's using the work of someone who did the actual legwork of putting that information online, and not attributing it. Yes, an individual instance of that is negligible, but it's tremendously poisonous to the information environment that this thing is just slurping up the work of reporting or archiving information and spitting it out in a way that doesn't even point towards the source.

Say you ask Dall-E to generate an image of a specific subject, like say His Holiness Pope Francis. Dall-E only knows what Pope Francis looks like based on images of Pope Francis that were produced by people. Actual photographers had to go out into the actual world with their actual cameras and take shots of the actual pope so that Dall-E can smear their work into nonsense garbage. This is, again, plagiaristic.

And when people say "oh so you think it should be forbidden to take inspiration from other artists?", I want to scream. It's not 'taking inspiration'. Your anthropomorphization of a spreadsheet isn't an argument. It can't be 'inspired' because it's not a person. The user also cannot be 'inspired' because they haven't even seen the source images that are being munged, they're just being handed a ready-made object to use.

If you break into my house and steal a kilo of bananas and I come to you and find you neck deep in a banana smoothie bender, neither of these assertions is a valid argument:

  • "Bro point out where your bananas are to me. You can't find an individual banana in my smoothie."
  • "Bro I was just inspired by your bananas."

There is no reasonable use for this kind of technology. Their output is just an informational and aesthetic pollutant. They shouldn't exist. And I think that in a political environment where powerful people (and lots and lots of powerless idiots) want to push this garbage as hard as it will go, it's actually politically necessary to adopt a posture of total rejection and not go out of our way to rhetorically soften how bad this stuff is or try to have a nuanced conversation about how it could be used ethically in different circumstances. I bet there's probably also some ethical and safe ways to use asbestos. I don't think I much care. I want this bathwater out of my house and I don't care how much baby goes with it.

This is why, by the way, I won't even repost "ha ha look how bad the LLM is" type material. I don't even want people to perceive this stuff as accidentally funny. It's not funny to me. It's just gross.

Like, this is not a controversial stance to take: some tech is just a bad idea and shouldn't be deployed. Asbestos home insulation. Fracking. Browser fingerprinting. Nuclear weapons. The Rollie egg cooker. You just disagree with me on whether this specific tech is bad and I am begging you to listen to me that it is very, very bad.



in reply to @bruno's post:

even if several things were different about the situation (ie the tech worked better, fulfilled most of its promises, was more democratically controlled, wasn't a direct assault on labor power, didn't have a terrible environmental impact, etc etc) the fact remains that step one of these companies' business plan was "download a huge rip of shit from the internet, totally disregarding where we got it from, that we can generate piles of gold from that the creators of said shit will never ever see a penny of" and for that reason alone their businesses deserve to be fucking nuked from orbit.

i learned this lesson when trying to explain why crypto was bad a few years back: for any technology that does technically have a valid use case, acknowledge that, and then acknowledge how unfortunate it is that it has been made impossible by the rest of the horrible, dangerous use cases the technology enables.

"yeah, there's some interesting use cases for shared information and record keeping. now here's all the reasons why those narrow applications aren't worth the consequences"

for image generation AI i thought of a neat use case a while back; live-replacing button prompts in emulated games with ones that match the controller you're using. replacing xbox As with sony Xs, etc. that'd be cool, that's saving minute work that'd be inherently uncompensated for each individual game that'd otherwise need mods for that!

i'll gladly not have that tech because the eventual human cost, the damage to information and truth and art, is just too high.

Engaging with nuance is how we develop better arguments about the real harms that come from LLMs. For example, the argument about whether LLMs are "intelligent" or not requires a definition of intelligence and also some thought towards intelligence appearing to be an emergent property of probabilistic biological processes. Why can't the LLM's own probabilistic process also have some rudimentary emergent intelligence? If they are somewhat intelligent, does that change my stance on it? (It doesn't, for the record. The side effects are still not worth it.)

I hate LLMs and their downsides with a passion, but I live in a world where lots of folks I work with or am friends with don't naturally understand why they're bad, and hardlining my stance on it doesn't really help convince them; it just makes explicit the line they're choosing a side of.

The thing about "engaging with nuance" though is that people can in bad faith insist that any of their arguments are part of said "nuance". Emily Bender and many others have explained exhaustively how the way LLMs work is nothing like how human understanding and cognition work; the arguments for similarity are all coming from the (very wealthy and powerful) hype men and none of them are compelling. I think it's a reasonable stance to refuse to engage further with those claims and point out the extremely clear positions of self interest from which they're made. Capitalists know that they can purchase as much visibility and positive buzz as they need to wear down collective resistance to their lies and plunder. It is a very well-documented part of the playbook at this point.

That's true, I was thinking specifically in contexts like talking within a group to develop a better outward argument, or among friends or people you otherwise know aren't being disingenuous. Which I realize meant that I was taking this post to be in response to @shel's posts earlier about AI.

In a general context, yeah that defense mechanism is necessary for public spaces, but carrying it with you everywhere is disruptive.

You answered your own question here: your example of what nuance means in this context was rehashing a well-understood, settled issue because some snake oil salesman somewhere asserted some garbage they read from a comic book and we therefore need to ruminate on the topic as an open debate; presumably forever, cause that guy will effortlessly shit out more garbage tomorrow. When that point is for even a moment dispensed with it's on to some airy metacommentary on tone and sincerity, exactly like someone engaging in bad faith would do.

The tech huckster gish gallop is not a novel or exciting technique, and the guy who keeps good-faithedly dragging it in like a cat with a dead pigeon is not a positive contributor to any discourse.

Like I said, my point isn't about public social media or other contexts where bad actors abound. If you refuse to engage with friends or coworkers when someone asks why LLMs are bad and hasn't read up as much as you, that's fine, but you're not helping to convince them or growing your movement by doing so, you're just filtering your own interactions.

This isn't about bad actors. No bad actors exist here, purge the concept of bad actors from your mind. Of what significance is it whether someone is a bad actor or incredibly, achingly sincere, if they behave in exactly the same way? Your thesis appears to be that we all ought to be way more deferential to claims about chatbots being some protosapient New Man, apparently regardless of their merit, in order to not alienate people who... already fully buy into this extreme fringe position to the point of being offended when others are dismissive of it? Who are you persuading here, of what?

For most normal people a simple "no, that's stupid" is more than sufficient. A world where anyone feels obligated to spend their time with friends picking over the most tiresome lesswrong masturbation about how many simulated angels can dance on the head of a bayesian pin and pretending that this is all very valid and important work sounds unbelievably bleak to me. It's the kind of profound meaninglessness that two instances of the same chatbot could very profitably automate away without even needing a very good chatbot. If that's what you want for your life, however, may I suggest you seek out some of the billions of people who don't have a preestablished Take on AI beyond "it creepy :(" or "I told bing to draw a robot and it drawed it! :)", who will be both much more rewarding conversationalists than the guy who's only in it to pitch you on the latest successor to NFTs, and theoretically persuadable for more than the five minutes it takes them to go back to Reddit and load up on more memes.

I honestly don't understand how you got all that from me trying to say that developing effective strategies to combat LLMs requires actually thinking about the details of why LLMs are bad and talking to people about that? Including sometimes people who maybe don't know why they're bad but liked using them for some project or funny art thing?

It was an example of what someone might say if they're not sure whether LLMs are actually intelligent, not literally me arguing that they are. @vectorpoem helpfully linked to an article which clarified the example, which is exactly the kind of thing that helps convince people vs either refusing to engage at all or assuming they're advocating for the debate hell you seem to think I'm suggesting.

i am at least a little mad because theres a very important difference between theft and plagiarism and its, lol, its that you didnt make the goddamn bananas. even the farmers didnt make the goddamn bananas. the bananas grew themselves.

and im the type of person who thinks stealing food is okay, actually.

because the other important difference is that youll die if you dont eat.

i dont know, the nuance that is "thats a bad metaphor" may also literally be the type of nuance the op doesnt care about. i dont think they want to hear criticism about the "i dont care about nuance and criticism" post

worth noting that our definition of plagiarism also needs to be about social good and not just legally-enforceable boundaries. thinking specifically about Adobe's foray into generative tools; they've almost certainly built their model in-house without touching the legal landmine of Stable Diffusion, but they've done it by making all Creative Cloud images fair game for them to blenderize unless you opt out.

Big agree!

One of the strongest reasons why I don't buy the "it's just taking inspiration" argument from generative AI defenders is that when people take inspiration, they (usually) want you to know where it came from.

Generative AI doesn't cite its sources. It doesn't pay homage. It can't participate in a conversation or a movement.