trust me i do programming and music i just don't make great posts about it or complete most of it



alyaza
@alyaza
This page's posts are visible only to users who are logged in.

nex3
@nex3

LLMs ("large language models") need a lot of input to generate their statistical text models. Orders of magnitude more than is contained in a single webcomic, even one as large as Achewood. The way this works is, roughly, you train a "base model" on approximately the entire internet. Then you take this "base model" and you add additional training on top to give it a specific flavor, such as "Ray Smuckles". The article alludes to this:

The first challenge was formatting the data in a way the language model could use. Then, there was the matter of picking an underlying language model to train with Ray’s voice. OpenAI’s ChatGPT was a little too sanctimonious for Ray, who likes to color outside of the lines, Hall says. They wound up using a fine-tuned version of OpenAI’s Davinci, which Hall estimates is about 60 times more expensive than ChatGPT.
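For concreteness, here is a minimal sketch of the data-formatting step the article mentions, assuming the legacy OpenAI fine-tuning format that Davinci-era models used: one JSON object per line with `prompt` and `completion` keys, a separator at the end of each prompt, and a leading space on each completion (both conventions from OpenAI's old fine-tuning docs). The dialogue lines here are made up for illustration, not actual Achewood training data.

```python
import json

# Hypothetical example pairs: some strip context as the prompt,
# a Ray line as the completion. Invented for illustration.
dialogue = [
    ("Roast Beef: dude what are we even doing here",
     "Ray: we are LIVING, man"),
]

def to_finetune_records(pairs):
    """Serialize (prompt, completion) pairs as JSONL for the
    legacy OpenAI fine-tuning endpoint."""
    lines = []
    for prompt, completion in pairs:
        record = {
            "prompt": prompt + "\n\n###\n\n",  # separator convention
            "completion": " " + completion,    # leading space per the old docs
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_finetune_records(dialogue)
print(jsonl)
```

You'd then upload a file like this and kick off a fine-tune job against a Davinci base model; the base model's training data comes along for the ride regardless of what's in your JSONL, which is the point being made below.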

So, this is not just a matter of "he's only using his own writing so it's fine". The model Onstad is working with is exactly as plagiaristic as anything else OpenAI has put out, it just adds a layer of Smucklesiness on top of that. Whether you think "training a statistical model on the entire internet without authors' consent" is specifically plagiarism, otherwise exploitative, or totally fine is up to you. But you can't draw a clean line and say "training AI nonconsensually is bad but what Onstad is doing is okay."


SArpnt
@SArpnt

training a model on the entire internet and using a model already trained on the entire internet are two different things, especially when the model is already available to everyone

also a relevant factor is how the person using ai handles copyright on what they create with it (directly or indirectly), especially how they give others permission to use it for ai stuff

i don't really like the way copyright law works currently, but i generally don't mind as much when regular people use it in ways i don't like, because they're already shaped by how the corporations use it; same with capitalism and ai and other things. if someone uses one of the plagiarism bots for a shitpost that's fine by me, because it doesn't make sense to allow ONLY evil people to use it for evil things, and i only have a problem with ai in the first place because corporations are using it to get around their own copyright laws and to smush and scramble people in other ways

i'd probably put an in-depth take on what onstad is doing here but i'm too lazy and i don't actually know him or achewood well (i'll read the comic someday)



in reply to @alyaza's post:

fwiw, my feeling isn't anger that he's doing it, but pity that he's writing new achewood & putting out new merch & has a patreon & generally a new source of income... and he's funnelling that time & money back to OpenAI. which, like, that's his choice to make. but it still feels like a shame, & like the kind of thing that will, a few years down the line, feel similar to the cancelled Netflix show or the cancelled Oni Press book: a lot of effort into something that ultimately didn't pay off. maybe i'm wrong! i hope so! but that's what i took from the article.

in reply to @nex3's post:

Everybody has a different line on this stuff! I think it’s worth differentiating from smarterchild shit (even if, yknow, its output doesn’t feel much different) in that it’s built off the input of a whole bunch of unwilling participants. Whether that changes how you feel about it or not is up to you, but I get where folks think the tech is poisonous enough that they don’t want to support it at all, however indirectly

(unaddressed in comments on other posts so putting here) in addition to your point, which i agree with, the ecological cost of these things is still huge: closer to NFTs than to a cute little in-browser chatbot from a decade ago

Definitely true, but this will be addressed in the short term, I think. Models are already available that can run on phones with dedicated tensor hardware at usable speed, and almost no one needs more power than you can get on a beefy laptop, even if you're training rather than just running the model. These datacenter-scale, power-hungry LLMs are tech demos, really. But of course, if they can keep attention they'll keep being developed.

Porten ran through all 18,000 of Onstad’s Twitter followers and discovered many were Stanford graduates who’d gone into AI. ChatGPT had just come out. AI was hot. What if Onstad did something with those guys?

i feel bad for the guy struggling thru years of burnout and overbearing expectations but short of funneling fans to an NFT scam, this is just about the lowest integrity path out of that i can imagine. there are multiple highly understandable reasons people are clowning onstad for this and throwing "reactionary" at people criticizing him for it is not a winning argument. i'm so fuckin tired of this evil ass SV orchestrated hype wave and we're not even a year in.

$10,000 gets you maybe 83,000 queries, depending on the length of the response

oh god, this man is doing the equivalent of paying $9000/mo to live in the "damn bitch you live like this" apartment, and it's with his own creation. this is going to be such a cautionary tale in a few years' time.
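For scale, the quoted numbers roughly pencil out: $10,000 over ~83,000 queries is about $0.12 per query, which lines up with fine-tuned Davinci usage costing around $0.12 per 1K tokens at the time (my assumption, not a figure from the article), if each response runs about a thousand tokens. A quick back-of-the-envelope check:

```python
budget = 10_000   # dollars, from the article
queries = 83_000  # the article's estimate

cost_per_query = budget / queries  # about $0.12 per query

# Assumed pricing: fine-tuned Davinci usage billed at ~$0.12 per 1K tokens,
# so this per-query cost corresponds to roughly 1,000 tokens per response.
price_per_1k_tokens = 0.12
tokens_per_query = cost_per_query / price_per_1k_tokens * 1000

print(round(cost_per_query, 4))
print(round(tokens_per_query))
```

So every thousand-token Ray response costs about twelve cents, all of it flowing back to OpenAI.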