
LLMs ("large language models") need a lot of input to generate their statistical text models. Orders of magnitude more than is contained in a single webcomic, even one as large as Achewood. The way this works is, roughly, you train a "base model" on approximately the entire internet. Then you take this "base model" and you add additional training on top to give it a specific flavor, such as "Ray Smuckles". The article alludes to this:
The first challenge was formatting the data in a way the language model could use. Then, there was the matter of picking an underlying language model to train with Ray's voice. OpenAI's ChatGPT was a little too sanctimonious for Ray, who likes to color outside of the lines, Hall says. They wound up using a fine-tuned version of OpenAI's Davinci, which Hall estimates is about 60 times more expensive than ChatGPT.
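For concreteness, here's roughly what that process looks like with OpenAI's legacy fine-tuning API: you format your examples as prompt/completion pairs and attach them as extra training on top of the already-trained Davinci base model. This is a minimal sketch under those assumptions, not Onstad and Hall's actual pipeline; the file name and prompt framing are made up. The part that matters is that the internet-scale base model is a prerequisite of the second step, not something the character data replaces.

```python
# Minimal sketch of fine-tuning Davinci on a character's dialogue using
# OpenAI's legacy fine-tuning API (openai-python < 1.0). File names and
# prompt framing are hypothetical, not from the article.
import json
import openai

# 1. Format the data: legacy fine-tuning expects JSONL with
#    prompt/completion pairs.
examples = [
    {"prompt": "Setup line from a strip ->",
     "completion": " Ray's reply from that strip"},
    # ...one entry per dialogue exchange pulled from the archive
]
with open("ray_dialogue.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# 2. Upload the training file and start a fine-tune of the Davinci base
#    model: Davinci's existing training plus a thin layer of Ray on top.
training_file = openai.File.create(
    file=open("ray_dialogue.jsonl", "rb"), purpose="fine-tune"
)
fine_tune = openai.FineTune.create(
    training_file=training_file.id, model="davinci"
)
print(fine_tune.id)
```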
So, this is not just a matter of "he's only using his own writing so it's fine". The model Onstad is working with is exactly as plagiaristic as anything else OpenAI has put out, it just adds a layer of Smucklesiness on top of that. Whether you think "training a statistical model on the entire internet without authors' consent" is specifically plagiarism, otherwise exploitative, or totally fine is up to you. But you can't draw a clean line and say "training AI nonconsensually is bad but what Onstad is doing is okay."