LLMs ("large language models") need a lot of input to generate their statistical text models. Orders of magnitude more than is contained in a single webcomic, even one as large as Achewood. The way this works is, roughly, you train a "base model" on approximately the entire internet. Then you take this "base model" and you add additional training on top to give it a specific flavor, such as "Ray Smuckles". The article alludes to this:
The first challenge was formatting the data in a way the language model could use. Then, there was the matter of picking an underlying language model to train with Ray’s voice. OpenAI’s ChatGPT was a little too sanctimonious for Ray, who likes to color outside of the lines, Hall says. They wound up using a fine-tuned version of OpenAI’s Davinci, which Hall estimates is about 60 times more expensive than ChatGPT.
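For the curious, here is roughly what that two-step workflow looks like in code. This is a minimal sketch, not Onstad's actual pipeline: it assumes the pre-1.0 OpenAI Python SDK that was current when davinci fine-tunes were offered, and the prompt/completion JSONL format the legacy fine-tunes endpoint used. The file name, example record, and separator conventions are illustrative, not his real data.

```python
# Sketch of the two steps the article describes: formatting the writing as
# training data, then fine-tuning a base model ("davinci") on it.
# Assumes the pre-1.0 OpenAI SDK; the prompt/completion text below is a
# made-up illustration, not actual Achewood training data.
import json

import openai  # pip install "openai<1.0"

# Step 1: format the source writing as prompt/completion pairs, the JSONL
# shape the legacy fine-tunes endpoint expected. The " ->" separator and
# " END" stop token follow OpenAI's documented conventions of that era.
examples = [
    {
        "prompt": "RAY SMUCKLES monologue, topic: breakfast ->",
        "completion": " Dude, a waffle is just a pancake with abs. END",
    },
    # ...hundreds more records, one per writing sample...
]
with open("ray_smuckles.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")

# Step 2: upload the file and start the fine-tune. Note what this does NOT
# do: it never touches the base model's original training. The web-scale
# corpus davinci was trained on is baked into the weights; the fine-tune
# only nudges them toward the new voice.
upload = openai.File.create(
    file=open("ray_smuckles.jsonl", "rb"), purpose="fine-tune"
)
job = openai.FineTune.create(training_file=upload.id, model="davinci")
print(job.id)  # poll this job until it yields a "davinci:ft-..." model
```

The point of the sketch is that the fine-tune is a thin layer. Everything underneath is still the everything-scraper.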
So, this is not just a matter of "he's only using his own writing so it's fine". The model Onstad is working with is exactly as plagiaristic as anything else OpenAI has put out; it just adds a layer of Smucklesiness on top. Whether you think "training a statistical model on the entire internet without authors' consent" is specifically plagiarism, otherwise exploitative, or totally fine is up to you. But you can't draw a clean line and say "training AI nonconsensually is bad, but what Onstad is doing is okay."
what it comes down to for me is that I think if a guy who wishes he could still be "The Achewood Guy" can no longer come up with Achewood ideas without a robot that spits half-formed Achewood ideas at him, I do not want that. That content (derisive) is depressing to me. I love Achewood so much I have a Ray Smuckles tattoo on my arm and this shit just depresses me to death. However I will admit that it does seem like something Téodor and Ray would come up with together and Beef would just be like 'oh well okay i guess might as well'
also I just never want a computer to tell me anything. Fuck Computers, even.