Furry is doing AI Writing Discourse, which definitely seems fun and like something I want to get in on, because unlike with, say, AI-generated imagery, writing is something I’ve been doing for twenty years and enjoy, at least in theory. So. You know.
More to the point, I guess, I think the arguments against using AI are pretty unconvincing, but I think they’re bad in a way that is perhaps instructive, or that we are probably going to have to come to terms with.
I wrote the first short story I ever shared with other people back in college. At that time, I had a word processor, and what I’d do was retype the section I was working on, take it to the cafeteria, edit it while I ate dinner, and then retype it again the next day.
I do recommend that everyone try writing something with some degree of physicality to it at least once. If your handwriting sucks, like mine, then use a word processor or a typewriter; there’s something unique, I think, about physically marking up a piece of paper, or manually correcting your errors.
Of course, I don’t do that anymore. The aesthetic advantages of typing out stories are significantly outweighed by the ease of use of Scrivener—being able to cut and paste things without scissors, for one, to say nothing of being able to create hyperlinks between submissions, or directly version them, or compile them into new formats.
So, what we consider to be the acceptable frontiers of writing technology is a moving target, obviously. Is AI different?
Let me take a step back.
SoFurry bans the use of any AI-generated content in submissions. The policy offers no further clarification. Taken literally, it would presumably apply to stories containing characters with names produced by online tools, or stories with worldmaps created by fantasy map generators.
But, you know, I think obviously if somebody reports a story as containing a character with a generated name, we’re not going to remove the submission, on the grounds that that clearly isn’t the intent.
Writing Dog Patreon backers have access to the tool I use to procedurally build new words in a language based on the existing language. Those are languages I created myself, and a tool I created myself. That seems like it should fly under basically any metric, even though that is, technically, generated text.
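For the curious: the actual tool isn’t public, but a minimal sketch of that kind of procedural word-building might look like the following, assuming a simple character-level Markov chain. The seed words are placeholders invented for this example, not from any real conlang.

```python
import random
from collections import defaultdict

# Placeholder seed words invented for this example, not a real conlang.
SEED_WORDS = ["kavesh", "torille", "veshka", "rillekan", "toka"]

def build_chain(words, order=2):
    """Map each `order`-character window to the characters seen after it."""
    chain = defaultdict(list)
    for word in words:
        padded = "^" * order + word + "$"  # start padding and end marker
        for i in range(len(padded) - order):
            chain[padded[i:i + order]].append(padded[i + order])
    return chain

def generate_word(chain, order=2, max_len=12):
    """Walk the chain from the start state until the end marker appears."""
    state, out = "^" * order, []
    while len(out) < max_len:
        nxt = random.choice(chain[state])
        if nxt == "$":
            break
        out.append(nxt)
        state = state[1:] + nxt  # slide the window forward
    return "".join(out)

chain = build_chain(SEED_WORDS)
print([generate_word(chain) for _ in range(5)])
```

Everything it emits is recombined from the seed list, which is exactly the sense in which it counts as “generated text.”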
More to the point, I am certain that the automatic suggestions produced by spelling and grammar checkers are probabilistic. Formally speaking, any ban on AI content would have to extend to anyone who ever took a computer’s recommendation for what word they meant to use, or for which comma splice needed fixing.
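To make “probabilistic” concrete, here’s a toy corrector in the spirit of Peter Norvig’s classic spell-corrector essay. It is emphatically not how Word actually does it, just an illustration of the core idea: candidate words get ranked by how often humans wrote them in some training corpus.

```python
import re
from collections import Counter

# Tiny stand-in corpus; a real checker is trained on vastly more human text.
CORPUS = "the quick brown fox jumps over the lazy dog the dog barks"
WORD_FREQ = Counter(re.findall(r"[a-z]+", CORPUS.lower()))

def edits1(word):
    """All strings one edit (delete/swap/replace/insert) away from `word`."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def suggest(word):
    """Pick the known candidate the corpus says is most probable."""
    if word in WORD_FREQ:          # already a known word; leave it alone
        return word
    candidates = [w for w in edits1(word) if w in WORD_FREQ] or [word]
    return max(candidates, key=lambda w: WORD_FREQ[w])

print(suggest("teh"))  # -> "the": the highest-frequency candidate wins
```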
However, when I say this, I bet you’re not thinking: “wow, what a good point.” You’re thinking: “that’s asinine. Obviously spellcheck is different. That’s not what anyone means when they say ‘AI.’” Of course not. I agree. Indeed, maybe everyone agrees.
But.
Consider an argument from plagiarism grounds—i.e., that the central problem with LLMs like GPT-4 is that their training data was collected without the consent of those whose material was used. This can be extended not just to generative AIs but also to large-scale “big data” projects doing, say, sentiment analysis on a body of work.
Set aside whether or not this is legally acceptable. I think that it is, at least, a coherent and reasonable moral proposition to say that you shouldn’t use other people’s things without compensation or at least consent. It would, therefore, not be improper for your website to ban any AI-generated content to protect the intellectual property of artists.
On the other hand, I would be astonished to learn that the data behind Microsoft Word’s autocorrect is exclusively public domain, or exclusively sourced from a text library created with the explicit, informed consent and compensation of the work’s creators—again, setting aside the legal angle; I presume a good chunk of data collected from Office users is covered under the EULA, but we’re talking about the moral aspect here.
If the objection is that machine learning uses human creations without the consent of the humans—an objection not to some artifact of that machine learning, but to the fact that building the model implicitly requires some degree of plagiarism—then, from a consistency perspective, only purely human-edited work is permissible, too.
Which, you know, sounds ridiculous. But is it ridiculous because some kinds of uncompensated, unacknowledged transformation of someone else’s work are objectively okay, or is it ridiculous because spellcheck has been a feature of word processing for decades, and the benefits to writers significantly outweigh the drawback that some people who might not want to be part of a word prediction model are included in it anyway?
Perhaps it’s somewhere between the two. This is why I think the argument that OpenAI is “stealing from” artists is… unpersuasive, to say the least. It is even more unpersuasive specifically in a fandom that so readily embraces fanfiction. SoFurry does not ban fanfic, and, like… of course we don’t? It’s almost integral to the furry experience.
So that’s not to say that I think furry websites should ban fanfiction. But I think a site or publication that specifically declaims against machine learning as theft while permitting fanfics—or carving out exceptions for certain kinds of fanfiction, or for borrowing certain elements from the work of others—is not acting from a sincere perspective of respecting the sanctity of intellectual property; it’s acting from the perspective of disliking GPT-4.
For similar reasons I don’t really think privileging the purity of human creativity—the idea that artists using AI tools are not really artists—is particularly persuasive either. Creating a good story purely through generative AI is not easy; it requires a lot of work, and skills that I don’t personally have. Using those tools effectively is a talent, and that talent is probably itself creative.
To be clear, that doesn’t mean “you need to accept it.” You should be able to run things the way you want. Banning AI is clearly what the SoFurry userbase wanted. A poetry zine should absolutely be able to say that all their submissions must rhyme, or add that couplets and doggerel don’t count even if I might disagree. A themed anthology should absolutely be trusted with the judgment call of what counts as fitting that theme. Art and literature websites should be able to say “no AI-generated content” whether or not that position is internally consistent or has clearly defined boundaries.
To me, the problem is twofold.
One, I think that as a general rule we should be careful with the moral arguments ascribed to creativity—what creators “should” or “shouldn’t” do, or what constitutes being an artist. If machine learning tools are morally bad, as opposed to simply “against the rules,” then creators who use them are, by extension, immoral.
Right?
I’m fine saying that if your creative process involves torturing animals or throwing rocks at passing automobiles, that’s morally suspect, and it’s probably safe to say you shouldn’t do that. Please don’t do that. But we’re not talking about throwing rocks at cars, are we?
We are, at best, in a nebulous area analogous, I suspect, to the “no ethical consumption under capitalism” one, turned on its head. Should writers use open-source tools instead of Word or Scrivener? Should writers not talk about their coffee habit unless the coffee is sustainably sourced using fair labor practices? I mean. Maybe! I could entertain that argument.
But considering there is no accepted standard of what even constitutes art—the extent to which pop art is legitimately transformative or simple plagiarism, for example—I don’t think “models based on the work of others are by definition ethically unacceptable” is on safe ground.
Two, by extension, a discussion should happen around this, and whatever else can be said, I think it is correct to say that the conversation is overheated. One of the first replies to someone making that very observation over on Twitter was a “hey, [unrelated person], just thought you should know this person is saying something bad” callout.
To be clear, that kind of juvenile behavior is not the norm and I don’t want to tar everyone with that brush. But the moral framing has made it extremely easy for people to say very ridiculous things. If you read someone talking about their writing process and you feel compelled to posit, for example, that it’s the kind of language bigots use to justify racism, you know… go outside for a bit, my friend. You have been online Too Much.
We have all been online Too Much.
So when I say that the arguments against using AI are bad in an instructive way, what I mean is that I think they illustrate the extent to which the decision is going to be personal, and the lines will be fuzzy. For example, I think basically everyone agrees that a story straight from ChatGPT should be considered a step too far, and basically nobody thinks that spellcheck is a step at all, let alone a dubious one.
But if the red squiggly lines are allowed, what about using a tool like Grammarly? Is it fine to use storytelling dice to come up with an idea for a scene, or is it only permissible if I credit the creator of the dice, or is it never okay? Can I have moodboards with copyrighted content in my outline, or does that make my story compromised by art theft, too? Can I use AI to create a summary blurb for the story? What about alt-text for an image?
What if I have a hard time visualizing characters, so I use Midjourney to create one as a reference? What if I get stuck and ask ChatGPT for ideas—can I ever transform those ideas enough to make them my own? Can I put a character’s minced oaths and spoonerisms into GPT-4 and ask it to come up with a few more I can pepper into their dialogue?
I do not think there is a “good” answer to where that line is. I am not convinced that any single person I know would agree with every one of my own stances on those questions. Which, I think, gets to both why extremely hardline arguments about the inherent turpitude of AI are unhelpful, and why it is necessary to acknowledge that the conversation is ongoing, and that there is a degree of unproductive hostility producing genuine chilling effects.
And what about the arguments for using machine learning in writing? Why am I not talking about them? Are they better? I mean, I think the answer is ‘no,’ but with the exception of the “you’re just afraid of AI” one—which is silly and bad for the same reason “you just don’t care about plagiarism” is—they’re of a different kind.
Generally, in my opinion, it’s that they’re unfalsifiable. Will ChatGPT change the field of creativity forever? I don’t know. Who can evaluate that? By what criteria could I agree or disagree that it is, in fact, the way of the future? That’s a matter of opinion.
So is “I find ChatGPT useful.” If you think that AI-driven code review makes you a better programmer, or helps you when debugging, who am I to say “no it doesn’t”? In the writing case, if you find conversations with an LLM-powered chatbot or storyboarding with an AI tool inspirational, I mean, there is no fixed standard for what constitutes valid creativity. Maybe it’s part of your process!
It’s not part of mine, but y’know, to each their own, right? Alcohol is so ingrained in the authorial persona that “write drunk, edit sober” is a truism I’ve heard at countless panels. I don’t drink when I’m writing, but I’m not going to tell you that it’s an immoral position and you’re capital-w Wrong to do it* any more than I would tell you that you must accept it as a Valid Literary Technique.
* (brb tagging other accounts to say “hey did you know [writer] is literally encouraging substance abuse, hope you don’t think that’s okay”)
And, anyway, I have another reason for focusing on the arguments against, which is my history on this website. Having gotten this far, perhaps you are even thinking: “why are you simping for LLMs, Writing Dog? I thought you hated AI” and, I mean... I do? I think OpenAI is an abhorrent company that should not be trusted with literally a goddamned thing. I think Stable Diffusion is Bad, Actually. I think it’s fine not to allow content principally generated this way, and I am happy that SoFurry doesn’t.
In my opinion, as companies they are profoundly unethical. It is, conceivably, true that their training data counts as fair use, but I’m more concerned by the fact they don’t care. They could have ensured the content was entirely unencumbered or paid for, and chose not to because that would’ve been unprofitable, and they can get away with it.
That sociopathic disregard is what bothers me, the same way I am bothered that OpenAI doesn’t care that its product lies to people, and is being used in situations where those lies can and will cause real-world damage. I think that the largest companies involved are entirely unconcerned with ethics, full stop, in favor of blind greed that should not be encouraged or rewarded.
I am also concerned by the potential for AI to devalue creative works generally—the way the SAG-AFTRA strike seems to imply. It should be a given that artists can make effective, ethical use of artificial intelligence. It has to be acknowledged, though, that a significant use is going to be “get ‘good enough’ things for free that would formerly have required labor, so you don’t have to pay artists” and I think this is a genuine social harm.
To me, someone choosing to use Stable Diffusion to create cover art for a novel is a problem to the extent that it leads to artists not being employed to create that art. Someone using AI in a situation where this isn’t a possibility, though—I dunno, using a GAN to visualize a fantasy landscape to make the worldbuilding easier when they’re writing about it? I don’t see this as a problem.
Put another way, AI is bad because, and to the extent that, capitalism is bad. Hot take, I know. That, in my opinion, is where the actual ethical debate should lie: on the real-world impacts, mediated through the lens of corporate control over creativity and the creative commons.
I am curious about the ways that people use this new generation of tools, and I think any writing community that intends to stay relevant should be curious, too. We should also be cognizant of the risks—the proliferation of explicitly derivative content by grifters, the lack of accountability of the major players in this space, the exposure of personal data or IP fed back into the models by their users.
But this isn’t a moral question, it’s a practical one. Framing it in moral terms, or terms of artistic merit, is not just wrong, it’s wrong for the worst reason: it’s unhelpful for determining any way forward, which we have to do whether we like it or not.
