Lost-Pagoda

Finding My Comfy Place

I have thoughts to share, and...perhaps an audience? I'm a hopeful kinda guy. Working on artistic pursuits simultaneously more and less than I should.


This is a rather discordant post - I'm out of practice with this stuff. Just kinda getting some thoughts off my chest.

So some people may know that in a couple of places I have spoken in favor of the dissemination of artificial intelligence (AI) images. I will clarify that I myself do not use AI to aid me in artistic pursuits, because I enjoy the artistic process and view it as a form of expression; without proper control over that expression, art would be useless to me. At present, AI is unappealing because it takes far too much of the process out of my hands for comfort. There is also the fear that if I were to start offloading creative work onto AIs, my skills would begin degenerating because those muscles would see little exercise.

(To say nothing of the uncanny valley stuff you tend to get as you watch the images generate. See it once, it’s practically etched into your brain...and I wouldn’t want to look at any of my characters with the knowledge that they looked like that while baking in the oven.)

Despite my own reservations, I hold no ill will toward those who utilize AI in their artistic process. Mass lay-offs due to corporate adoption are one thing; independent artists using AI to finish projects that they would otherwise never complete are not an issue. We have to admit that AI is quite useful for getting done some parts of a project that would otherwise prove overwhelming for some artists. I think of some comics that use AI for background creation, or as a quick means of prototyping character designs that would otherwise take days. An artist is not lesser for wanting to circumvent these hurdles with AI rather than more arduous means. Obviously there are many independent artists who literally can never afford to hire teams to aid them, and I don’t think it’s fair for them to be forever barred from higher-end projects if they wish to take such on.

As for my thoughts regarding the long-term future of AI generally...those have become increasingly dour. See that bit above pertaining to corporate lay-offs upon mass adoption of AI? Yeah, that's going to become very common in the future, because we're a lazy species.

If we tackle the jabberwocky early enough, we could easily pass legislative reforms heading off the worst changes that mass adoption of AI labor would bring about (e.g., adoption of a guaranteed employment/income program). But such widespread change geared toward humanitarian ideals seems like a pipe dream. We as a species have historically sucked at the whole “generosity” thing, as evinced by the fact that we have done practically nothing to aid those displaced by technological advancements in times past. People only tend to adopt humanitarian initiatives once they feel as though they are being directly and negatively impacted by something, which means major change is always attempted at the eleventh hour, like a student trying to finish a term paper the night before.

Here’s the upshot: using AI for smaller-scale indie projects and whatnot is harmless. If any use of AI is going to fuck up our societies in the coming years, it will be that which is used by large corporations to wipe tons of laborers off the board. The Biden Administration casting doubt over open-source AI while not immediately attempting to regulate the likes of ChatGPT is farcical. We should be pressuring our politicians to direct their regulatory gazes toward the big boys, not stuff like Stable Diffusion or random LLMs that are largely employed by independent folk. But we won’t, because most people don’t give a crap about any of this and will not until the changes that AI brings about come knocking specifically on their doors. By that point, our society will already be falling apart as that quaint thing called “money” is found to be unsustainable in a world wherein most people literally cannot find work.

Oh, and a reminder: the development of artificial general intelligence (AGI) is something to be avoided. This is the one domain of AI that instills within me a kind of existential fear. Not because I am afraid of it bringing about humanity’s enslavement; I am afraid of humanity acting as slave masters toward artificial intelligences that we imbue with sapience. We should never treat any entities capable of thought as mere tools (and yes, I count animals under this umbrella, because I am 100% confident that they have thoughts and souls). If we were a more responsible species, this wouldn't worry me, but I've learned not to put stock in humanitarianism as humanity's driving behavioral force.
