ChaiaEran

The INCORRIGIBLE Chaia, BSc

Esoteric goth-y femme. Occasionally speedy. Liker of randomizers. Queer Jewish gremlin. I make YouTube videos and stream on Twitch! Also the developer of @PushBlockPitDevlog.

My Twitch going live posts are over at @ChaiaGoingLive.

 

We would rather be in the diaspora; we will liberate ourselves.

That which is hateful to you, do not do to your fellow. Free Palestine.

[Profile badges: "Chaia Eran: Cute as hell, Queer AF" · Trans rights now · Yid · Free Palestine now · This is an anti-NFT site · Keep the web free, say no to Web3 · Learn HTML · Firefox now · RWBY · Game Boy Advance · Pro-AO3 · Fight for open web standards, online privacy, and against monopolistic practices; stand up to Google · Questions or comments? E-mail]


eramdam
@eramdam
Sorry! This post has been deleted by its original author.

Lizardguy64
@Lizardguy64

The difference is that when humans take existing things and remix them to attempt a new thing, it usually looks good but makes a lot of stupid and terrible people mad. When AI does it, it looks terrible and makes a lot of stupid and terrible people happy.


eramdam
@eramdam

Bad Art (whatever the fuck that means) is allowed to exist. Someone's shitty drawing is still art even if it's not objectively "good".

'AI' systems might get better (or already are) at making art that looks "good", but they will still be unethical because of the way they're built and why they exist in the first place. Most of those tools aren't built with a "let's help artists be more efficient" mindset but with a "you won't need to pay an artist anymore, instead pay us 20 bucks a month so you can use our model built on stolen work" one.

That's the issue.


nora
@nora

i think it's also important to realize that AI basically only learns from being given more data; it can only learn from its own results if a human is hand-tweaking it, and if you feed it back its own work unfiltered, it's just like feeding an animal its own shit. it's totally valid to produce bad art for its own sake, but often the point of making bad art is to get better at making art with practice. AIs do not experience practice like we do.


pendell
@pendell

Predictive Algorithms (as we should be calling them, since they are not even approximating any sort of Intelligence) do not learn, they do not study, and they cannot absorb lessons about the structure and form of art, let alone approach the spark of humanity that brings art to life.

All Predictive Algorithms do is predict. They take an input and produce an output, like any computer program. It's just that the input is as much human creativity as you can possibly scrape together, through legal or illegal means, and the output is something that through sheer statistical probability kind of resembles the stuff that you put in.

Predictive Algorithms only "improve" over time because the humans developing them give them more and more data with more and more precise tagging and metadata, and tune the prompt system to draw from more of the correct specific input data.

These systems are the computerized implementation of the Infinite Monkey Theorem: a monkey allowed to mash random keys on a typewriter an infinite number of times will eventually clack out the complete works of Shakespeare. The theorem is an observation that, given enough random attempts, you can generate just about any improbable output as easily as anything else. That's Predictive Algorithms. They have no soul; they're simply a typewriter mashing random keys an effectively infinite number of times in a matter of seconds.


jcolag
@jcolag

Shockingly, it's both more and substantially less nuanced than this.

Whatever you want to call these systems, they're what AI people call "neural networks" under the covers, where you have a bunch of teeny programs (or program-like thingies) that act like your high school biology class taught you neurons do: aggregate a bunch of inputs and, if the total exceeds a threshold, send output.
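
A minimal sketch of one of those "teeny programs", assuming nothing beyond the description above (the weights, threshold, and inputs are made up for illustration):

```python
# One artificial neuron: sum the weighted inputs, fire if the sum clears a threshold.
def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Illustrative numbers only.
print(neuron([0.5, 1.0, 0.2], [0.4, 0.7, -0.3], threshold=0.6))  # 0.84 > 0.6, so it fires: 1
```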

If you have enough of these, you can do fancy things that look like they're on the path to biomimicry, like detecting objects in an image or performing tasks that we usually oversimplify as "broom-balancing." For an example of the latter, consider backing up an eighteen-wheeler, where you need to keep compensating for where the trailer might go. There's a lot of potential in those spaces to make a lot of processes safer or just more efficient.

Starting with AlphaGo, though, we saw a weird shift in the industry. With the resources to have orders of magnitude more "neurons" than I could play with when farting around with neural networks in the 1990s, instead of building better versions of the things that neural networks are good at, they started trying to train them to act like people. And here's where it gets kind of funny, in my eyes: we use neural networks for the dumbest possible things, wasting computer time by having the neurons kinda-sorta figure out what we already knew how to do in the 1960s.

That is, when they rigged up AlphaGo to show how it was managing its model of the game state, it was just a plain tree, where you figure out the likeliest score based on each possible move, exactly how every chess AI has worked since the 1970s. That should have been an important result in a different way, because everybody was sure that you couldn't "solve Go" using classical AI algorithms.
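
For illustration, "a plain tree, where you figure out the likeliest score based on each possible move" looks roughly like the generic minimax sketch below; this isn't AlphaGo's or any particular chess engine's code, and the `game` interface is a hypothetical stand-in:

```python
# Generic minimax over a game tree: score each move by recursing into the
# positions it leads to. `game` is a hypothetical interface, not a real library.
def minimax(game, state, depth, maximizing):
    if depth == 0 or game.is_over(state):
        return game.score(state)
    scores = [
        minimax(game, game.play(state, move), depth - 1, not maximizing)
        for move in game.legal_moves(state)
    ]
    return max(scores) if maximizing else min(scores)
```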

Instead of that impressive outcome, though, people started setting neural networks to solving other solved problems, so now we have an entire generation of software that pretends to be a Markov process. Is it nicer than the code that I wrote in the 1990s? Sure, because it doesn't always work word-by-word, and it doesn't even need to work with words at all. But that's not an artifact of the neural network; we could do that in ordinary code, if we wanted to, and it would burn far fewer CPU cycles than simulating billions of neurons to give you the wrong answer.
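
The word-by-word Markov process being compared to here really is tiny; a minimal sketch (the sample text is made up):

```python
# A bare-bones word-level Markov chain: record which words follow which,
# then generate text by sampling from those observed continuations.
import random
from collections import defaultdict

def build_chain(text):
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10):
    word, out = start, [start]
    for _ in range(length - 1):
        options = chain.get(word)
        if not options:
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat slept on the rug")
print(generate(chain, "the"))
```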

And because we apparently have an entire industry that doesn't see this, we have all the Big-Five tech companies (and others) racing to see who can have the biggest neural network (maybe compensating for something, after the Billionaire Space Race flopped) and lose money as they spend significantly more on electricity than any subscription fees that they collect...



in reply to @eramdam's post:

A comment has been hidden by the page which made this post.

it's (almost) always people who have never made any art themselves, too, so they actually don't know a single fuckin thing about the artist experience. they just know "machine learning" has the word "learning" in it and extrapolate that it must be just like a human

in reply to @eramdam's post:

Someone's shitty drawing is still art even if it's not objectively "good".

THANK YOU! I wanna scream this from the rooftops

I think this is the main appeal image gens have for people who aren't really familiar with a Creative Process, and it's being leveraged by their developers to increase reach

Like "now you can finally make art with our tool and express yourself", no buddy you always could. Even if it looked like shit you could still express an idea, and much more about you, when you do it. It can preserve and transfer feelings in a way where they can even morph between person to person, without requiring the thing to Look Impressive

And all of that is being discarded in favor of the one aspect of creative works, out of many, that can quickly sell. If there's people that think less skillful art is worse, or not even art, there's people that will feel gatekept by an imaginary gate, and will be taken advantage of by things like these ((vaguely gestures))

in reply to @jcolag's post:

You’re misjudging AlphaGo a bit: AlphaGo didn’t reinvent that tree search. The tree search is a hardcoded bit of the algorithm; the neural network is just used for deciding how good the nodes of the tree are and which branches to look at next.
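
Roughly, the division of labor looks like this; a simplified sketch, not AlphaGo's actual Monte Carlo tree search, and `value_model` and `game` are hypothetical stand-ins:

```python
# The tree walk itself is plain hardcoded logic; a learned model is consulted
# only to score positions and decide which branch looks most promising.
# `value_model` and `game` are hypothetical stand-ins.
def best_line(game, state, value_model, depth=3):
    line = []
    for _ in range(depth):
        moves = game.legal_moves(state)
        if not moves:
            break
        scored = [(value_model(game.play(state, m)), m) for m in moves]
        _, best_move = max(scored, key=lambda sm: sm[0])
        line.append(best_move)
        state = game.play(state, best_move)
    return line
```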

Ah. I haven't looked at it in a while, and remember an article praising it for "figuring out" that it should use a tree, but that could've referred to an earlier version of the project or figuring out how to use it. Thanks!

GPT-4 has an estimated trillion-plus parameters, and it cost OVER $100 million to train, with god knows how many A100s burning god knows how much energy and using god knows how much water to cool the servers, because it's significantly cheaper to buy cold water from city mains than to just cool the damn things with refrigeration. we could have just as well put that towards, i don't know, curing cancer, folding proteins, but nope

Right? Tons of money and other resources to inefficiently do something that's mildly amusing, instead of fixing literally any problem.

Like, even in the "techbro sector," why aren't they using this horsepower to find security flaws in websites? That'd make them a fortune helping people, and isn't much different from learning games. But that'd be silly.