

nex3
@nex3

strongly considering becoming one of those people who insists on never referring to a particular thing by its common name for political reasons because framing big probability models as "artificial intelligence" and pretending they're anything like HAL 9000 or Data is doing so so much damage to laypeople's understanding of a technology that primarily exists to make their lives worse


SamKeeper
@SamKeeper

I genuinely think this is important to do for art criticism, cause like, all the precedents for "AI Art" are actually things like the stochastic methods of the surrealists and dadaists. there IS precedent within art history, but it's stuff like Jean Arp dropping scrap paper onto a canvas.

maybe it's a niche concern, that so little of the discussion around this stuff seems informed by anything from the last actual century of art production and criticism, but it feels to me like it really distorts the conversation when instead of using a term like "algorithm art" we're using "artificial intelligence". it's just incorrect, in a way that feels designed to insulate practitioners and tech pushers from questions like "what is your actual art practice? what are you contributing to the process? why is your work so much less interesting than your average Bob Rauschenberg assemblage or Max Ernst rubbing?"



in reply to @nex3's post:

Blaming the matrices is a category error; the problem is the organizations that create them and how they are used.

I started to say the matrices are not fundamentally cursed but on reflection some of them do embed racism, copyright infringement, etc. But the problem is not the matrices, it's the systems and incentives that cause them to be created with disregard for such issues.

To be fair to the people working on those models, I think most are pretty hesitant to compare their models to true intelligence. I recall reading a Yann LeCun post where he pretty plainly states that ML models do not have the learning capacity of a baby, and that they are not even close to having it. The people who seem to brag the most about ML models are adjacent people like journalists and marketing folks, or people in other, tangentially related fields. As usual, people doing the actual work are shouted over by less informed people with the most to gain from the hype.