for an awful lot of reasons, the notion of the "Paperclip Optimizer" has real purchase right now. it's the precursor to what might eventually become "grey goo" or, per the Culture novels, a "hegemonizing swarm": a dumb system designed to do nothing but expand its capacity to convert everything into a reflection of its initial programming, i.e., turn all the matter in the universe into paperclips.
there was even a web game about it!
this article I wrote a couple years ago is about that game. I think it's worth reposting now 'cause people keep talking about the paperclip optimizer as a parable about dangerous dumb systems. and that's true, that's what the game is about! the game is very much a deliberate allegory meant to explain why you should support "friendly AI" grifters!
what this article proposes is: maybe you shouldn't do that, actually, because behind every "rogue AI" is some capitalist somewhere making a decision to make All The Money, damn the consequences. this is an article about playing Universal Paperclips radically wrong--wrong mechanically, and wrong emotionally. what I think falls out when you shake the game that way is a bunch of unstated assumptions about shit that's acceptable for human beings to inflict on each other but somehow monstrous when a machine does it.
like, I get that we're all attempting to be more materialist in our analysis, and that's good, but sometimes it feels like we're sliding into a kind of Lovecraftian understanding of the corporation, like it's just this incomprehensible machine working for itself. but at every stage there are people making decisions, and they COULD be held accountable! and also, there's a designer of this game making decisions about where to place content emphasis, in order to put a thumb on the scale of the parable. you don't HAVE to inflict mind-control drones on humanity in the game any more than people HAVE to use deceptive advertising practices.
and by the same token, like, it's actually perfectly reasonable for someone who isn't in STEM to look at a search engine spitting out wrong results and say hey, this search engine is bad! you can say "ah, but technically machine learning is not intended to output correct results, you've made a Category Error" all you want; a human being sold this to other human beings as an intelligent search engine, and that sale was based on a whole series of lies. the technical explanation can be helpful, but it's not the point. the point is that a human being attempted to harm other human beings with technology, something we've been doing roughly since the opening sequence of 2001: A Space Odyssey.
anyway, there are a lot of weird, maybe kinda heterodox perspectives in this article that I haven't really seen anywhere else, but that still guide a lot of my thinking about this tech. read it if it sounds interesting, I guess!
