If you look up "paperclip maximizer" you'll find loads of references telling you that it's a cautionary analogy or thought experiment about AI.
But there's nothing about the paperclip maximizer scenario that requires AI at all: any algorithm hooked up directly to the means of achieving its goal can go there. (The unthinking magic in "The Sorcerer's Apprentice" is a water-transport maximizer.) The only reason "intelligence" enters into it is that we're stupid/arrogant enough not to take threats from non-intelligent agents seriously, so you have to add "AI" to your thought experiment to make it appropriately scary. (What scares you more: violent crime, or your wet bathtub? For the vast majority of us living in the US, the wet bathtub is much, much likelier to kill you.)
On the other hand, many of the "proposed solutions" to the paperclip maximizer problem (or "squiggle maximizer," as some people seem to have renamed it), coming from the kind of philosophy enthusiasts who worry that someone they know will think of Roko's basilisk, absolutely require full general AI of a kind we aren't even vaguely close to yet. You can't build basic psychological impulses or restrictions into something that's barely a few hundred linear equations.
So we have the paperclip maximizer threat, but no useful solutions from the people who claim to be doing important philosophizing work on it. One might therefore suspect that either there are already examples of this having happened, or the threat is overblown.
Which brings me to Facebook.
For three years after Facebook was allowed into Myanmar, the feed of everyone in the country was run purely by the FB algorithm with no moderator intervention (because the company didn't want to spend money hiring Burmese-speaking moderators).
The algorithm maximized engagement, per its design. It boosted scapegoating conspiracy theories until they turned into an actual genocide. It literally resulted in people being burned in service of maximizing engagement.
The paperclip maximizer is here. It has already killed hundreds of times over. We could turn it off if we wanted to, and we won't, because its actions are propping up the material success of the same social class as the people sitting around Silicon Valley having Very Serious Thoughts about the future of AI and how we're all in mortal danger from spicy autocomplete soon being able to replace vapid opinion piece writers.
