thewaether
@thewaether

it is clear to me that no mechanical change to social media will reduce the toxicity. people will always try: they've tried changing the algorithm, using "AI", changing the gamification... and it doesn't work

and it doesn't work because there's only one real answer and it's an answer none of the people running these sites want to hear. you just have to ban the people who cause trouble. there are people out there who simply want to cause trouble, hurt people, and fuck things up, and they will stop at nothing to do it. no mechanic will get in their way because they will try their best to work around it.

...Jack Dorsey's run as CEO of twitter was characterised by this ignorant sense of optimism that everyone on the site was interested in approaching it with good faith and posting only the truth, and he seemed to believe that the only reason "bad" posts would appear was that someone with good in their heart had simply been misled, or had misunderstood the site's rules.

...this eventually led him, in his ignorance, once the toxic posts started coming without end, to assume that the neo-nazi point of view was "winning" in the public sphere because it was the viewpoint that was argued the best. "if you disagree with it," he would say, "why don't you make a counter-argument?" but it doesn't work that way, because trolls on the internet, as I already said, don't like rationality. they don't like arguing. they like to come in and destroy things and the thing that pisses them off most is barriers that stand between them and destruction. they don't want to make friends and they don't want to help the site run smoothly. and if a site admin doesn't get that, then I just assume they're probably one of them


vectorpoem
@vectorpoem

Banning users for unacceptable speech requires you, a platform owner, to hold and publicly state values with regards to what things are unacceptable to say. Even the most libertarian-right freezepeachers have speech they won't tolerate - say, posting a photo of their home address with your current GPS coordinates a few blocks away and a gun sitting next to it - and it's adorable to watch them pretend they don't.

The reason they are so reticent to do this, though, is that they see it as being pure downside. If you state your values, people will criticize you where their values differ. You'll lose customers, beyond just the people you had to ban. Capital Brain tells you that losing customers is always bad, even if those customers are driving away other customers and making everyone miserable. You lose the safety of being able to pretend that you are "apolitical" and above it all. You're down in the muck with the rest of the human race, in the messy eternal squabble over what kind of world we actually want to live in - a domain where technology can almost never solve social problems, not really.

So that's why you could hardly torture a moral stance of any clarity out of Jack Dorsey. That's why Steam's user culture was overflowing with white supremacy for years. The unbearable, unavoidably accountable daylight of saying they believe some ideas are good and other ideas are bad, sizzling the vampire's flesh.


pervocracy
@pervocracy

Another factor here is pretending that a software-enforced ban is the only way someone can be pushed off social media.

Because it's true in the technically literal sense that you can post on a site where everyone treats you with maximum cruelty at all times. If every time you log on, you're confronted with huge crowds of accounts--more than you could ever block--saying the most disgusting imaginable things about you, implying that you do horrible things to children, that your body is so repugnant as to be comical, and whatever comes within a TOS's breadth of saying someone oughta do something about you... the post button still works! You have not been banned!

But at some point... really? Haven't you?

If a swimming pool is unlocked, but full of piranhas, is it actually available to you? Because it's not locked! The only thing holding you back is your own pain tolerance, medical budget, and fear of death--but those are your problems, those aren't hard barriers! And a few people are in fact foolish or tough or armored enough to swim in the piranha pool, so see? You're totally not banned from the pool!

If some of your users have to exercise tremendously more moral fortitude than others in order to participate in your social media site, you are fucking kidding yourself if you think everyone has equal access to it. You're just allowing the decision to be made by piranhas instead of moderators.


pervocracy
@pervocracy

Also, while I'm belaboring this metaphor--it doesn't do any good to tell people to just jump in there and start biting the piranhas back. Apart from the fact that this would still not be fun, it just isn't a symmetrical situation.

You're not going to be able to make someone feel truly, deeply hurt and afraid over being white or straight or able-bodied. You're not going to be able to rally bomb threats against a Christian school just by saying "they're trying to make kids Christian." You're not going to keep a cis man up all night worrying that someone will send his parents pictures of him wearing men's clothing. There are weapons that are simply not available to "both sides."

Pretending that all attacks are equally harmful and thereby equally harmless, and I put up with being called a cracker so why shouldn't you put up with _________, is... it's not even an interesting logical fallacy. It's just total wank.



in reply to @thewaether's post:

it is clear to me that no mechanical change to social media will reduce the toxicity.

That runs counter to my experience, but I think we could just be drawing different lines around what counts as a mechanical change. I agree that recommendation algorithms and such can't be counted on to solve moderation problems, and site-level moderation will always be important.

In addition to site-level moderation, there are other, smaller-scale mechanical choices that can make a difference to how things play out socially.

If the block feature creates a big notice saying "you have been blocked by (specific name here)" that someone can screenshot and parade around like a trophy, that's a mechanic with social effects (there's a rough sketch of the quieter alternative at the end of this reply).

If blocklist information is public enough that people are afraid to use the feature for fear of being targeted specifically for who they've blocked, that's a mechanic with social effects.

If having a conversation necessarily broadcasts that conversation to all your followers and you inadvertently draw other people into a fight just by participating in one yourself, that's a mechanic with social effects.

If there are user-moderated groups where mods can remove off-topic posts and boot the trolls, that's a mechanic with social effects.

If the only moderation is ToS-based and site-level, and all you have is tag subscriptions that are vulnerable to off-topic posting and other misuses that are technically within site rules yet socially disruptive, to the point that all people can do is yell at each other to please behave, that's a mechanic with social effects.

So in light of things like these, mechanical changes can definitely make an impact on the social atmosphere and, by extension, toxicity. But no site should treat user-end options as a reason to dispense with site-level moderation, either.
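
To make the first of those concrete: here's a rough sketch, in Python, of what a "quiet" block could look like. All the names (User, visible_posts, troll99, and so on) are made up for illustration; this is not how any particular site implements it. The point is that the block exists only as a server-side filter: no notification for the blocked person to screenshot, no public blocklist to scrape.

from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    blocked: set[str] = field(default_factory=set)  # kept server-side, never exposed through the API

def visible_posts(viewer: User, timeline: list[tuple[str, str]]) -> list[tuple[str, str]]:
    # Posts by blocked authors are silently dropped from the viewer's feed.
    # No "you have been blocked by X" notice is ever generated, so there is
    # nothing for the blocked account to parade around as a trophy.
    return [(author, text) for author, text in timeline if author not in viewer.blocked]

alice = User("alice", blocked={"troll99"})
timeline = [("bob", "nice day out"), ("troll99", "bait"), ("carol", "cat picture")]
print(visible_posts(alice, timeline))  # troll99's post simply never appears for alice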

in reply to @vectorpoem's post:

Important to remember that pretending to have no content moderation or even the understanding that such a concept existed was the keystone of social media companies' defense against legislators coming after them for, say, all the child porn. Or helping coordinate a pogrom for any police state with a buck. If they were seen to coherently enforce policies against anything less vile, so the toddler-logic goes, they'd be taking responsibility for all of it, and being responsible for something like Facebook or Twitter is the kind of thing tribunals hang people for.

in reply to @pervocracy's post:

I've seen two approaches to moderating bad actors that, so far as I know, haven't been tried on any major social media platform - one is "disemvoweling," which is what Gawker did to toxic commenters: their comments stayed up, but with every vowel stripped out, leaving them technically posted and barely readable. Problem heavy-handedly solved (and just as heavy-handedly worked around). The other is "hellbanning," which iirc Stack Overflow implemented - the problem account can continue to interact, or have the appearance of interacting, but nobody else sees them. They're effectively locked in a hell of their own making.
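
For anyone who hasn't run into these in the wild, here's a minimal sketch of both mechanisms in Python. The function names are made up and this is not how Gawker or Stack Overflow actually built theirs; it's just the shape of the idea.

def disemvowel(comment: str) -> str:
    # The comment stays posted, but with every vowel stripped out it's barely readable.
    return "".join(ch for ch in comment if ch.lower() not in "aeiou")

def post_is_visible(viewer_id: str, author_id: str, hellbanned: set[str]) -> bool:
    # A hellbanned account's posts render only for the author themselves;
    # everyone else's view quietly omits them.
    return author_id not in hellbanned or viewer_id == author_id

print(disemvowel("You are all sheep and I alone see the truth"))
# -> "Y r ll shp nd  ln s th trth"

hellbanned = {"troll99"}
print(post_is_visible("bob", "troll99", hellbanned))      # False - bob never sees the post
print(post_is_visible("troll99", "troll99", hellbanned))  # True - the troll still sees their own posts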

I don't like disemvoweling because my brain always tries to solve the puzzle. ☹️

Hellbanning, however, is hilarious. Maybe if I realllllly squint I can see how it might become a safety situation (escalating threats by the damned user go unnoticed, the user reaches out for help in a crisis via the platform and isn't seen), but I don't think social platforms are responsible for being someone's only possible way of communicating with the outside world.

Twitter has something "similar" with its "quality" filter, but I have no idea what the criteria are for something to be snagged by it. I just know that about 99% of what I've seen hidden by it is chaff.