it is clear to me that no mechanical change to social media will reduce the toxicity. people will always try: they've tried changing the algorithm, using "AI", changing the gamification... and it doesn't work

and it doesn't work because there's only one real answer, and it's an answer none of the people running these sites want to hear. you just have to ban the people who cause trouble. there are people out there who simply want to cause trouble, hurt people, and fuck things up, and they will stop at nothing to do it. no mechanic will get in their way, because they will do their best to work around it.

...Jack Dorsey's run as CEO of twitter was characterised by this ignorant sense of optimism that everyone on the site was interested in approaching it in good faith and posting only the truth, and he seemed to believe that the only reason "bad" posts would appear was that someone with good in their heart had simply been misled, or had misunderstood the site's rules.

...once the toxic posts started coming without end, this eventually led him, in his ignorance, to assume that the neo-nazi point of view was "winning" in the public sphere because it was the viewpoint that was argued the best. "if you disagree with it," he would say, "why don't you make a counter-argument?" but it doesn't work that way, because trolls on the internet, as I already said, don't like rationality. they don't like arguing. they like to come in and destroy things, and the thing that pisses them off most is barriers that stand between them and destruction. they don't want to make friends and they don't want to help the site run smoothly. and if a site admin doesn't get that, then I just assume they're probably one of them


in reply to @thewaether's post:

it is clear to me that no mechanical change to social media will reduce the toxicity.

That runs counter to my experience, but I think we could just be drawing different lines around what counts as a mechanical change. I agree that recommendation algorithms and such can't be counted on to solve moderation problems, and site-level moderation will always be important.

In addition to site-level moderation, there are other, smaller-scale mechanical choices that shape how things play out socially.

If the block feature creates a big notice saying "you have been blocked by (specific name here)" that someone can screenshot and parade around like a trophy, that's a mechanic with social effects.

If blocklist information is public enough that people hesitate to use the feature for fear of being targeted specifically over who they've blocked, that's a mechanic with social effects.

If having a conversation necessarily broadcasts that conversation to all your followers and you inadvertently draw other people into a fight just by participating in one yourself, that's a mechanic with social effects.

If there are user-moderated groups where mods can remove off-topic posts and boot the trolls, that's a mechanic with social effects.

If moderation is all ToS-based and site-level, and all you have is tag subscriptions that are vulnerable to off-topic posting and other misuses that are technically within site rules yet socially disruptive, to the point that all people can do is yell at each other to please behave, that's a mechanic with social effects.

So in light of examples like these, mechanical changes can definitely affect the social atmosphere and, by extension, its toxicity. But no site should treat user-end options as a reason to dispense with site-level moderation, either.