Yes, this is inspired by recent discourse, but it's not directly about it; it's something I've written about elsewhere many times in the context of various scuffles. It's just obviously timely right now.
I see a lot of people in a lot of communities right now who see community rules as something that has to be algorithmic in nature. Your rules have to be hard and concrete, and if there's any room for a grey area, then you shouldn't have those rules. Obviously in the most recent discourse this was about lolicon/cub content: "what if you can't determine if it's underage?", "what if it's an elf character, is that considered human or not?", "how will you decide if it's porn or not?" And all of this is seen as a justification for why we shouldn't have rules against it.
But that's not how communities should work, and that's not how successful communities work. All rules have human input. Someone posting their gofundme a few days in a row could be considered spam by some sites; for other sites, spam is more like an account dumping a thousand links to a shady knockoff Nike seller. You have rules against racism; there's no easy algorithmic rule to determine if something is racist, so it's up to the moderator in charge to make that call. You have rules against trolling; it's up to the moderator in charge to determine if someone is trolling or posting sincerely. In a good community, a grey area will be handled with some grace--maybe the moderators are open to an explanation from the person who posted it, open to community input, willing to reverse the call if they believe they made the wrong choice. It's worth going through a little friction on grey areas in order to have rules that prevent the stuff that is so far away from the grey area that it can't even get cell reception anymore.
If you want an algorithmic view of community moderation, just let ChatGPT be your moderator. But you don't: you want human moderators that you can trust to do the right thing. Now of course, that requires two things:
- Y'know, human moderators you actually trust. If you don't trust them, then it doesn't matter what the rules are; they will do what they want. If you do trust them, then you don't need to worry about whether they'll be unfair with the rules as written.
- A strong community ethos. A community that shares what it stands for lets you know how it will handle grey areas. A community that is focused on not allowing bigotry is going to be more heavy-handed about removing things it thinks are bigoted; the grey area is enforced more harshly. A community that makes a big deal about free speech may do the opposite. The important thing is that you know what the community's vision of itself is. Now, it's easy for a site to say one thing and not actually follow it, but that goes back to the trust in #1.
I think it's an absolute joke to try to have a community where you refuse to make a rule because there could be grey areas. All of your other rules already have grey areas. Heck, Cohost has a rule saying they're allowed to decide whether your posts are funny. So I am not swayed at all by someone saying you can't make a rule against X because there are grey areas. That's really just saying you can't make a rule against X because it's hard to enforce, and most rules are hard to enforce. We enforce them because it makes the community better. If a rule gets rid of the 99% of site-worsening stuff that's nowhere near the grey area, then I think it's worth it to carefully consider the 1% that's left in the grey area. Most of the time, most of the content you're going to be moderating isn't very grey to begin with. But "this rule can't be applied perfectly in every scenario, so we shouldn't have it" can apply to almost every rule in a community, and at that point you might as well just not have rules and see how far that gets you.


