if you make me interact with a chatgpt bot, me and my friends are going to smash it with hammers
"...the company tells me [automatic moderation] will soon get an AI upgrade that allows it to not only follow the letter of the law, but interpret the context of messages."
alright, party's over everyone. neural network based moderation has never worked before and it sure as shit isn't going to work now. get out before you get kicked out.
I'm thinking about this part because, while jailbreaking a few things, I recently interacted a bit with the massive amount of piracy that gets organized through discord (from waitlists to get invites to torrent sites, to verification stuff, to just straight uploading shit into discord). If this "AI" is functional enough to start cracking down on all the piracy-related discords, they'll probably just move over to Line or something, but still, I'm curious/concerned to see what happens there.
Good news: Not only are language models notoriously bad at "interpret[ing] the context of messages", but as they're designed right now, it's effectively impossible for them to interpret linguistic context at all. All they really have to work with are statistics about which words happen to occur alongside other words (think every machine translation ever designed), and according to a number of experts, you need knowledge of the actual circumstances a given sentence is referring to - knowledge beyond language itself - to begin to understand context.
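To make the "all they have is word statistics" point concrete, here's a minimal sketch (purely illustrative, not Discord's actual system, and the word weights are made up) of a bag-of-words moderation scorer. It assigns the same score to a sincere threat and to a joke between friends, because it only ever sees which words appear, never the circumstances behind them:

```python
from collections import Counter

# Hypothetical per-word "badness" weights, the kind a purely statistical
# model might learn from which words tend to co-occur with removed messages.
BAD_WORD_WEIGHTS = {"smash": 0.7, "hammers": 0.6, "kill": 0.9}

def moderation_score(message: str) -> float:
    """Score a message using only word-occurrence statistics."""
    words = Counter(message.lower().split())
    return sum(BAD_WORD_WEIGHTS.get(w, 0.0) * n for w, n in words.items())

# Same words, wildly different real-world context, identical score:
sincere = "i will smash your car with hammers"
joking = "me and my friends are going to smash the bot with hammers lol"

print(moderation_score(sincere))  # 1.3
print(moderation_score(joking))   # 1.3 -- the scorer can't tell the difference
```

Real systems use fancier statistics than this, but the underlying limitation is the same: nothing in the model knows what a hammer is or whether anyone actually means it.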
Bad news: They're being given the power to enforce the law regardless, and Discord isn't likely to make its AI moderation open to any sort of appeal. This thing is supposed to save labor, after all.