DoomMate

Morphic Korps writer

Writer, TF Fan, The other kind of TF Fan, nerd for superheroics

Major writer for the Morphic Korps setting



atomicthumbs
@atomicthumbs

i found a guy on twitter who is the exact opposite of cohost. he's an "ML researcher" who has created an app called "Furry Block List" that automatically and irreversibly blocks the 5% of people in his "furry accounts" dataset, amounting to thousands of users, that Google's Perspective API identifies as toxic.

screenshot above is the sort of comment that Google's Perspective API thinks is toxic.

and he insists he's doing the right thing, because he used a "freely available" API to do it. and that he is correct for doing so, because there's too much data on twitter to not use computers to do things. and that the computer is correct, because the computer is correct.

i am so glad there is now a website that would presumably ban this motherfucker on sight if he tried to pull this sort of thing


StrawberryDaquiri
@StrawberryDaquiri

in reply to @atomicthumbs's post:

i agree with thumbs i think he might actually think this. he could just be a learned-to-code hype follower who just credulously chugged all the corporate propaganda for idiots and doesn't actually know fuck all about anything

Something I just wrote on birdsite:

BTW, even "toxic" as a metaphor is not a sufficiently complete way to judge the acceptability of particular speech.

  1. "Toxic" to whom and for what purpose?
  2. The dose makes the poison.

"Toxicity" isn't enough.

I'm not even an ML guy, but I've done the basic courses and worked on a real-world ML use case.

The most interesting thing to me, while working on it, was that our data was biased just from being gathered internally. The moment we started adding data from a contracted third party, the model changed.

The third party, at most, was gathering data at a different location, and maybe the subjects were in a different sitting position.

Bias is one of the first things you learn when studying the fundamentals of ML.
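A minimal sketch of the effect described above, with made-up numbers in plain Python: a toy classifier fit on "internal" data alone picks one decision threshold, and mixing in third-party data whose collection setup adds a small systematic offset moves that threshold, even though the labels are identical.

```python
# Hypothetical illustration of collection bias. One 1-D "sensor reading"
# feature; the third party's setup (different location, different sitting
# position) adds a small constant offset to every reading.

def fit_threshold(positives, negatives):
    """Pick the midpoint between class means as a decision threshold."""
    mean_pos = sum(positives) / len(positives)
    mean_neg = sum(negatives) / len(negatives)
    return (mean_pos + mean_neg) / 2

# Internal data only.
internal_pos = [5.0, 5.2, 4.9, 5.1]
internal_neg = [3.0, 3.1, 2.9, 3.0]
t_internal = fit_threshold(internal_pos, internal_neg)

# Third-party data: same underlying labels, readings shifted by +0.5.
offset = 0.5
third_pos = [x + offset for x in internal_pos]
third_neg = [x + offset for x in internal_neg]
t_mixed = fit_threshold(internal_pos + third_pos, internal_neg + third_neg)

print(t_internal)  # 4.025
print(t_mixed)     # 4.275 -- the boundary moved, no label changed
```

Nothing about the task changed; only where the data came from did, and the model is different.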

The thing that makes me suspicious is the way he's seemingly chosen the furry community out of thin air and dropped in our laps a tool destined to create a shit storm, even going so far as to make a website dedicated to it. Especially with goons like Simba joining in.

Still, you may be right and it almost feels worse somehow.

tech-optimism like that is seriously frightening to me and gives me extreme Nazi vibes, in that they famously justified atrocities (e.g. against people with mental illness) by citing scientists, especially in medicine, as authority

Not Perspective! I know that AI; I wrote a post about it when it was in the news recently. 1) It has flaws in what it determines to be toxic (some slurs were found not to trigger it, but swearing almost always did), and 2) this isn't a good use for it: it only outputs probabilities of being toxic, and there should be human review of any results you get from it.
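For context on that second point: Perspective returns a probability-style score per attribute, not a verdict. A rough sketch of what a call looks like (the endpoint and request/response shapes are from the public Comment Analyzer docs; the threshold and the human-review gate are my own illustration, and the score value is made up):

```python
# Shape of a Perspective API (Comment Analyzer) request. The real call is a
# POST to https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze
# with an API key as a query parameter.

def build_request(text):
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def summary_score(response):
    """Pull the TOXICITY summary score: a probability, not a yes/no."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def needs_human_review(response, threshold=0.8):
    # Anything above the threshold should be queued for a person to look at,
    # not auto-blocked. The 0.8 cutoff here is arbitrary.
    return summary_score(response) >= threshold

# Example response in the documented shape (score is invented):
example = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.83, "type": "PROBABILITY"}}
    }
}
print(summary_score(example))       # 0.83
print(needs_human_review(example))  # True
```

The whole point of the score being a probability is that someone has to pick the cutoff, and that choice is a human judgment the blocklist tool is pretending the computer made.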

the tech sector's credulity toward "ai" snake oil is one of the things that was just... a source of endless background despair on twitter

however i see this thing doesn't even list who it's going to block (or any other information really) anywhere so i'm also gonna lay some blame on people who actually fed their credentials to a mystery box. surely we already learned this lesson once

i eagerly await his followup paper on the "impact" of this work on the communities he inflicted it on

i had to point out earlier that twitter itself already uses this exact species of snake oil bullshit to silence "abusive" speech, with its "most people don't send replies like this" mechanism. if you push through that dialog your reply is hidden by default and generates no notifications. it already makes it almost impossible to call out Nazis. every time it pops up it's for using softer language. you can't even call someone an idiot now

ugh do they really? i've never noticed such replies being hidden but i also have twitter filtered in such a way that if i fire one off, i don't expect notifications for them anyway. having five-digit follower count might also tip the calculus, i don't know

i had actually thought that was kind of nice because it adds a bit of friction without actually stopping you from doing anything. of course i have always told it to fuck off so i don't know if it's actually improved anything at all

i'm not 100% sure because i don't want to shadow realm my entire account testing it but it's certainly true at least sometimes, based on the same or similar metrics. i had to completely redo a whole reply because it was aggressively hidden, almost impossible to find without having a link to it in incognito. maybe that was just an exceptionally mean reply (it was and i was right to make it) and there's like some kind of Limit Break Overdrive Hyper Damage threshold that's like idk 50% higher than the normal "rude" threshold that summons the friction dialog.

i think the friction dialog is only nice in that it gives some haptic feedback when you're pushing up against those limits but i really don't like how low the bar is becoming

the fact remains that there is already definitely a sentiment-based quality filter that hides replies and it's not very good