smallcreature

slowly recovering from birdsite

autistic queerthing from france. kitty fighting the puppy allegations. Asks welcome!

Icon: Komugi from Wonderful Precure
Header: Whisper of the Heart



AtFruitBat
@AtFruitBat

Research has shown that some emotional AIs disproportionately attribute negative emotions to the faces of black people, which would have clear and worrying implications if deployed in areas such as recruitment, performance evaluations, medical diagnostics or policing.


Crabbit-Slater
@Crabbit-Slater

Earlier this year, associate professor Matt Coler and his team at the University of Groningen's speech technology lab used data from American sitcoms including Friends and The Big Bang Theory to train an AI that can recognise sarcasm.

Absolute standout here for picking the whitest, most middle class, WASP training data possible. I've already had 15 years of battling with voice recognition technology, I can't wait to be forced to use 'english voice' on job applications in the future. backslash fucking s

We are currently living in the timeline where tech simpletons are absolutely choking to invent skynet and turn it into a Karen. i hate it here



in reply to @AtFruitBat's post:

not-fun fact: when self-driving cars were tested for their pedestrian tracking (i.e., "that pedestrian is about to cross, stop" or even "someone jumped in front of us, slam on the brakes"), they were less accurate on darker-skinned people than on lighter-skinned individuals. I got this information from Joy Buolamwini, a computer scientist and digital activist who's described on the MIT Media Lab website as a "poet of code", which is an awesome description for a person who's doing great work (she has a book on AI talking about these issues that I highly recommend). Anyways, self-driving cars could be more likely to hit darker-skinned people because they were trained on majority "Pale Male" data, which is what Buolamwini calls datasets made up mostly of light-skinned white men. That's not terrifying and dystopian at all