stosb

wearer of programming socks

  • she/her

mid 20s | bisexual | programmer | european


profile pic: a picrew by Shirazu Yomi
picrew.me/en/image_maker/207297
i use arch btw
xenia the linux fox -> 🦊🏳️‍⚧️
the moon
🌙

AtFruitBat
@AtFruitBat

Research has shown that some emotion-recognition AIs disproportionately attribute negative emotions to the faces of black people, which would have clear and worrying implications if deployed in areas such as recruitment, performance evaluations, medical diagnostics or policing.


bytebat
@bytebat

...which would have clear and worrying implications if deployed in...

stop pretending zhis is a future problem zhat will happen when we use AI more. it has clear and worrying implications now

like seriously in what universe is it reasonable to see zhat your automated systems are saying "hmm black people bad" and respond wizh "well i guess we better not use zhis model in zhis specific area"



in reply to @AtFruitBat's post:

not-fun fact: when self-driving cars were tested on their pedestrian tracking (i.e., "that pedestrian is about to cross, stop" or even "someone jumped in front of us, slam on the brakes"), they were less accurate on darker-skinned people than on lighter-skinned people. I got this from Joy Buolamwini, a computer scientist and digital activist who's described on the MIT Media Lab website as a "poet of code", which is an awesome description for someone doing great work (she has a book on AI about these issues that I highly recommend). Anyway, self-driving cars could be more likely to hit darker-skinned people because they were trained on majority "pale male" data, which is what Buolamwini calls datasets made up mostly of light-skinned men. That's not terrifying and dystopian at all