akhra

๐Ÿด๐Ÿšฉโšง๏ธโšขโ™พ๏ธฮ˜ฮ”โšช

  • 🍯 she/her 🐲 xie/xer 🦡 e/em/es

wenchcoat system:
๐Ÿฏ Akhra (or Melli to disambiguate), ratel.
๐Ÿฒ Rhiannon, drangolin.
๐Ÿฆก Lenestre, American badger.

unless tagged or otherwise obvious, assume 🍯🐲🦡 in chorus; even when that's not quite accurate, we will always be in consensus. address collectively as Akhra (she/her), or as wenchcoat (she/her or plural).

💞@atonal440
💕@cattie-grace
❤️‍🔥(not #onhere)
🧇@Reba-Rabbit


Discord (mention Cohost, I get spam follows)
@akhra
Discord server ostensibly for the Twitch channel, but with Cohost in hospice, y'know what, let's just link it here
discord.gg/AF57qnub3D

AtFruitBat
@AtFruitBat

Research has shown that some emotion-recognition AIs disproportionately attribute negative emotions to the faces of Black people, which would have clear and worrying implications if deployed in areas such as recruitment, performance evaluation, medical diagnostics, or policing.



in reply to @AtFruitBat's post:

not-fun fact, when self-driving cars were tested on their pedestrian tracking (i.e., "that pedestrian is about to cross, stop" or even "someone jumped in front of us, slam on the brakes"), they were less accurate on darker-skinned people than on lighter-skinned people. I got this information from Joy Buolamwini, a computer scientist and digital activist who's described on the MIT Media Lab website as a "poet of code", which is an awesome description for a person doing great work (she has a book on AI covering these issues that I highly recommend). Anyways, self-driving cars could be more likely to hit darker-skinned people because they were trained on majority "Pale Male" data, which is what Buolamwini calls data that's mostly light-skinned white men. That's not terrifying and dystopian at all
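
for the curious: the kind of audit Buolamwini's work popularized is basically disaggregated evaluation, i.e., score the model separately per demographic group instead of reporting one overall accuracy number. here's a minimal sketch in Python of what that looks like; every record, group label, and number below is made up purely for illustration, not taken from any real study or dataset:

```python
from collections import defaultdict

# each record: (skin-tone group, was there actually a pedestrian?,
#               did the model detect one?) -- all values are fake/illustrative
records = [
    ("lighter", True, True),
    ("lighter", True, True),
    ("lighter", True, True),
    ("lighter", True, False),
    ("darker", True, True),
    ("darker", True, False),
    ("darker", True, False),
    ("darker", True, False),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, is_pedestrian, detected in records:
    if is_pedestrian:            # recall only counts actual pedestrians
        totals[group] += 1
        hits[group] += detected  # True counts as 1, False as 0

# report recall (and its complement, the miss rate) per group
for group, total in totals.items():
    recall = hits[group] / total
    print(f"{group}: recall {recall:.0%}, miss rate {1 - recall:.0%}")
```

with these fake numbers the aggregate recall over all eight records would be 50%, which hides the gap (75% vs 25%) entirely; that's exactly why breaking the metric out by group matters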