Research has shown that some emotional AIs disproportionately attribute negative emotions to the faces of black people, which would have clear and worrying implications if deployed in areas such as recruitment, performance evaluations, medical diagnostics or policing.
...which would have clear and worrying implications if deployed in...
stop pretending this is a future problem that will happen when we use AI more. it has clear and worrying implications now
like seriously, in what universe is it reasonable to see that your automated systems are saying "hmm black people bad" and respond with "well i guess we better not use this model in this specific area"