Thinking about the divide between "reasonable, well-supported concerns about machine learning ethics" vs "existential risk nonsense totally untethered from reality"
(And how companies clearly prefer the latter)
It's almost like they want to say they "take ethics seriously" without having to actually behave ethically
When asked by Rolling Stone if he stands by his position, Hinton says: “I believe that the possibility that digital intelligence will become much smarter than humans and will replace us as the apex intelligence is a more serious threat to humanity than bias and discrimination, even though bias and discrimination are happening now and need to be confronted urgently.”
From this article