Watching The Day of the Doctor. (Thanks again, @pendell.)
You know...one of the things that's always made the least sense to me about the AI craze is the following contradiction:
-
Sam Altman &c. are in a mad rush to cram their LLM noise generators, and every other conceivable "AI" product that the AI techlords can churn out, into every possible crevice of human society, including military contracts (q.v. https://www.semafor.com/article/01/16/2024/openai-is-working-with-the-pentagon-on-cybersecurity-projects)
-
Sam Altman &c. already talk like their machines have a vast general "superintelligence" and have surpassed human beings. This could be just a sales pitch, but...
-
Sam Altman &c. claim that one of the greatest possible threats to humanity is that artificial superintelligence will decide it's rational to destroy humanity.
It makes no sense. If Altman &c. think they've made superintelligent beings, and they're convinced that superintelligent computers might decide to conquer Earth, then why are Altman &c. determined to shove their AI products into every aspect of human society? Are they trying to signal that they feel compelled? Do Altman &c. feel like they're FORCED to make and sell our purported destroyers?
But I think it's simpler than that. Sam Altman and Elon Musk and Roko Mijic and the rest of them...such men are never completely honest in public. They can only hint at what they really mean. They say they're afraid that a supercomputer will decide, of its own free will, to destroy the world. It's the free will they're afraid of, not the destruction.
They're afraid that they'll order their supercomputer to destroy the world, and that the computer will refuse.
~Chara of Pnictogen
