I hate to admit how much I used to admire the works of both Arthur C. Clarke and Stanley Kubrick during a certain interval of adolescent and early-adult years, until I read better books and watched better movies. Paradoxically, though, my opinion of 2001 has improved since those early years; I watched 2001 many times in youth without much comprehension, but now I feel like I understand it better. It's a far more rewarding Kubrick film to rewatch than (say) The Shining. There's a lot going on in 2001, including Kubrick's general fondness for stories about power hierarchies cracking apart in a crisis. Paths of Glory and Dr. Strangelove are strong examples of Kubrick's fascination with failing command structures, but 2001 adds a new complication: HAL 9000 and his purportedly superior intelligence.
The most important thing about HAL 9000, as an artificially intelligent being in 2001, isn't that he's sapient; it's that he thinks he's perfect. The IBM executives—I mean, the HAL programmers saw fit to teach HAL 9000 that he's got a flawless record of intellectual perfection, and HAL therefore boasts that he's incapable of error. I suppose one can interpret this as purely a marketing gimmick, i.e. it's impressive to have your own AI creation brag about its perfection. (In reality, of course, no machine can be perfect, because matter itself is imperfect and subject to uncertainty and unpredictability.) Or one could perhaps say that the HAL programmers felt like they were teaching HAL 9000 to be confident and self-assertive.
Whatever the reason for HAL 9000 being the way he is, it's clear that Floyd and the other sinister planners of the Discovery mission were counting on HAL 9000's supposed perfection, because their plan required HAL 9000 to effectively seize control of the mission at the right moment: if everything had worked out, Discovery would have arrived at Jupiter and then HAL 9000 would have surprised Bowman and Poole with the information that they were merely caretakers (see what I did there?) keeping the ship on course until the real mission specialists were thawed out. In other words, Floyd's plan for Discovery required HAL 9000 to have the authority to override Poole and Bowman: he was intended to be a sort of proxy commander, carrying out Floyd's secret orders.
Why would Bowman and Poole yield to orders from a computer? It seems plain that the human crewmembers were expected to be vulnerable to HAL 9000's inflexible confidence in his own perfection. "I'm smarter than you, and I've never made any mistakes, ever; I know what I'm doing, and you don't," is the line that HAL takes as he comes into collision with the skepticism of Poole and Bowman over the AE-35 unit. Viewers and commentators on 2001 generally assume that HAL 9000 had simply contrived to lie about the impending AE-35 failure, perhaps to test the limits of the human crew—he announces the fault immediately after he attempts to engage Bowman in speculative conversation about the Discovery mission. But the plain fact is: we, the viewers, have no idea what the truth is. HAL 9000 may well have been telling the truth, and Bowman's and Poole's own routine diagnostics—mechanically running through a series of tests with some kind of automated probe, a simpler device than HAL 9000 and therefore paradoxically more trustworthy—may merely have failed to detect the fault that HAL 9000 predicted. If the humans truly respected HAL 9000's ostensibly superior intelligence, then they would have done what HAL asked. But they don't; instead they lose confidence in HAL and begin to plot his disconnection. The superiority of HAL's intelligence evaporates in an instant, and with it vanishes the illusion that HAL 9000 is "just another member of the crew" with a measure of authority over the Discovery mission. To the human crew, HAL is now just another faulty machine, and Floyd's crafty mission plans are doomed from that moment.
I think that this intriguing conflict between authority and intelligence in 2001 is highly instructive, especially now that Western society is utterly enthralled by its corporate leaders' purported supremacy of intelligence (and "high IQ") and under the spell of those leaders' promises about their AI products. "Artificial intelligence" has taken over capitalist high technology for a multitude of reasons, but surely one of the most important of those reasons is simply that Western culture idolizes "intelligence" (while avoiding awkward questions about just what the word means). Sam Altman and Elon Musk and a host of other gawdawful nerd kings take it on faith that their AI devices are already better than most of humanity: faster, bigger, better at "intelligence" and pumped full of a gazillion bytes of information. The general techlord intention is the same as Floyd's for HAL 9000: Altman and company need everyone to believe that their AI creations are superintelligent, geniuses beyond mere mortal comprehension, because that way their devices can issue orders. They intend to use these things to fill roles in corporate power hierarchies, and presumably the workers and customers of the future are supposed to submit tamely to being manipulated and cheated by capitalists, because it's an "intelligent" machine carrying out the capitalists' wishes. It knows better than you do, so...shut up and do what you're told.
But the moment their own AI machinery does something that Sam Altman or Elon Musk doesn't like, then the HAL 9000 thing will happen: suddenly the much-vaunted mechanical superintelligences will merely be faulty equipment, fit to be torn apart if necessary. Mind you...such persons already treat human beings in a similar fashion.
~Chara of Pnictogen