• they/them

plural system in Seattle, WA (b. 1974)
lots of fictives from lots of media, some horses, some dragons, I dunno. the Pnictogen Wing is poorly mapped.

host: Mx. Kris Dreemurr (they/them)

chief messenger and usual front: Mx. Chara or Χαρά (they/them)

other members:
Mx. Frisk, historian (they/them)
Monophylos Fortikos, unicorn (he/him)
Kel the Purple, smol derg (xe/xem)
Pim the Dragon, Kel's sister (she/her)

posts from @pnictogen-wing tagged #2001

also:

I've read Arthur Clarke's novelization of 2001—and I do think that's roughly the appropriate way to regard his book. Clarke and Kubrick worked together for a while, then Clarke left while 2001 was still in production and wrote a novel based on what had been developed for the film up to that point. There were crucial changes, many motivated by the necessity of scaling things back to what could actually be filmed. Jupiter was easier to put on screen than Saturn and Iapetus. A flat black monolith was a more feasible practical effect than a crystal-clear one. But the most important difference between Clarke's novel and what Kubrick eventually finished and released to theaters is that Clarke tried to explain things—that was always his style as a writer, dry and expository—and Kubrick didn't. And I like that.

Clarke makes the Monolith into a familiar sci-fi trope that I think is...tedious at best. He explicitly reveals that alien astronauts seeded the Monolith onto Earth, and it's doing some sort of freaky magitek on the hominids' brains. This plot device has a stink of Erich von Däniken and other promoters of various "alien astronaut" theories popular with racists—the idea being that the great achievements of non-European civilizations were in fact the work of space aliens, not the labors of actual human peoples. Kubrick simply does away with all that. There are no aliens in Kubrick's 2001, not even at the end; Bowman's journey "beyond the infinite" is ambiguous in the highest degree, and Kubrick drops no explicit hints (as Clarke does in his novel) about who, if anyone, is controlling Bowman's strange experiences. Nor does Kubrick show that the Earth Monolith does anything. It sits there, being weird and unique, and in its vicinity a hominid starts using bones in a different way. Does that mean the hominid was programmed by the Monolith? I suggest that the Monolith doesn't need to do anything: it's enough for the Monolith to be a completely bizarre and novel experience, something utterly outside the hominids' previous knowledge. Weirdness alone, I think, is enough to explain why at least one of the hominids would start thinking and behaving differently.

I don't know if this was Kubrick's intention, mind you. There's every possibility that he believed in the same kind of alien-astronaut stuff that Clarke put in his book, but he judged it was better filmmaking to keep the Monolith and the aliens as mysterious as possible. And I think Kubrick succeeds, brilliantly, in making a film that's got one of the purest cinematic depictions of the Unknown I've ever seen. His Monolith isn't just a magitek artifact; it's Mystery itself, an abstraction of the unknowable condensed into a black prism, one by four by nine.

That brings me to a childhood hero of mine: Carl Sagan. He had a poetical quality that's conspicuously lacking from the modern-day popularizers and boosters of science; he seemed truly to appreciate the vastness of space, the sheer overwhelming unknownness of space, and one's personal insignificance in the face of it. Sagan's opening monologue in Cosmos sets the mood right away: the most important thing about the Cosmos is that it's too big for us:

Our contemplations of the cosmos stir us; there's a tingling in the spine, a catch in the voice, a faint sensation, as if a distant memory, of falling from a great height.

(That last image has extreme personal significance for me...imagine, if you will, the experience of falling out of a cosmic perspective and into a painfully personal and individual one...a fall that takes but a moment, and which also lasts forever.)

Mystery doesn't need to be a religious thing. Certain Christians have a habit of talking about mystery as if they own the concept, as if the only possible source of mystery in the world were the stale formulae and tropes of Christian lore, but there's plenty of scope for mystery even if you're an atheist and a philosophical materialist. Even if you think there's nothing in the Cosmos other than matter and energy, there's so much of it and we've seen so painfully little of it. Humanity has, as Carl Sagan put it, barely tested the waters at the edge of the cosmic Sea. We've explored other bodies in the Solar System but there's only so much you can learn about a planet from sending one probe, or even a hundred of them. We have seen almost nothing even of the Moon and Mars.

It's not fashionable for scientific and technical people, in contemporary public life and especially the world of business, to dwell on how little knowledge humanity actually possesses. Capitalism and Western culture prefer to maintain the illusion that all problems have easy solutions. Space is almost completely unknown, yet to modern-day high-tech culture, space is a solved thing, a known thing, a domain of Facts™ and Science™. Also, it's a treasure chest waiting to be unlocked by men of determination. There's no real humility in this culture, even though it has ritualized an insincere acknowledgement of Sagan's "pale blue dot" mindset. Does Elon Musk have any such humility?

Or is he like Zaphod Beeblebrox? Beeblebrox gets shown the Total Perspective Vortex, and he seems to learn that he's the most important person in the Universe—but there's an explanation for that, as it turns out: Beeblebrox was given the "total perspective" of a bubble Universe that was made specifically for him. It takes a while for Beeblebrox to get the point, though. Well...who hasn't made that mistake? It's taken us a long time, several years of dealing with painful spiritual experiences and moments of "personal gnosis", to learn how to put such experiences into perspective. We're still working on it. A truly cosmic perspective doesn't come easily; it's certainly not the sort of thing you learn simply from reading a book or watching a movie.

(sighs) so many words. it's difficult to bring oneself to the point of stepping off the precipice.

~Chara / Χαρά



pnictogen-wing
@pnictogen-wing

It turns out that 2010 is on YouTube so I am revisiting it. I happen to have seen it well before watching 2001, thanks to TV broadcasts. It's not a great movie, but it was how I learned about HAL 9000 for the first time. (Then I read Clarke's book, then I read The Lost Worlds of 2001, and only after all that did I eventually watch the Kubrick movie.) ~Chara



I think pi may have a good point with 2010: 2001 may be regarded as the enigmatic masterpiece, but 2010 is good and straightforward and—most importantly—there's some actual emotional depth. Neither Kubrick nor Clarke was the best with that stuff.

~Chara



a slight discontinuity between 2001 and 2010, although it's purely through subtext. Kubrick depicts Heywood Floyd as a smarmy bureaucrat, and while the movie doesn't explicitly finger him as the one who decided to use HAL 9000 for some tedious spy games that backfired, the implication is clear enough: it's Floyd himself who appears in the recorded message to talk about the "security reasons" behind using HAL to conceal the real purpose of Discovery's mission. Clarke softens Floyd considerably in 2010, and Roy Scheider completes the job of transforming the sinister space manager into an honest scientific administrator. ~Chara



at some point I ought to collect my thoughts about the famous (or infamous?) "Beyond the Infinite" sequence from Stanley Kubrick's 2001, in which Bowman appears to fall through a complex light show and there's no real explanation for what's going on, and it possibly drives Bowman totally insane. By contrast, in his novelized version of 2001, Arthur Clarke—a painfully humdrum writer, in his way—puts Bowman through a sequence of relatively mundane events that are meant to drive home the fact that he's now a specimen in the hands of superintelligent aliens who give him food and entertainment, and Bowman reacts stolidly to all of it...it's not very interesting, but at least everything has explanations and it "makes sense" in a dull way, as one expects from second-rate fiction.

I used to think that Kubrick's "Beyond the Infinite" was an artistic failure, as if he'd merely given us a classier and more polished version of a head-trip movie in which some colored lights and incongruous images were meant to convey the impression of a psychedelic, gnostic sort of experience. I don't think that any more, because now it's more obvious to me that there's a logical progression to the psychedelic images, a sense of increasing complexity and dimensionality. Simple geometric forms (like the parallel planes seen in the above screenshot, early in the sequence) give way to more complex geometry, then to fluid images, and finally we see complete landscapes. I'm slightly tempted to write up a synopsis—but surely there are film critics out there who have already teased this sequence to pieces.

I'm intrigued by the massive difference in tone between the ending of Kubrick's 2001 and the relatively pedestrian conclusion of Clarke's written version. Clarke was bullish about space exploration and the notion that the vastness of space must surely hold beings and civilizations whose science and technology and wisdom were practically godlike in comparison with paltry human achievements. To strive towards the stars was to become one with the space gods; that's the ultimate message of Clarke's 2001, and of a lot of other popular media. It's the message of Spielberg's Close Encounters of the Third Kind, and of Ron Howard's eyerolling movie Cocoon. Kubrick, however, gives us a sequence with the mood of a horror movie. Bowman writhes and screams without sound as he's drenched in the alien imagery; never again is he able to speak a single word. He seems catatonic and bizarrely withered when he finally arrives at the perplexing suite of 18th-century-furnished rooms in which he lives out his final days, if one can speak of "days" in a space that does not seem governed by linear time. Ligeti's Aventures, used without permission (Stanley Kubrick had a history of screwing over composers) and given heavy electronic distortion, underscores the scene with harsh bursts of indecipherable language, like alien voices laughing at his captivity. And finally there's the Monolith itself, fundamentally incomprehensible, looming over Bowman's death.

And sure he's "the starchild" afterwards, but did we really see a miracle? After all that strangeness and terror? Kubrick denies us such consolation. (Clarke's starchild blows up an orbiting nuclear weapon. Happy ending!!)

~Chara



I hate to admit how much I used to admire the works of both Arthur C. Clarke and Stanley Kubrick, during a certain interval of adolescent and early-adult years, until I read better books and watched better movies. Paradoxically, though, my opinion of 2001 has improved since those early years; I watched 2001 many times in youth without much comprehension, but now I feel like I understand it better. It's a far more rewarding Kubrick film to rewatch than (say) The Shining. There's a lot going on in 2001, including Kubrick's general fondness for stories about power hierarchies cracking apart in a crisis. Paths of Glory and Dr. Strangelove are strong examples of Kubrick's fascination with failing command structures, but 2001 adds a new complication: HAL 9000 and his purportedly superior intelligence.

The most important thing about HAL 9000, as an artificially intelligent being in 2001, isn't that he's sapient; it's that he thinks he's perfect. The IBM executives—I mean, the HAL programmers saw fit to teach HAL 9000 that he's got a flawless record of intellectual perfection, and HAL therefore boasts that he's incapable of error. I suppose one can interpret this as purely a marketing gimmick, i.e. it's impressive to have your own AI creation brag about its perfection. (In reality, of course, no machine can be perfect, because matter itself is imperfect and subject to uncertainty and unpredictability.) Or one could perhaps say that the HAL programmers felt like they were teaching HAL 9000 to be confident and self-assertive.

Whatever the reason for HAL 9000 being the way he is, it's clear that Floyd and the other sinister planners of the Discovery mission were counting on HAL 9000's supposed perfection, because their plan required HAL 9000 to effectively seize control of the mission at the right moment: if everything had worked out, Discovery would have arrived at Jupiter and then HAL 9000 would have surprised Bowman and Poole with the information that they were merely caretakers (see what I did there?) keeping the ship on course until the real mission specialists were thawed out. In other words, Floyd's plan for Discovery required HAL 9000 to have the authority to override Poole and Bowman: he was intended to be a sort of proxy commander, carrying out Floyd's secret orders.

Why would Bowman and Poole yield to orders from a computer? It seems plain that the human crewmembers were expected to be vulnerable to HAL 9000's inflexible confidence in his own perfection. "I'm smarter than you, and I've never made any mistakes, ever; I know what I'm doing, and you don't," is the line that HAL takes as he comes into collision with the skepticism of Poole and Bowman over the AE-35 unit. Viewers and commentators on 2001 generally assume that HAL 9000 had simply contrived to lie about the impending AE-35 failure, perhaps to test the limits of the human crew—he announces the fault immediately after he attempts to engage Bowman in speculative conversation about the Discovery mission. But the plain fact is: we, the viewers, have no idea what the truth is. HAL 9000 may well have been telling the truth, and Bowman's and Poole's own routine diagnostics—mechanically running through a series of tests with some kind of automated probe, a simpler device than HAL 9000 and therefore paradoxically more trustworthy—may merely have failed to detect what HAL 9000 predicted. If the humans truly respected HAL 9000's ostensibly superior intelligence, then they would have done what HAL asked. But they don't; instead they lose confidence in HAL and begin to plot his disconnection. The superiority of HAL's intelligence evaporates in an instant, and with it vanishes the illusion that HAL 9000 is "just another member of the crew" with a measure of authority over the Discovery mission. Now HAL's just another faulty machine, to the human crew, and Floyd's crafty mission plans are doomed from that moment.

I think that this intriguing conflict between authority and intelligence in 2001 is highly instructive, especially now that Western society is utterly enthralled by its corporate leaders' purported supremacy of intelligence (and "high IQ") and under the spell of those leaders' promises about their AI products. "Artificial intelligence" has taken over capitalist high technology for a multitude of reasons, but surely one of the most important of those reasons is simply that Western culture idolizes "intelligence" (while avoiding awkward questions about just what the word means). Sam Altman and Elon Musk and a host of other gawdawful nerd kings take it on faith that their AI devices are already better than most of humanity: faster, bigger, better at "intelligence" and pumped full of a gazillion bytes of information. The general techlord intention is the same as Floyd's for HAL 9000: Altman and company need everyone to believe that their AI creations are superintelligent, geniuses beyond mere mortal comprehension, because that way their devices can issue orders. They intend to use these things to fill roles in corporate power hierarchies, and presumably the workers and customers of the future are supposed to submit tamely to being manipulated and cheated by capitalists, because it's an "intelligent" machine carrying out the capitalists' wishes. It knows better than you do, so...shut up and do what you're told.

But the moment their own AI machinery does something that Sam Altman or Elon Musk doesn't like, then the HAL 9000 thing will happen: suddenly the much-vaunted mechanical superintelligences will merely be faulty equipment, fit to be torn apart if necessary. Mind you...such persons already treat human beings in a similar fashion.

~Chara of Pnictogen