He genuinely believes that "AI" (OpenAI and co.) is actually intelligent, as in sentient, because you can "have a conversation with it and it responds." And it's just... Ugh.
It's not like you need a deep understanding of how this stuff works to see that there's nothing sentient or truly intelligent about it. "AI" is the biggest misnomer of the decade; it's not really AI at all, at least not in the sense we've used the term for many, many years. Though that definition seems to be rapidly changing to suit these fancy new apps.
A more accurate term would be PA, for Predictive Algorithms. All they are is a bunch of data scooped up from across the internet (often without consent), pored over by human labelers who attach categories and metadata to everything in the pile, then fed into a training process that catalogs the statistical patterns linking those labels to the content. So when you tell it to draw you a "dog," it isn't consulting some inner idea of a dog; it's generating pixels that statistically resemble everything in its training data tagged "dog," blended into something approximating what you asked for. It has no conception of the anatomy of that dog, it isn't aware of what it's generating, it's just a massive, computer-scale implementation of that quip about infinite monkeys with infinite typewriters eventually writing Hamlet.
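Here's the whole principle in miniature, as a toy Python sketch. To be clear, this is a deliberately dumb, hypothetical stand-in: real generators are giant neural networks trained on billions of examples, not little frequency tables. But the core move is the same species of trick: count what co-occurs with a tag, then sample from those counts on demand.

```python
import random
from collections import Counter

# Hypothetical toy "training data": (tag, features) pairs, standing in for
# billions of scraped, human-labeled images.
labeled_data = [
    ("dog", "four legs fur tail bark snout fetch"),
    ("dog", "fur tail wet nose four legs bark"),
    ("cat", "four legs fur whiskers meow claws"),
    ("cat", "whiskers fur tail purr claws"),
]

# "Training": count which features co-occur with which tag. That's it.
model = {}
for tag, features in labeled_data:
    model.setdefault(tag, Counter()).update(features.split())

def generate(tag, n=5):
    # "Generation": sample features in proportion to how often they showed
    # up under this tag. No anatomy, no awareness, just frequency.
    counts = model[tag]
    feats = list(counts)
    weights = [counts[f] for f in feats]
    return random.choices(feats, weights=weights, k=n)

print(generate("dog"))  # e.g. ['fur', 'tail', 'bark', 'four', 'legs']
```

Run it and you get a plausible-looking pile of dog-features. At no point does anything in there know what a dog is.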
And this is obvious to anyone with even a basic understanding of how computers work. But because we call it "AI," and it gets called that by both its boosters and its critics, and even your grandma has heard about the "AI" on Fox, we now have a non-zero number of people who truly believe we've created artificial sentient consciousness, and who will staunchly insist on it even after the systems are explained to them.
I don't know if OpenAI coined this particular usage of the term, but they're complicit either way in calling ChatGPT's off-the-rails moments "hallucinations." They're always talking about "reducing hallucinations" and shit, like ChatGPT is a real dude inside the computer who just needs to get on some antipsychotics. Fuck off.
ChatGPT goes off the rails because it's a computer program with no understanding of what human language actually is, or how to write creatively in any fashion. It takes your input and, word by word, predicts whatever is statistically likely to come next, stitched together from patterns in its absurdly large training dataset. It goes off the rails because it has no conception of the rails. It does not understand context, it does not know what it is writing. It gives a very good illusion of that, at times, but it's all a sham, and all these "hallucinations" are is a peek behind the curtain, except behind that curtain isn't a person pretending to be Oz, but a fuckin CPU that only "understands" 0s and 1s and logic gates and transistors.
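You can watch this exact failure mode in a few lines of Python. Below is a toy bigram model, a hypothetical and absurdly scaled-down sketch (real LLMs use transformers with billions of parameters, but the generation loop is the same idea): given the last word, roll weighted dice for the next one.

```python
import random
from collections import defaultdict

# Hypothetical toy corpus, standing in for a scrape of half the internet.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# "Training": record which word follows which. A bigram table is the
# flat-pack furniture version of an LLM's learned statistics.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=12):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # fell off the edge of its statistics
        # The entire "thought process": a weighted dice roll over whatever
        # tended to come next in the training data.
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the"))
# e.g. "the dog chased the cat sat on the mat . the dog sat"
```

It produces fluent-looking fragments with zero comprehension, and the moment the chain wanders somewhere its statistics don't cover, it spits out confident nonsense. That's your "hallucination": not a mind misfiring, just dice landing badly.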
Stop calling it hallucinations, you pricks.

