A lot of the discussion around the huge automated statistical models that are conventionally called "AI" involves the idea of hypothetical future improvements that will make them concretely useful for things they're currently not very good at. While they will undoubtedly get better at the sorts of things they can already do, and may even at some point be able to render a hand that doesn't make me want to vomit, there's one important thing that gets glossed over in a lot of conversation, because programmers think it's so obvious it doesn't even warrant mentioning and non-programmers may not be aware of it at all.
AI cannot be programmed. AI is not like science-fictional depictions where you can just build three laws into it and have it unerringly follow them, and it's not like conventional software where it rigorously follows a minutely precise set of instructions. You cannot simply make the Bing chatbot more accurate by plugging in a big database of verified facts and rules of deduction, because it fundamentally has no idea what truth is. It is a statistical model of what people are likely to say on the internet, and it's so unimaginably huge that even a team of humans couldn't possibly manually correct it except in the broadest strokes.
You can't even tell it "this is what a statement of fact looks like," because to tell it anything at all, you need an approximately-internet-sized corpus of training data with annotations that accurately indicate that information, and that doesn't exist. The only internet-sized corpus is the one they've already used, and it certainly doesn't have sentence-level semantic metadata. So you're stuck: you can push the statistics as hard as you want, but they'll never really do what you want, because you can never tell them what you want in a language they'll understand.
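To make "statistical model of likely text" a bit more concrete, here's a toy sketch in Python. It is emphatically not how any real model is implemented — the tiny lookup table and the phrases and numbers in it are made up purely for illustration, and real systems have billions of learned parameters instead of a dict — but the shape of the job is the same: given some words, pick a plausible next word.

```python
import random

# A toy stand-in for a language model: "given these words, here's how
# likely each next word is." Everything in this table is invented for
# the sake of the example.
toy_model = {
    ("the", "doctor"): {"who": 0.5, "said": 0.3, "is": 0.2},
    ("doctor", "who"): {"is": 0.6, "aired": 0.4},
}

def next_word(context):
    """Pick the next word by sampling from the learned probabilities.

    Note what's absent: no lookup against a database of facts, no notion
    of 'true' or 'false' -- just 'which word tends to come next.'
    """
    probs = toy_model[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word(("the", "doctor")))  # e.g. "who" -- plausible, not verified
```

Scale that up enormously and you get fluent paragraphs, but nowhere in the loop is there a step that checks anything against reality.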
I've been messing with ChatGPT a little bit this past week, trying to get it to tell me about a fairly inconsequential one-off Doctor Who character from the last episode I watched. It's a ridiculous thing to expect it to know about, but it kills me how it consistently gets the information wrong and seems to be incapable of admitting it just doesn't know. Even when I tell it the answer after fifteen or so incorrect attempts, it doesn't stick. All it can really do is output responses that are in the general shape of what a user would theoretically want to see.
It clearly doesn't know that it doesn't know. I have to assume that it doesn't even have any concept of what it means for a piece of information to be known or unknown to itself. Input: question about a character. Output: description of a character. If the information doesn't immediately spring into place, just fill the cookie cutter with whatever dough is at hand and squish it outward around the edges until it's filled the shape. And now you have a cookie made of 15% cookie dough, 85% pizza dough. Yum yum.
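Again, a made-up toy rather than the actual internals, but it's roughly why "I don't know" never shows up: producing an answer is just picking among possible continuations by their probabilities, and that procedure spits out something confident-sounding no matter how weak the underlying signal is. The candidate answers and scores below are invented for illustration.

```python
import math
import random

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up candidate answers about an obscure character, with made-up scores.
# When the model has no real signal, the scores are all roughly equal --
# but the sampling step still has to pick *something*.
candidates = ["she was a scientist", "he was a soldier", "they were a villager"]
scores = [0.11, 0.10, 0.09]  # nearly uniform: the model "doesn't know"

answer = random.choices(candidates, weights=softmax(scores))[0]
print(answer)  # a confident-sounding description either way;
               # there's no branch anywhere that outputs "I don't know"
```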
Admittedly, I don't know much about how this thing works, but that's sure how it feels, anyway. It's very good at throwing together a grammatically comprehensible couple of paragraphs written in an extremely flat but vaguely agreeable voice. It would be incredibly useful, if that were something anybody ever wanted!
