my two cents on the usage of the term "AI" is this: the term has been fully appropriated by techbros for the lifeless capitalistic pursuit of Product, and you can't undo that. don't bother trying to tell anyone that an LLM isn't "real" AI, or that it shouldn't be called that, or that the term should be used for things that are actually interesting or useful, or whatever.
instead, cede the term AI entirely. use it exclusively for GAN/diffusion/LLM garbage that's designed to be Marketable. "artificial intelligence" is a pretty fitting name for that stuff, really. it doesn't have actual intelligence, it isn't smart, it just looks that way from the outside. it mimics intelligence through fixed pattern mapping.
if you want to talk about AI-related topics that are actually interesting, use a different term; there's plenty of them out there. stuff enemies do in video games? call it "enemy logic" or "behavior" or "pathfinding" or "decision trees" or any other term more descriptive than "AI" (see the sketch below). dwarf fortress characters and their interactions? procedural storytelling. processing human speech into something usable by computers? natural language processing. systems that actually try to be meaningfully intelligent in a way that's transparent and understandable, like CADUCEUS or GARVAN-ES1 or Dendral? expert systems. there's also the term inference engine, which sounds way cooler than "AI" to me anyway. other terms for systems that model reasoning to solve generalized problems include "knowledge engine", "knowledge representation and reasoning" (KRR or KR&R), "automated reasoning", "open domain question answering" (which is what IBM Watson actually was, rather than an expert system), and "information retrieval". also worth mentioning is the term "Artificial General Intelligence", which refers to the (so far entirely hypothetical) simulation of a human-level (or above) intelligence. think "evil AI takes over the world", that kind of thing.
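just to make that concrete, here's what "enemy logic" as a decision tree can look like. this is a toy sketch; every name in it (EnemyState, decide_action, the thresholds) is made up for illustration and not taken from any real game.

```python
from dataclasses import dataclass

@dataclass
class EnemyState:
    health: float              # 0.0 (dead) to 1.0 (full)
    distance_to_player: float  # in arbitrary game units
    can_see_player: bool

def decide_action(state: EnemyState) -> str:
    """hand-written branches: readable, debuggable, and far more
    descriptive to call a 'decision tree' than 'AI'."""
    if state.health < 0.25:
        return "flee"
    if not state.can_see_player:
        return "patrol"
    if state.distance_to_player > 10.0:
        return "chase"
    return "attack"

print(decide_action(EnemyState(health=0.8, distance_to_player=3.0, can_see_player=True)))
# -> attack
```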
even for traditional trained-model-based neural networks, when they're used to solve interesting problems instead of stealing artists' work, you can call them machine learning instead. i also like that term because it focuses on the "learning" aspect rather than "intelligence", which is difficult to define. the real magic of GANs/LLMs/etc doesn't come from some expertly handcrafted algorithm; it comes from training the models and optimizing them to perform a task better. it's not "intelligent", but it is a kind of learning, and it's the training part that makes those systems powerful, not the basic structure (which has been around for decades).
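to illustrate what i mean by the structure being old and the training being the magic, here's a toy sketch: a single artificial neuron (math that's been around since the 1950s perceptron) learning the AND function by gradient descent. nothing here is how real LLMs are built, and the names and numbers are all made up, but the principle (nudge weights to reduce error on data) is the same one, just scaled up by many orders of magnitude.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# the data to learn: the AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # the entire "structure": two weights and a bias
lr = 0.5                    # learning rate, an arbitrary choice

# the training loop: this is where all the "magic" happens
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target       # gradient of cross-entropy loss wrt the pre-activation
        w1 -= lr * err * x1      # plain gradient descent updates
        w2 -= lr * err * x2
        b  -= lr * err

for (x1, x2), target in data:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + b), 3), "target:", target)
```

same neuron, different data, and it learns a different function. nobody hand-crafted the behavior; it was fit to examples. that's "learning" in a meaningful sense, and it's why "machine learning" is the honest name.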
I understand the sentiment, because the techbro grifting bullshit around LLMs has absolutely abused the term "AI" in common parlance. However, as an AI researcher I feel bound to say, at least, that if this became a widely adopted practice it would be deeply disheartening to see. Like, I really hope this does not come across as aggressive, because what you personally deem fit to call these systems certainly isn't harming me. This is just something I'm really passionate about.
Fundamentally, in the academic world the term "Artificial Intelligence" arose quite directly as a unifying term for multiple efforts to computationally model or approximate aspects of cognition considered distinguishing or fundamental components of what we call "human intelligence": pattern recognition, learning, logical reasoning, planning ahead, processing natural language, etc. Large Language Models belong in this family of research interests, especially and importantly because they do represent an approximation of certain (but certainly not all) functions of human intelligence.
And frankly, the ire needs to be directed primarily at the actual tech grifters who misrepresent what these academic words mean by conflating academic and colloquial uses of the term AI and others. Partly because every AI endeavor will always be at risk of corruption by these same forces, and partly because Large Language Models do represent a startlingly impressive advance in our ability to computationally process natural language (not to understand natural language, though they may support our efforts towards that as time goes on).
We in other AI fields are frustrated by the impact the LLM grift and misrepresentation has had on society, and on our field as well. But our frustration is with the misuse, not the existence, of LLMs. In fact, I do not work in Machine Learning directly (the field to which LLMs belong), but there are multiple ways LLMs may benefit my research, even though my research directly competes with them on (and hopefully will actually succeed at) the goal of modeling further parts of human cognition.
Again, I hope this does not read as aggressive. I understand if my values here do not match yours; I merely want to express another way of looking at this situation that is important to me personally. Which is: please do not deny the reality and potential of our advances, but instead decry the systems that allow and encourage the use of technology for scams, streamlining abuse, and the accumulation of capital.