Catgonbot

convincing counterfeit cat creature

  • it/she

~30, queer, probably other things too

Getting a Ph.D. Expect delays.

Profile pic by Nyanoraptor on twitter.


bytebat
@bytebat

my two cents on the usage of the term "AI" is this: the term has been fully appropriated by techbros for the lifeless capitalistic pursuit of Product, and you can't undo that. don't bother trying to tell anyone that an LLM isn't "real" AI, or that it shouldn't be called that, or that the term should be used for things that are actually interesting or useful, or whatever.

instead, shun the term AI. use it exclusively for GAN/diffusion/LLM garbage that's designed to be Marketable. "artificial intelligence" is a pretty fitting name really. it doesn't have actual intelligence, it isn't smart, it just looks that way from the outside. it mimics intelligence through fixed pattern mapping.

if you want to talk about AI-related topics that are actually interesting, use a different term; there are plenty of them out there. stuff enemies do in video games? call it "enemy logic" or "behavior" or "pathfinding" or "decision trees" or any other term more descriptive than "AI". dwarf fortress characters and their interactions? procedural storytelling. processing human speech into something usable by computers? natural language processing. systems that actually try to be meaningfully intelligent in a way that's transparent and understandable, like CADUCEUS or GARVAN-ES1 or Dendral or IBM Watson? expert systems.

there's also the term inference engine, which sounds way cooler than "AI" to me anyway. other terms that can describe systems that model reasoning to solve generalized problems include "knowledge engine", "knowledge representation and reasoning" (KRR or KR&R), "automated reasoning", "open domain question answering", and "information retrieval".

also worth mentioning is the term "Artificial General Intelligence", which refers to the (currently impossible) simulation of a human-level (or above) intelligence. think "evil AI takes over the world", that kind of thing.
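
(since i mentioned inference engines: the core of a forward-chaining one fits in a few lines of python. to be clear, the rules below are completely made up for illustration, this isn't from Dendral or any real system:)

```python
# toy forward-chaining inference engine: fire if-then rules against
# known facts until nothing new can be derived. real expert systems
# are vastly bigger, but this is the shape of the core loop.

# each rule is (set of premises, conclusion) -- all invented examples
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "recommend_chest_xray"),
    ({"has_rash"}, "possible_allergy"),
]

def infer(facts):
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                fired = True
                # the transparency part: every conclusion comes with a reason
                print(f"{' & '.join(sorted(premises))} => {conclusion}")
    return facts

infer({"has_fever", "has_cough", "short_of_breath"})
```

(run it and it prints its whole chain of reasoning, which is exactly the property the "AI" stuff i'm complaining about doesn't have)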

even for traditional trained-model-based neural networks, when they're used for solving interesting problems instead of stealing artists' work, you can call it machine learning instead. i also like that term because it focuses on the "learning" aspect, rather than "intelligence", which is difficult to define. the real magic of GANs/LLMs/etc doesn't come from some expertly handcrafted algorithms, it comes from training the models and optimizing them to perform a task better. it's not "intelligent", but it is a kind of learning, and it's the training part that makes those systems powerful, not the basic structure (which has been around for decades).
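
(to make that concrete, here's roughly the smallest possible version: a one-neuron "network" whose structure is one line of math, plus the training loop that does all the real work. the data and numbers are made up:)

```python
# the "network" is decades-old and trivial: pred = w1*x1 + w2*x2 + b.
# everything interesting happens in the training loop below.
data = [  # (inputs, target) pairs for a made-up AND-like task
    ((0.0, 0.0), 0.0), ((0.0, 1.0), 0.0),
    ((1.0, 0.0), 0.0), ((1.0, 1.0), 1.0),
]
w1 = w2 = b = 0.0
lr = 0.1  # learning rate

for _ in range(1000):
    for (x1, x2), target in data:
        pred = w1 * x1 + w2 * x2 + b
        err = pred - target
        # gradient descent on squared error: this *is* the "learning"
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

print(w1, w2, b)  # weights found by training, not handwritten by anyone
```

(scale that same idea up by billions of parameters and you get the systems everyone's arguing about)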


Catgonbot
@Catgonbot

I understand the sentiment, because the techbro grifting bullshit around LLMs has absolutely abused the term "AI" in common parlance. However, as a researcher in AI I feel bound to say, at least, that if this were a widely adopted practice it would be deeply disheartening to see. Like, I really hope this does not come across as aggressive; certainly, what you personally deem fit to call these isn't harming me. This is just something I'm really passionate about.

Fundamentally, in the academic world the term "Artificial Intelligence" arose quite directly as a unifying term for multiple efforts to computationally model or approximate aspects of cognition considered to be distinguishing or fundamental components of what we call "human intelligence": pattern recognition, learning, logical reasoning, planning ahead, processing natural language, etc. Large Language Models belong to this family of research interests, especially and importantly because they do represent an approximation of certain (but certainly not all) functions of human intelligence.

And frankly, the ire needs to be directed primarily at the actual tech grifters who misrepresent what these academic terms mean by conflating academic and colloquial uses of "AI" and other words. Partly because every AI endeavor will always be at risk of corruption by these same forces, and partly because Large Language Models do represent a startlingly impressive advance in our ability to computationally process natural language (not to understand natural language, though it may support our efforts toward that as time goes on).

Those of us in other AI fields are frustrated by the impact the LLM grift and misrepresentation have had on society especially, and on our field as well. But our frustration is with the misuse, not with the existence, of LLMs. In fact, I do not work in Machine Learning directly (the field to which LLMs belong), but there are multiple ways LLMs may benefit my research, even though my research also competes directly on (and hopefully will actually succeed at) the goal of modeling further parts of human cognition.

Again, I hope this does not read as aggressive. I understand if my set of values here does not match yours; I merely want to express another way of looking at this situation, one that is important to me personally. Which is: please do not deny the reality and potential of our advances, but instead decry the systems which allow and encourage the use of technology for scams, the streamlining of abuse, and the accumulation of capital.


in reply to @Catgonbot's post:

thank you for this excessively polite reply. i personally do think there's value in LLMs and other neural-network-based AI/ML systems, but they're vastly less interesting to me than things like expert systems because they're opaque. you provide input and get an output, but there's nothing to be learned from looking at the model weights, it's just noise. it's certainly a useful and very powerful tool, especially when applied to narrow, well-defined problem spaces, but i feel like the future of "artificial intelligence", at least in a general sense, will come with the return of knowledge- and reasoning-based systems that are tractable and can show you their work when they come to a solution for something. this is especially relevant with regard to the systemic biases that can easily get incorporated into NN models.
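
(to illustrate what i mean by "show your work": even the most naive hand-built decision tree can justify every answer it gives, which a pile of weights can't. the tree contents here are invented, obviously:)

```python
# a decision "tree" as nested tuples:
# (question, subtree_if_yes, subtree_if_no), or a string leaf.
TREE = ("is it a mammal?",
        ("does it purr?", "probably a cat", "some other mammal"),
        ("does it have feathers?", "bird", "no idea, honestly"))

def decide(tree, answers, trace=()):
    if isinstance(tree, str):      # leaf: final answer reached
        return tree, trace
    question, yes_branch, no_branch = tree
    branch = yes_branch if answers[question] else no_branch
    return decide(branch, answers, trace + (question,))

answer, path = decide(TREE, {"is it a mammal?": True, "does it purr?": True})
print(answer)  # "probably a cat"
print(path)    # every question it asked along the way -- the work, shown
```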

I definitely share that preference for the transparency and provability of certain approaches, my own field being one such case. (Though I'm too excited not to share that, while it is difficult to learn from the weights that neural networks arrive at, it's not impossible! I long for the subfield of research into "explainable" neural networks to see more interest, but I'm glad that it exists at all. Someday we may crack the black box! And I'm so excited to see what we'll find.)
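
(If you're curious what the most basic version of that looks like: one classic starting point is simply asking how sensitive the model's output is to each input. Here's a toy sensitivity probe in Python, with a small random network standing in for a trained one; none of this is from any real explainability library:)

```python
import random

# a tiny fixed "network" with random weights, standing in for a
# trained model whose weights look like meaningless noise to us
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
W2 = [random.uniform(-1, 1) for _ in range(3)]

def net(x):
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]  # ReLU layer
    return sum(w * h for w, h in zip(W2, hidden))

# the simplest "explanation": how much does each input nudge the output?
# (finite differences here; a crude cousin of gradient saliency maps)
x = [0.5, -0.2, 0.9, 0.1]
eps = 1e-5
sensitivity = [(net(x[:i] + [x[i] + eps] + x[i+1:]) - net(x)) / eps
               for i in range(len(x))]
print(sensitivity)  # which inputs this opaque model actually cares about
```

(Real explainability research goes far beyond this, of course, but even a crude probe like that recovers something human-readable from a black box.)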
I will also say that, on second thought, I do very much appreciate seeing more terms used than just "AI", which collapses an extremely diverse field of approaches into its most visible subfield (which has been machine learning ever since like 2011 or so, then GANs, reinforcement learning, and now especially LLMs). There is so much outside of the "learning" approach! Automated planning, network modeling, knowledge representation systems, all sorts of stuff! So it's very nice to see other types of AI, and what's interesting about them, talked about as well. 💗