tjc
@tjc

One article that lots of people have read and shared is titled, "You Are Not a Parrot".

We think of a parrot as something that mindlessly repeats sounds without understanding. Perhaps they can learn some patterns, like Apollo the Parrot, who can identify colors and say their names. We know we're not parrots; we have minds. They probably do too, but very simple ones, right?

Once you get past the headline, the article goes much further than that, and I would rephrase one of the points it makes as: You are not a language processor. Language is one of the least interesting things people do. I say this as someone who gets paid to design and implement programming languages, and writes for fun. Programming languages are much, much simpler than natural languages; they have to be simple in order for it to be possible to write a sequence of characters that, after many processing steps, causes switches to open or close inside your laptop or your phone.
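To give a sense of just how simple "simple" means here, the following is a toy sketch of my own (a made-up mini-language, not anything I actually work on): a complete processor for a language of sums and products. A handful of rules covers every sentence it can ever express, which is nothing like English.

```python
# A toy processor for a made-up language: whole numbers, "+", and "*".
# A handful of rules covers every sentence this language can express.

import re

def tokenize(source):
    # "2 + 3 * 4" becomes ["2", "+", "3", "*", "4"].
    return re.findall(r"\d+|[+*]", source)

def parse_term(tokens):
    # term := number ("*" number)*
    value = int(tokens.pop(0))
    while tokens and tokens[0] == "*":
        tokens.pop(0)
        value *= int(tokens.pop(0))
    return value

def parse_expr(tokens):
    # expr := term ("+" term)*
    value = parse_term(tokens)
    while tokens and tokens[0] == "+":
        tokens.pop(0)
        value += parse_term(tokens)
    return value

print(parse_expr(tokenize("2 + 3 * 4")))  # 14: "*" binds tighter than "+"
```

That is the entire grammar. No natural language fits in twenty lines.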

Natural languages, the ones humans use for communicating with each other, do more than just open or close switches. Your brain, as best we understand it, is an analog device, not a digital one. One word someone says to you can cause a cascade of chemical reactions and (analog) electrical signals that interact with each other. If your brain could laugh at your 64-core desktop computer, it would. The number of things that happen in your body at the same time is dizzying, and one of those things is that your brain protects you from perceiving more than one of them at any given moment. Not only that, everything that happens in your body can modify the same state: that's why the medication you take to stop your foot from hurting can also give you a stomachache. Computer programs that use multiple processors, on the other hand, are carefully written so that they share as little state as possible; otherwise it gets too complicated for the people writing the programs to think about.
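To make that contrast concrete, here is a minimal, hypothetical sketch of the kind of care I mean: even a single integer shared between threads has to be fenced off with a lock, which is exactly why programs that use multiple processors are written to share as little as they can.

```python
# A minimal, hypothetical sketch of shared state between threads: even one
# shared counter has to be guarded, or increments from different threads
# can silently overwrite each other.

import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(100_000):
        with lock:        # serialize access to the one piece of shared state
            counter += 1  # read-modify-write is not atomic on its own

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000, because every access went through the lock
```

Your body shares all of its state, all the time, and somehow stays coherent; this little program can barely share one number safely.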

It is an error to call a computer program a "mind" because it manipulates language in a convincing way. Minds are embodied. To quote Elizabeth Weil's article, the first one I linked to above:

How should we interpret the natural-sounding (i.e., humanlike) words that come out of LLMs? The models are built on statistics. They work by looking for patterns in huge troves of text and then using those patterns to guess what the next word in a string of words should be. They’re great at mimicry and bad at facts. Why? LLMs, like the octopus, have no access to real-world, embodied referents.
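To make the "guess the next word" idea concrete, here is a drastically simplified sketch of my own. The corpus is made up, and real LLMs use learned vector representations and billions of parameters rather than a table of counts, but the objective has the same shape: predict the next word from patterns in text.

```python
# A drastically simplified sketch of "find patterns in text, then guess the
# next word." The corpus is invented; the point is that nothing here has any
# access to what the words refer to.

from collections import Counter, defaultdict

corpus = "the deer ran and the deer slept and the rat slept".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def guess_next(word):
    # Pick the most common successor; no meaning or referent involved.
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(guess_next("the"))   # "deer": it followed "the" most often
print(guess_next("deer"))  # "ran": tied with "slept", first seen wins
```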

On most of the days when I'm doing work, I write language processors. I'm going to work on one after I finish this post. The systems inside my brain and body that process language (it's hard to even talk about them without using mechanistic terms like "system", "mechanism", or "device") are much more complicated than the language processors I write for my job. Still, linguists and cognitive scientists have models and theories for how those systems work, even if we don't understand the entire biological substrate that makes them go.

If you're hyperlexic, it's easy to forget this, but language isn't all of who we are; it's a very minor aspect of who we are. What would convince me that we have built an artificial mind is if we could build a robotic rat who other rats accept as one of their own. Rats communicate with each other, too, but the interesting parts of being a rat, like the interesting parts of being a human, are non-verbal and largely unconscious.

Like the drunk person looking for their keys under the streetlight because that's where they can see, despite knowing that it's not where they left the keys, engineers decided to model language because it's a relatively easy problem. Like the constructed, simplified languages (such as JavaScript, C, Lisp, assembly languages for various kinds of computers, and thousands of others whose differences are mostly superficial) that we use for programming computers, all that large language models do is present an interface that humans can type language into in order to make switches open and close inside a computer. Those switches generate streams of 0s and 1s that are transmitted over radio waves or physical cables, and received by your computer, causing it to light certain pixels on a screen and darken others, or (if you're using a screen reader) generate sound.

When you write a Python program and execute it, your computer transforms streams of 0s and 1s to other streams of 0s and 1s, eventually causing an output device (a speaker, a monitor) to generate a signal that's then processed by your own sensory organs. When you type something into ChatGPT, the same thing happens. The main differences between the second case and the first one are a huge database, and some very carefully written mathematical programs for comparing your input to the contents of that database to find the answer that's most likely to be what you're looking for.
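If the "streams of 0s and 1s" framing sounds abstract, here is a tiny illustrative sketch (the message is just an example): the text going out over a cable or a radio wave is a bit stream, and the receiving end turns that same bit stream back into something your senses can take in.

```python
# A tiny illustration: text -> bits -> text. The message is only an example;
# the point is that the whole trip is streams of 0s and 1s.

message = "you are not a parrot"

# Text -> bytes -> bits, which is all a cable or a radio wave carries.
bits = "".join(format(byte, "08b") for byte in message.encode("utf-8"))
print(bits[:24])  # 011110010110111101110101 -- the bytes for "you"

# The receiving end turns the same stream back into something for a monitor
# or a screen reader to present to your senses.
raw = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
print(raw.decode("utf-8"))  # you are not a parrot
```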

There's no mind there, and the only way there could even hypothetically be one is if the computer starts collecting sensory data on its own and integrating it with its language database to generate something that's truly new, not cut and pasted together like a ransom note. That was my point with the rat example. "But robots exist, don't they?", you might ask. The dream of the human-looking android that I grew up with in the '80s hasn't materialized; robotics specialists have learned that it's more useful to build a machine that's very specialized to one job. Just as software that processes language is specialized to language. There are uses for these tools, to be sure. But to confuse them with minds is to cheapen yourself. Not just yourself, but every organism that has something resembling a mind, even a hamster. If you could understand a hamster well enough to build one from scratch, that would be a far more impressive achievement than writing a program that passes the Turing test.

Where does the temptation come from to reduce ourselves to language processors, as if all that mattered about us was the strings of words we exchange with other people (and sometimes our pets; who's a kitty?)? As if we were machines for turning stimuli into responses, and a mind were therefore whatever turns a sufficiently complex (but still pretty simple) stimulus into a sufficiently complex (ditto) response? Most communication is nonverbal, but more than that, most of what our minds do is internal; perhaps it's communication between different body systems or even different systems within one brain, but that kind of communication isn't what a language model models.

Anyone who's taken Gender Studies 101 could give one explanation for the confusion of language processing with Mind: the feminization of the body and masculinization of the mind, so that the male-dominated teams that build software and hardware would think that only the mind is worth simulating. There are more errors on top of that: Descartes' error that the body and mind can meaningfully be considered separately, and the error I've tried to sketch out above, which is to confuse the relatively simple and minor task of language processing with the mind as a whole.

Someone who knows more about physiology and cognition than I do could say much more, and they already have, at least prior to the current intellectual fad of large language models. Despite having learned and mostly forgotten a little bit, and never having learned most of it, I can't stop myself from offloading my thoughts anyway. If you don't know much about computers, don't let anyone who claims to understand them shame you out of your innate skepticism that AI chat-bots are minds. And if you do know something about computers, and think that these programs are minds, then please, think harder.

