hierarchon

the weapon-doll

ultimately I think the problem with "this AI model clearly isn't sentient because it can't do X" arguments is that, while I agree ChatGPT et al. aren't sentient, there's no clear standard for what it'd take to say "yes, this is a full-blown person" besides vibes-based arguments. and a lot of the arguments for why it can't be sentient because it's just a bunch of nonlinearities in silicon would also apply if you, say, completely modeled a brain in silicon, and I certainly hope people think a 100% accurate simulation of a brain is sentient!

anyway, the Chinese room as a whole understands Chinese and arguing otherwise is to imply that your own mind does not understand language because none of the individual neurons do

edit: locking shares because I'm grumpy



in reply to @hierarchon's post:

i think the way i've preferred to model it is "there is no reason to reject the null hypothesis that [neural net] is not sentient", which is a more reasonable way to state "X is not sentient".

i like it because it highlights the real disagreement here, which is "should we give the benefit of the doubt for sentience/sapience/self-awareness/etc." it comes down to "should we choose the null hypothesis to be 'X thing is sentient' or 'X thing is not sentient', and why?"

I fucking hate the Chinese room argument for exactly the reason you point out—any system that is indistinguishable from sentience is definitionally sentient. I think the more pertinent issue here is that these models are vastly, as in many orders of magnitude, simpler than even an animal brain, let alone a human one.

What processing power they do have (which is certainly substantial relative to a desktop computer!) is laser targeted at things that mimic sentience, which from everything we know about cognition is almost certainly a dead end for actually achieving sentience. The mind as we understand it is a massive series of layered systems, each of which produces the next system up as an emergent phenomenon. Language use only emerges at the very peak of that set of layers, arguably later than the minimum requirements for sentience or personhood. Trying to "skip the line" and just do language without anything else underlying it is always going to produce an empty shell spouting regurgitated gibberish that only makes sense because of its vast input set, and that can't possibly be a person.

(Another reasonable test for personhood is that sentient beings can go from zero knowledge to coherence with vastly less data, in vastly less structured formats, than these language models need. I'd be much more interested if someone created a model with no prior language training, hooked it up to a camera feed for two years, and it eventually learned to talk.)
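As a rough back-of-envelope check on the "orders of magnitude" and data-efficiency points above (the figures here are ballpark assumptions for illustration, not numbers from the thread: a GPT-3-class model at ~1.75e11 parameters, a human brain at ~1e14 synapses, ~3e11 training tokens versus the ~1e8 words a child might hear in its first decade):

```python
import math

# Ballpark figures -- assumptions for illustration, not numbers from the thread.
model_parameters = 1.75e11   # a GPT-3-class language model
brain_synapses = 1.0e14      # human brain, order-of-magnitude estimate

training_tokens = 3.0e11     # tokens a large language model is trained on
child_words = 1.0e8          # words a child hears in roughly its first decade

complexity_gap = math.log10(brain_synapses / model_parameters)
data_gap = math.log10(training_tokens / child_words)

print(f"complexity gap: ~{complexity_gap:.1f} orders of magnitude")
print(f"data gap:       ~{data_gap:.1f} orders of magnitude")
```

Even this crude parameter-to-synapse comparison, which almost certainly understates the gap (a biological synapse does far more than a single weight), leaves the model a few orders of magnitude short, while it consumes a few thousand times more language data than a child does.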

tbh I am always wary of sentience tests that rely on extrapolating from facts about humanity; feels a bit like saying "a plane clearly can't fly because it can't flap its wings and it has no feathers"

Sure, but with only one data point there's not much we can do beyond "look at humanity" or "pure speculation" 😅. I do think that most of these things are fairly plausibly derivable from just an expectation that the general level of complexity will be comparable even if the specifics are highly different, though.