this is an interesting read on the limitations of LLMs and how what they're doing is more akin to retrieval than reasoning, but even here there's the one guy in the comments who says "ok sure but who's to say that humans don't also just do retrieval without reasoning"

it's me, i'm here to say humans don't just do that (alongside the OP of course), and this post has an excellent example to demonstrate it.

the post looks at the task of "given a sentence, reverse the words". LLMs struggle to do this when the words / sentence aren't common - it turns out it's easier to predict the next token to output when the tokens are well represented in your training data.

in contrast, a human asked to reverse the words in a sentence would have no trouble regardless of what words are in the sentence. it doesn't need to be a sentence at all! a human can just think "oh, i read the words in reverse order and write each one down as i go". to a human, it wouldn't even make sense to suggest that some words would be harder to reverse than others, because the specific words aren't really the important part of the task
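to make the point concrete, here's that "human algorithm" as a few lines of python (a minimal sketch, assuming words are separated by whitespace) - notice that nothing in it cares what the words actually are:

```python
def reverse_words(sentence):
    # split on whitespace, flip the order, and join back up.
    # the words are opaque tokens here - rare words are exactly
    # as easy to handle as common ones
    return " ".join(reversed(sentence.split()))

print(reverse_words("the quick brown fox"))   # fox brown quick the
print(reverse_words("zyzzyva qat xylyl"))     # xylyl qat zyzzyva
```

the second call uses deliberately obscure words, and the procedure doesn't get any harder - which is the whole point.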

that's what reasoning is! it's applying an understanding of the underlying nature of the problem to arrive at a solution. as demonstrated, it is not what an LLM is doing and honestly it's a real bummer that people think so little of human thought to suggest that human capacity for reason and expression is basically the same as a fancy machine for taking the average
