We expose a surprising failure of generalization in auto-regressive large language
models (LLMs). If a model is trained on a sentence of the form “A is B”, it will
not automatically generalize to the reverse direction “B is A”. This is the Reversal Curse.
This is an interesting paper showing, once again, that the same AI scaling problems that plagued the field in the 1960s affect modern systems too. But the bias and intent of the researchers show through plainly: a "surprising failure of generalization" rather than a more or less expected result of what LLMs actually do (i.e., predict which language token could come next), plus vague appeals along the lines of "well, maybe humans have the same problem!"
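The "expected result" point can be made concrete with a toy next-token model. This is a minimal sketch, not the paper's experiment: a bigram counter trained only on the forward sentence "Mary is Tom's mother". Because next-token statistics are inherently directional, the reverse completion gets zero probability even though the fact is "known" in the forward direction.

```python
from collections import defaultdict

# Toy bigram "language model": counts of next token given the previous token.
# Illustrative only -- real LLMs are vastly more complex, but next-token
# statistics are directional in the same way.
counts = defaultdict(lambda: defaultdict(int))

def train(sentence):
    tokens = sentence.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

def prob(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

# Train only on the forward direction ("A is B").
train("Mary is Tom's mother")

print(prob("Mary", "is"))   # forward bigram was seen: 1.0
print(prob("is", "Mary"))   # reverse bigram never seen: 0.0
```

Nothing here "fails to generalize" in a surprising way; the model simply never estimated statistics for the reverse direction, which is the deflationary reading of the result.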


