The “Reversal Curse”: you still aren’t anthropomorphising enough.
Link post
I scrutinise the so-called “reversal curse”, wherein LLMs trained that “A is B” seem unable to recall the inverse relationship “B is A” between conceptual nodes.
I show that, far from proving a lack of logical ability, this is an ordinary artefact of salience, known in humans as associative recall asymmetry, and I propose a conceptual-network model of its causes that works independently of substrate.
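To make the substrate-independent claim concrete, here is a minimal toy sketch (my own construction, not the post’s actual model): a conceptual network in which co-occurrence strengthens only the directed cue-to-target edge. Forward recall then succeeds while reverse recall fails, with no logic involved at all. The `AssociativeNet` class and the parent–child example are illustrative assumptions.

```python
from collections import defaultdict


class AssociativeNet:
    """Toy directed conceptual network: observing a pair strengthens
    only the cue -> target edge, never the reverse."""

    def __init__(self):
        # (cue, target) -> accumulated association strength
        self.weights = defaultdict(float)

    def observe(self, cue, target):
        # Each exposure strengthens the association in one direction only.
        self.weights[(cue, target)] += 1.0

    def recall(self, cue):
        # Return the most strongly associated target for this cue, if any.
        candidates = [(w, t) for (c, t), w in self.weights.items() if c == cue]
        return max(candidates)[1] if candidates else None


net = AssociativeNet()
for _ in range(5):  # repeated exposure, always in the same direction
    net.observe("Mary Lee Pfeiffer", "Tom Cruise")  # "her son is Tom Cruise"

print(net.recall("Mary Lee Pfeiffer"))  # forward recall succeeds: Tom Cruise
print(net.recall("Tom Cruise"))         # reverse recall fails: None
```

The network never stored anything wrong; it simply never built the reverse edge, which is the asymmetry the post attributes to salience rather than to a deficit in logical skill.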