In the poetry case study, we had set out to show that the model didn’t plan ahead, and found instead that it did.
I found it shocking that they didn’t think the model plans ahead. The poetry ability of LLMs since at least GPT-2 is well beyond what feels possible without anticipating a rhyme by planning at least a handful of tokens in advance.
It’s not so much that we didn’t think models plan ahead in general, as that we had various hypotheses (including “unknown unknowns”) and this kind of planning in poetry wasn’t obviously the best one until we saw the evidence.
[More generally: in Interpretability we often have the experience of being surprised by the specific mechanism a model is using, even though with the benefit of hindsight it seems obvious. E.g. when we did the work for Towards Monosemanticity we were initially quite surprised to see the “the in <context>” features, thought they were indicative of a bug in our setup, and had to spend a while thinking about them and poking around before we realized why the model wanted them (which now feels obvious).]