That is an impressive (and amusing) capability!
Presumably the fine-tuning reinforced the model’s prior experience with acrostic text. It also seems to have enhanced the model’s ability to recognize, and correctly explain, that the text is an acrostic, even with only two letters of the acrostic in context so far. It’s presumably fairly common for an acrostic and an explanation of it to appear in the same document. What I suspect is rarer in the training data is acrostic text that explains itself, as the model’s response did here (though doubtless there are some examples somewhere). Still, this is mostly just combining two skills, something LLMs are clearly capable of. The impressive part is that the model already knew, at the end of the second line, which word starting with “HE” the rest of the acrostic was going to spell out.
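One way to put that claim on firmer footing would be a simple behavioral check: condition on the first two lines and measure how often sampled continuations actually complete the acrostic. A rough sketch below, with the caveat that “gpt2” is only a stand-in for whatever checkpoint was actually fine-tuned, and the two seed lines are invented for illustration:

```python
# Rough behavioral check: given two lines starting with H and E, how often do
# sampled continuations produce lines whose initials spell out the rest of "HELLO"?
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the real test needs the acrostic fine-tune
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Hypothetical first two lines of a HELLO acrostic (H, E)
seed = "Here comes the sun, bright and new\nEvery shadow fades from view\n"
inputs = tok(seed, return_tensors="pt")

hits = 0
n_samples = 50
with torch.no_grad():
    for _ in range(n_samples):
        out = model.generate(
            **inputs,
            do_sample=True,
            temperature=0.8,
            max_new_tokens=60,
            pad_token_id=tok.eos_token_id,
        )
        # Keep only the newly generated text, then read off each line's first letter.
        new_text = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
        initials = "".join(
            line.strip()[0].upper() for line in new_text.split("\n") if line.strip()
        )
        if initials.startswith("LLO"):
            hits += 1

print(f"{hits}/{n_samples} continuations complete the HELLO acrostic")
```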
It would be interesting to look at this in activation space: does the model already have a strong internal activation for “HELLO” (or perhaps “H… E… L… L… O…”) somewhere inside it, even while it is generating the first or second line? It presumably needs something like this to be able to generate acrostics at all, and previous work has suggested that the latent spaces of typical models contain directions for “words starting with the letter <X>”.
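For the activation-space question, a logit-lens-style readout is an obvious first pass: take the residual stream at the end of line two, project it through the unembedding matrix at every layer, and see how much probability mass lands on tokens beginning with the letter the acrostic needs next. Again just a sketch under the same stand-in assumptions (base GPT-2, invented lines), not anything from the original experiment:

```python
# Logit-lens sketch: at the end of line two, how much does each layer already
# "want" the next line to start with L?
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the interesting run would use the acrostic-tuned checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

# Hypothetical first two lines (H, E); the next line should start with L.
prompt = "Here comes the sun, bright and new\nEvery shadow fades from view\n"
inputs = tok(prompt, return_tensors="pt")

# Token ids whose decoded text begins with the letter the acrostic needs next ("L").
next_letter_ids = [
    i for i in range(len(tok)) if tok.decode([i]).lstrip().startswith("L")
]

with torch.no_grad():
    out = model(**inputs)
    hidden = out.hidden_states                  # (n_layers + 1) tensors of shape [1, seq, d_model]
    W_U = model.get_output_embeddings().weight  # [vocab, d_model]
    last = inputs["input_ids"].shape[1] - 1     # position of the final token (end of line two)

    for layer, h in enumerate(hidden):
        resid = model.transformer.ln_f(h[0, last])  # final layer norm first (GPT-2-specific attribute)
        probs = torch.softmax(resid @ W_U.T, dim=-1)
        mass = probs[next_letter_ids].sum().item()
        print(f"layer {layer:2d}: P(next token starts with 'L') = {mass:.4f}")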
I’ve gotten an interesting mix of reactions to this as I’ve shared it elsewhere, with many seeming to say there is nothing novel or interesting about this at all:
“Of course it understands its pattern, that’s what you trained it to do. It’s trivial to generalize this to be able to explain it.”
However, I suspect that if those same people saw a post along the lines of “look what the model says when you tell it to explain its processing”, they would reply:
“Nonsense. They have no ability to describe why they say anything. Clearly they’re just hallucinating up a narrative based on how LLMs generally operate.”
If it wasn’t just dumb luck (which I suspect it wasn’t, given the number of times the model got the answer completely correct), then it is combining a few skills or understandings, without violating any token-prediction basics at the granular level. But I do think it opens up avenues either to be less dismissive in general when models talk about what they are doing internally, or to figure out how to train a model to be more meta-aware in general.
And yes, I would be curious to see what was happening in the activation space as well, especially since this was difficult to replicate with simpler patterns.