The latter, especially, was, I thought, popularized because it was so surprisingly good at improving benchmark performance.
Inner-monologue is an example because, as far as we know, it should have existed in pre-GPT-3 models and been constantly improving, but we wouldn’t have noticed, because no one would have been prompting for it, and if they had, they probably wouldn’t have recognized what they had found. (The paper I linked might have demonstrated that by finding nontrivial performance in smaller models.) Only once it became fairly reliable in GPT-3 could hobbyists on 4chan stumble across it and be struck by the fact that, contrary to what all the experts said, GPT-3 could solve harder arithmetic or reasoning problems if you very carefully set it up just right as an elaborate multi-step process, instead of doing what everyone did, which was to just prompt it for the answer right away.
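To make the contrast concrete, here is a minimal sketch (the arithmetic problem and the prompt wording are my own invented illustration, not drawn from any particular paper or post): the ‘direct’ style everyone defaulted to, versus the elaborate multi-step scaffold.

```python
# Hypothetical illustration of the two prompting styles discussed above;
# the problem and phrasing are invented for this example.

# The obvious approach: ask for the answer immediately.
direct_prompt = "Q: What is 437 * 24?\nA:"

# The inner-monologue approach: lay the problem out as an explicit
# multi-step derivation, so the model only has to complete the last step.
scratchpad_prompt = (
    "Q: What is 437 * 24?\n"
    "A: Let's work step by step.\n"
    "437 * 24 = (437 * 20) + (437 * 4)\n"
    "437 * 20 = 8740\n"
    "437 * 4 = 1748\n"
    "8740 + 1748 ="
)

# The decomposition itself is sound: each partial product is easy,
# and the final sum recovers the right answer.
assert 437 * 20 == 8740 and 437 * 4 == 1748
assert 8740 + 1748 == 437 * 24 == 10488
```

The point of the scaffold is that each completion step is individually easy for a next-token predictor, whereas the direct prompt demands the whole computation in one shot.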
Saying it doesn’t count because, once it was discovered, it was such a large real improvement is circular, and defines away any possible example. (Did it not improve benchmarks once discovered? Then who cares about such an ‘uncoupled’ capability; it’s not a real improvement. Did it subsequently improve benchmarks once discovered? Then it’s not really an example, because it’s ‘coupled’...) Surely the most interesting examples are exactly the ones which do that!
And of course, now there is so much discussion, and so many examples, and it is in such widespread use, and has so contaminated all LLMs trained since, that they start to do it by default given the slightest pretext. The popularization eliminated the hiddenness. And here we are with ‘reasoning models’, which have blown through quite a few older forecasts and moved timelines earlier by years, to the point that people are severely disappointed when a model like GPT-4.5 ‘only’ does as well as the scaling laws predicted, and start declaring that the AI bubble is about to pop and that scaling has been refuted.
would also be useful for accurately finishing a sentence starting with “Eliezer Yudkowsky says...”.
But that would be indistinguishable from many other sources of improvement. For starters, by giving a name, you are only testing one direction, ‘name → output’; truesight is about ‘name ← output’. The ‘reversal curse’ is an example of how such inference arrows are not necessarily bidirectional and do not necessarily scale much. (But if you didn’t know that, you would surely conclude the opposite.) There are many ways to improve performance at predicting output: better world-knowledge, abstract reasoning, use of context, access to tools or grounding like web search… No benchmark really distinguishes between these, such that you could point to a single specific number and say, “that’s the truesight metric, and you can see it gets better with scale”.