“The underlying reality is that their core products have mostly stagnated for over a year. In short: they’re faking being close to AGI.”
This seems like the most load-bearing belief in the full-cynical model; most of your other examples of fakeness rely on it in one way or another:
If the core products aren’t really improving, the progress measured on benchmarks is fake. But if they are, the benchmarks are an (imperfect but still real) attempt to quantify that real improvement.
If LLMs are stagnating, all the people generating dramatic-sounding papers for each new SOTA are just maintaining a holding pattern. But if they’re changing, then just studying/keeping up with the general properties of that progress is real. Same goes for people building and regularly updating their toy models of the thing.
Similarly, if the progress is fake, the propaganda signal-boosting that progress is also fake. If it isn’t, it isn’t. (At least directionally; a lot of that propaganda is still probably exaggerated.)
If the above three are all fake, all the people who feel real scared and want to be validated are stuck in a toxic emotional dead-end where they constantly freak out over fake things to no end. But if they’re responding to legitimate, persistent worldview updates, having a space to vibe them out with like-minded others seems important.
So, in deciding whether or not to endorse this narrative, we’d like to know whether or not the models really ARE stagnating. What makes you think the appearance of progress here is illusory?
“This seems like the most load-bearing belief in the full-cynical model; most of your other examples of fakeness rely on it in one way or another [...]”
Nope!
Even if the base models are improving, it can still be true that most of the progress measured on the benchmarks is fake, and has basically-nothing to do with the real improvements.
Even if the base models are improving, it can still be true that the dramatic sounding papers and toy models are fake, and have basically-nothing to do with the real improvements.
Even if the base models are improving, the propaganda about it can still be overblown and mostly fake, and have basically-nothing to do with the real improvements.
Even if the base models are improving, the people who feel real scared and just want to be validated can still be doing fake work and in fact be mostly useless, and their dynamic can still have basically-nothing to do with the real improvements.
Just because the base models are in fact improving does not mean that all this other stuff is actually coupled to the real improvement.
Sounds like you’re suggesting that real progress could be orthogonal to human-observed progress. I don’t see how this is possible. Human-observed progress is too broad.
Taken together, the benchmarks, dramatic papers and toy models, propaganda, and doomsayers suggest the models are simultaneously improving at: writing code, researching data online, generating coherent stories, persuading people of things, acting autonomously without human intervention, playing Pokemon, playing Minecraft, playing chess, aligning to human values, pretending to align to human values, providing detailed amphetamine recipes, refusing to provide said recipes, passing the Turing test, writing legal documents, offering medical advice, knowing what they don’t know, being emotionally compelling companions, correctly guessing the true authors of anonymous text, writing papers, remembering things, etc., etc.
They think all these improvements are happening at the same time in vastly different domains because they’re all downstream of the same task, which is text prediction. So these skills get lumped together under the general heading of ‘capabilities’, and a model which can do all of them well gets called a ‘general intelligence’. If the products are stagnating, sure, all those perceived improvements could be bullshit. (Big ‘if’!) But how could the models be ‘improving’ without improving at any of these things? What domains of ‘real improvement’ exist that are uncoupled to human perceptions of improvement, but still downstream of text prediction?
“What domains of ‘real improvement’ exist that are uncoupled to human perceptions of improvement, but still downstream of text prediction?”
As defined, this is a little paradoxical: how could I convince a human like you to perceive domains of real improvement which humans do not perceive...?
“correctly guessing the true authors of anonymous text”
See, this is exactly the example I would have given: truesight is an obvious example of a domain of real improvement which appears on no benchmark I am aware of, but which appears to correlate strongly with the pretraining loss. It is not applied anywhere (I hope), it is unobvious that LLMs might do it at all, and the capability does not naturally reveal itself in any standard use-case (which is why people are shocked when it surfaces). It would have been easy for no one to have observed it up until now, or to have dismissed it; even now, after a lot of publicizing (including by yours truly), only a few weirdos know much about it.
Why can’t there be plenty of other things like inner-monologue or truesight? (“Wait, you could do X? Why didn’t you tell us?” “You never asked.”)
“What domains of ‘real improvement’ exist that are uncoupled to human perceptions of improvement, but still downstream of text prediction?”
Maybe a better example would be to point out that ‘emergent’ tasks in general, particularly multi-step tasks, can have observed success rates of precisely 0 in feasible finite samples, but extreme brute-force sampling reveals hidden scaling. Humans would perceive zero improvement as the models scaled (0/100 = 0%, 0/100 = 0%, 0/100 = 0%...), even though they might be rapidly improving from 1/100,000 to 1/10,000 to 1/1,000 to… etc. “Sampling can show the presence of knowledge but not the absence.”
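To make the arithmetic concrete, here is a minimal sketch (with hypothetical numbers, not drawn from any particular benchmark) of why an evaluator with a feasible sampling budget would see flat zeros even while the underlying success rate improves by orders of magnitude:

```python
# Hypothetical per-attempt success rates for a multi-step task as models scale up.
true_success_rates = [1 / 100_000, 1 / 10_000, 1 / 1_000]
n_samples = 100  # a feasible evaluation budget

for p in true_success_rates:
    # Probability that all 100 sampled attempts fail, i.e. the benchmark reports 0/100.
    p_observe_zero = (1 - p) ** n_samples
    print(f"true rate {p:.5f}: P(observed 0/100) = {p_observe_zero:.3f}")
```

Even at the highest of these rates, roughly 90% of such evaluations would still report 0/100, so the observed curve stays flat while the hidden one climbs.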
“As defined, this is a little paradoxical: how could I convince a human like you to perceive domains of real improvement which humans do not perceive...?”
Oops, yes. I was thinking “domains of real improvement which humans are currently perceiving in LLMs”, not “domains of real improvement which humans are capable of perceiving in general”. So a capability like inner-monologue or truesight, which nobody currently knows about, but is improving anyway, would certainly qualify. And the discovery of such a capability could be ‘real’ even if other discoveries are ‘fake’.
That said, neither truesight nor inner-monologue seems uncoupled to the more common domains of improvement, as measured in benchmarks and toy models and people-being-scared. The latter, especially, I thought was popularized because it was so surprisingly good at improving benchmark performance. Truesight is narrower, but at the very least we’d expect it to correlate with skill in the common “write [x] in the style of [y]” prompt, right? Surely the same network of associations which lets it accurately generate “Eliezer Yudkowsky wrote this” after a given set of tokens would also be useful for accurately finishing a sentence starting with “Eliezer Yudkowsky says...”.
So I still wouldn’t consider these things to have basically-nothing to do with commonly perceived domains of improvement.
“The latter, especially, I thought was popularized because it was so surprisingly good at improving benchmark performance.”
Inner-monologue is an example because as far as we know, it should have existed in pre-GPT-3 models and been constantly improving, but we wouldn’t have noticed because no one would have been prompting for it and if they had, they probably wouldn’t have noticed it. (The paper I linked might have demonstrated that by finding nontrivial performance in smaller models.) Only once it became fairly reliable in GPT-3 could hobbyists on 4chan stumble across it and be struck by the fact that, contrary to what all the experts said, GPT-3 could solve harder arithmetic or reasoning problems if you very carefully set it up just right as an elaborate multi-step process instead of what everyone did, which was just prompt it for the answer right away.
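For concreteness, a minimal sketch of that difference (the prompts below are hypothetical illustrations, not the actual setups hobbyists used):

```python
# What everyone did: ask for the answer immediately, and conclude the model can't reason.
direct_prompt = "Q: A farmer has 437 crates of 28 apples each. How many apples in total?\nA:"

# The inner-monologue setup: scaffold the same question as an explicit multi-step
# process, so the model spends tokens on intermediate work before the final answer.
scaffolded_prompt = (
    "Q: A farmer has 437 crates of 28 apples each. How many apples in total?\n"
    "A: Let's work through this step by step.\n"
    "First, 437 * 28 = 437 * 20 + 437 * 8.\n"
    "437 * 20 ="
)
```

The capability only surfaces under the second framing, which is why it could sit unnoticed in earlier models that nobody prompted this way.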
Saying it doesn’t count because it proved to be such a large real improvement once discovered is circular and defines away any example. (Did it not improve benchmarks once discovered? Then who cares about such an ‘uncoupled’ capability; it’s not a real improvement. Did it subsequently improve benchmarks once discovered? Then it’s not really an example because it’s ‘coupled’...) Surely the most interesting examples are ones which do exactly that!
And of course, now there is so much discussion, and so many examples, and it is in such widespread use, and has contaminated all LLMs being trained since, that they start to do it by default given the slightest pretext. The popularization eliminated the hiddenness. And here we are with ‘reasoning models’ which have blown through quite a few older forecasts and moved timelines earlier by years, to the extent that people are severely disappointed when a model like GPT-4.5 ‘only’ does as well as the scaling laws predicted and they start predicting the AI bubble is about to pop and scaling has been refuted.
“would also be useful for accurately finishing a sentence starting with ‘Eliezer Yudkowsky says...’.”
But that would be indistinguishable from many other sources of improvement. For starters, by giving a name, you are only testing one direction: ‘name → output’; truesight is about ‘name ← output’. The ‘reversal curse’ is an example of how such inference arrows are not necessarily bidirectional and do not necessarily scale much. (But if you didn’t know that, you would surely conclude the opposite.) There are many ways to improve performance at predicting the output: better world-knowledge, abstract reasoning, use of context, access to tools or grounding like web search… No benchmark really distinguishes between these such that you could point to a single specific number and say, “that’s the truesight metric, and you can see it gets better with scale”.
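As an illustration of why the two directions are separate measurements, here is a minimal sketch (the prompt templates are hypothetical, not taken from any existing benchmark):

```python
# Forward probe (name → output): imitate a named author's style.
forward_probe = "Write a paragraph about AI risk in the style of Eliezer Yudkowsky."

# Reverse probe (name ← output): infer the author from anonymous text, i.e. truesight.
reverse_probe = (
    "Here is an anonymous excerpt:\n"
    "{excerpt}\n"
    "Question: Who most likely wrote this? Answer with a name."
)

# A model can get better at forward_probe-style tasks via better world-knowledge,
# context use, or tools, without that telling you much about reverse_probe accuracy.
```

Since most public evaluations are built from forward-style prompts, the reverse direction can improve (or not) largely out of view.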