I think there are currently 5 live players: Google, Anthropic, OpenAI, xAI, and Meta (but not DeepSeek or SSI), because frontier training compute is necessary and only these 5 seem to have a prospect of keeping up in 2025-2026. This can change if someone else gets enough funding or access to chips (as it quickly did with xAI), but that’s still a major additional hurdle no matter how competent a company is in other ways.
Llama-3-405B, with known details and the handicap of being a dense model, demonstrates that the rumored compute multipliers of other AI companies don’t have enough oomph to really matter. Probably numbers like 4x per year refer to benchmark performance rather than perplexity, so most of that gain doesn’t directly help with general intelligence and doesn’t scale once much more data becomes necessary at higher compute. The low spread between different frontier AI companies is a similar observation.
There were multiple reports claiming that scaling base LLM pretraining yielded unexpected diminishing returns for several new frontier models in 2024, like OpenAI’s Orion, which was apparently planned to be GPT-5. They mention a lack of high quality training data, which, if it is the cause, would not be surprising, as the Chinchilla scaling law only applies to perplexity, not necessarily to practical (e.g. benchmark) performance. Base language models perform a form of imitation learning, and it seems that you don’t get performance that is significantly smarter than the humans who wrote the text in the pretraining data, even if perplexity keeps improving.
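For reference, the Chinchilla scaling law (Hoffmann et al. 2022) is a fit to pretraining loss, i.e. log-perplexity, as a function of parameter count N and training tokens D; nothing in it speaks directly to downstream benchmark performance:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

with fitted constants of roughly E ≈ 1.69, A ≈ 406.4, B ≈ 410.7, α ≈ 0.34, β ≈ 0.28 in the original paper.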
Since pretraining compute has in the past been a major bottleneck for frontier LLM performance, a now reduced effect of pretraining means that algorithmic progress within a lab is more important than it was two years ago. That would mean the relative importance of having a lot of compute has gone down, and the relative importance of having highly capable AI researchers (who can improve model performance through better AI architectures or training procedures) has gone up. The ability of a lab’s AI researchers also seems to depend much less on available money than its compute resources do. That would explain why e.g. Microsoft or Apple don’t have highly competitive models despite large financial resources, and why xAI’s Grok 3 isn’t very far beyond DeepSeek’s R1 despite a vastly greater compute budget.
Now it seems possible that this will change in the future, e.g. when performance starts to strongly depend on inference compute (i.e. not just logarithmically), or when pretraining switches from primarily text to primarily sensory data (like video), which wouldn’t be bottlenecked by imitation learning on human-written text. Another possibility is that pretraining on synthetic LLM outputs, like CoTs, could provide the necessary superhuman text for the pretraining data. But none of this is currently the case, as far as I can tell.
Pretraining on a $150bn system in 2028 gives 150x the compute of Grok 3 (which seems to be a 3e26 FLOPs model). We also haven’t seen what happens if DeepSeek-V3 methods are used in pretraining on the $5bn system that trained Grok 3 in 2025 (roughly 100x DeepSeek-V3’s compute), or on a $20bn system in 2026 (a further 8x in FLOPs).
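To make the rough arithmetic explicit, here is a minimal sketch using the figures quoted above (all of them approximate assumptions, not measurements):

```python
# Rough scale-up arithmetic with the approximate figures from the comment above.
grok3_flops = 3e26         # assumed Grok 3 pretraining compute
deepseek_v3_flops = 4e24   # assumed DeepSeek-V3 raw pretraining compute

# $150bn system in 2028: ~150x Grok 3.
print(f"$150bn system (2028): ~{150 * grok3_flops:.1e} FLOPs")

# DeepSeek-V3 methods on the $5bn Grok 3 system: ~75x V3's raw compute
# by these round numbers (the comment above calls it roughly 100x).
print(f"Grok 3 system vs DeepSeek-V3: ~{grok3_flops / deepseek_v3_flops:.0f}x")

# $20bn system in 2026: a further ~8x over the $5bn system.
print(f"$20bn system (2026): ~{8 * grok3_flops:.1e} FLOPs")
```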
Chinchilla scaling law only applies to perplexity, not necessarily to practical (e.g. benchmark) performance
I think perplexity is a better measure of general intelligence than any legible benchmark. There are rumors that in some settings R1-like methods only started showing signs of life for GPT-4 level models where exactly the same thing didn’t work for weaker models[1]. Something else might first start working with the kind of perplexity that a competent lab can concoct in a 5e27 FLOPs model, even if it can later be adopted for weaker models.
lack of high quality training data
This is an example of a compute multiplier that doesn’t scale, and the usual story is that there are many algorithmic advancements with the same character: they help at 1e21 FLOPs but become mostly useless at 1e24 FLOPs. The distinction between perplexity and benchmarks in measuring compute multipliers (keeping the dataset unchanged) might be a good proxy for predicting which is which.
you don’t get performance that is significantly smarter than the humans who wrote the text in the pretraining data
Prediction of details can make use of arbitrarily high levels of capability, vastly exceeding that of the authors of the predicted text. What the token prediction objective gives you is generality and grounding in the world, even if it seems to be inefficient compared to imagined currently-unavailable alternatives.
[1] Before 2024, only OpenAI (and briefly Google) had a GPT-4 level model, while in 2024 GPT-4 level models became ubiquitous. This might explain how a series of reproductions of o1-like long reasoning performance followed in quick succession, in a way that doesn’t significantly rely on secrets leaking from OpenAI.
Chinchilla scaling law only applies to perplexity, not necessarily to practical (e.g. benchmark) performance
I think perplexity is a better measure of general intelligence than any legible benchmark. There are rumors that in some settings R1-like methods only started showing signs of life for GPT-4 level models where exactly the same thing didn’t work for weaker models[1]. Something else might first start working with the kind of perplexity that a competent lab can concoct in a 5e27 FLOPs model, even if it can later be adopted for weaker models.
But GPT-4 didn’t just have better perplexity than previous models, it also had substantially better downstream performance. To me it seems more likely that the better downstream performance is what made the model well-suited for reasoning RL, since this is what we would intuitively describe as its degree of “intelligence”, and intelligence seems important when teaching a model how to reason, while it’s not clear what perplexity itself would be useful for. (One could probably test this by training a GPT-4 scale model with similar perplexity but on bad training data, such that it only reaches the downstream performance of older models. Then I predict that it would be as bad as those older models when doing reasoning RL. But of course this is a test far too expensive to carry out.)
you don’t get performance that is significantly smarter than the humans who wrote the text in the pretraining data
Prediction of details can make use of arbitrarily high levels of capability, vastly exceeding that of the authors of the predicted text. What the token prediction objective gives you is generality and grounding in the world, even if it seems to be inefficient compared to imagined currently-unavailable alternatives.
You may train a model on text typed by little children, such that the model is able to competently imitate a child typing, but the resulting model performance wouldn’t significantly exceed that of a child, even though the model uses a lot of compute. Training on text doesn’t really give a lot of direct grounding in the world, because text represents real world data that has been compressed and filtered by human brains, and their intelligence acts as a fundamental bottleneck. Imagine you are a natural scientist, but instead of making direct observations in the world, you are locked in a room and limited to listening to what a little kid, who saw the natural world, happens to say about it. After listening for a while, at some point you wouldn’t learn much more about the world.
Oh yeah, I forgot about Meta. As for DeepSeek: will they not get a ton more compute in the next year or so? I imagine they’ll have an easy time raising money and getting the government to cut red tape for them now that they’ve made international news and become the best-selling app.
In principle, sufficiently granular MoEs keep matrices at a manageable size, and critical minibatch size scales quickly enough in the first several trillion tokens of pretraining that relatively small scale-up world sizes (from poor inter-chip networking and weaker individual chips) are not a barrier. So unconscionable numbers of weaker chips should still be usable (at good compute utilization) in frontier training going forward. It’s still a major hurdle, though, one that is even more expensive and complicated.
Do you take Grok 3 as an update on the importance of hardware scaling? If xAI used 5-10x more compute than went into any other model (which seems likely but not necessarily true?), then the fact that it wasn’t discontinuously better than other models seems like evidence against the importance of hardware scaling.
Using 100x more compute has shown discontinuous changes so far; on that log scale, 10x is half of such a jump and 3x is a quarter. The scale of Grok 3 is 100K H100s, and 20K-H100 clusters have been around since summer 2023, so some current models were likely trained on merely 3x less compute than Grok 3. Also, if there is no Gemini 2.0 Ultra (whether it failed or was never planned), then Pro got the bulk of the 2.0 compute, which is plausibly about 6e26 FLOPs, 2x the Grok 3 compute.
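A minimal sketch of that log-scale framing and the cluster comparison (the GPU counts and the ~3x figure are the assumptions from this comment, not measurements):

```python
import math

# On a log-compute scale, a 10x jump is half of a 100x jump, 3x about a quarter.
for ratio in (3, 10, 100):
    frac = math.log10(ratio) / math.log10(100)
    print(f"{ratio:>3}x compute = {frac:.2f} of a 100x jump")

# Cluster comparison: 100K H100s for Grok 3 vs the 20K-H100 clusters available since 2023.
# That is 5x the chips; the comment treats the resulting compute gap as only ~3x
# (e.g. if the smaller clusters ran longer training runs).
print(f"GPU ratio: {100_000 / 20_000:.0f}x")
```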
My sense is that a difference of 3x is less significant than the post-training or obscure pretraining compute multipliers that can differ between contemporary models, and only a difference of 10x is usually noticeable (but can still be overcome with much better methods, especially at smaller scale). I think most compute multipliers from better data mixes and algorithms don’t really work for improving general intelligence (especially those demonstrated in terms of benchmark performance rather than perplexity), or don’t scale to much more compute (and therefore data), so raw compute remains a crucial anchor of capability. A 100x change in raw compute is likely to remain the single most important factor in explaining differences in capability.
MoEs were recently shown to offer a 3x compute multiplier at 1:8 sparsity (as rumored for the original GPT-4) compared to dense (like Llama-3-405B), and a 6x multiplier at 1:32 sparsity (as in DeepSeek-V3). I think these multipliers are real and describe scaling of general intelligence. For example, the raw compute of DeepSeek-V3 is about 4e24 FLOPs, which corresponds to an effective compute of 2.5e25 FLOPs in a dense model, merely 1.5x less than the 4e25 FLOPs of Llama-3-405B. And the raw compute of the original GPT-4 is rumored to be 2e25 FLOPs, which corresponds to 6e25 FLOPs in a dense model, 1.5x more than Llama-3-405B. Across this range, DeepSeek-V3 still manages to win out.
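A quick sketch of that effective-compute arithmetic, using the multipliers and raw-compute figures above (all rumored or estimated, not measured):

```python
# Dense-equivalent ("effective") compute under the sparsity multipliers quoted above.
moe_multiplier = {"1:8": 3.0, "1:32": 6.0}    # assumed multipliers vs a dense model

llama3_405b = 4e25                             # dense, so raw compute = effective compute

deepseek_v3 = 4e24 * moe_multiplier["1:32"]    # ~2.4e25 dense-equivalent FLOPs
gpt4 = 2e25 * moe_multiplier["1:8"]            # ~6e25 dense-equivalent FLOPs (rumored raw compute)

print(f"DeepSeek-V3: ~{deepseek_v3:.1e} effective, "
      f"{llama3_405b / deepseek_v3:.1f}x below Llama-3-405B")   # ~1.7x (the comment rounds to 1.5x)
print(f"GPT-4:       ~{gpt4:.1e} effective, "
      f"{gpt4 / llama3_405b:.1f}x above Llama-3-405B")          # ~1.5x
```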
Grok 3 used maybe 3x more compute than 4o or Gemini, and it topped Chatbot Arena and many benchmarks despite the fact that xAI was playing catch-up and that 3x isn’t very significant, since the gain is logarithmic.
I take Grok 3’s slight superiority as evidence for, not against, the importance of scaling hardware.
How do we know it was 3x? (If true, I agree with your analysis)
Based on Vladimir_Nesov’s calculations:
https://www.lesswrong.com/posts/WNYvFCkhZvnwAPzJY/go-grok-yourself?commentId=p3nTkpshMq7SmXLjc