Why does ai-2027 predict China falling behind? Because the next level of compute beyond the current level is going to be hard for DeepSeek to muster. In other words, it predicts that DeepSeek will be behind in 2026 because of hardware deficits in late 2025. If things moved more slowly, and the critical strategic point hit in 2030 instead of 2027, I think it’s likely China would have closed the compute gap by then.
I agree with this take, but I think it misses some key alternative possibilities. The failure of the compute-rich Llama models to compete with the compute-poorer but talent- and drive-rich Alibaba and DeepSeek shows that even a substantial compute lead can be squandered. Given that there is a lot of room for algorithmic improvements (as the efficiency of the human brain demonstrates), determined engineering plus a willingness to experiment, rather than doubling down on currently working tech (as Anthropic, Google DM, and OpenAI seem likely to do), may yield enough of a breakthrough to hit the regime of recursive self-improvement before, or around the same time as, the compute-rich companies. Once that point is hit, a lead can be gained and maintained through reckless acceleration....
Adopt new things as soon as the latest model predicts they work, without pausing for careful review, and you can move a lot faster than a company proceeding cautiously.
How much faster?
How much compute advantage does the recklessness compensate for?
How reckless will the underdogs be?
These are all open questions in my mind, with large error bars. This is what I think ai-2027 misses in their analysis.
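One toy way to put a number on the compute question: if capability progress per unit time scales roughly like (effective compute)^alpha times iteration speed, then moving f times faster substitutes for an f^(1/alpha) compute multiplier. A minimal sketch of that framing, where the functional form and the exponent are made-up assumptions rather than anything from ai-2027:

```python
# Toy model: how much compute does reckless speed compensate for?
# Assume progress_rate ~ (effective_compute ** ALPHA) * iteration_speed.
# ALPHA and the speedup factors below are illustrative assumptions, not estimates.

ALPHA = 0.3  # hypothetical diminishing-returns exponent on compute

def compute_equivalent(speedup: float, alpha: float = ALPHA) -> float:
    """Compute multiplier that a given iteration-speed multiplier substitutes for."""
    return speedup ** (1 / alpha)

for speedup in (1.5, 2.0, 3.0):
    print(f"{speedup:.1f}x faster iteration ~ {compute_equivalent(speedup):.0f}x compute")
# With ALPHA = 0.3: 1.5x -> ~4x, 2x -> ~10x, 3x -> ~39x.
```

The answer is extremely sensitive to the exponent, which is exactly why the error bars above are large.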
The failure of the compute-rich Llama models to compete with the compute-poorer but talent- and drive-rich Alibaba and DeepSeek
This seems like it’s exaggerating the Llama failure. Maybe the small Llama-4s just released yesterday are a bit of a disappointment because they don’t convincingly beat all the rivals; but how big a gap is that absolutely? When it comes to DL models, there’s generally little reason to use #2; but that doesn’t mean #2 was all that much worse and ‘a failure’ - it might only have been weeks behind #1. (Indeed, a model might’ve been the best when it was trained, and release just took a while. Would it be reasonable to call such a model a ‘failure’? I wouldn’t. It might be a failure of a business model or a corporate strategy, but that model qua model is a good model, Bront.) #2 just means it’s #2, lesser by any amount. How far back would we have to go for the small Llama-4s to have been on the Pareto frontier? It’s still early, but I’m getting the impression so far that you wouldn’t have to go that far back. Certainly not ‘years’ (it couldn’t perform that well on LMArena in its ‘special chatbot configuration’ even sloptimized if it was years behind), unless the wilder rumors turn out to be true (like deliberately training on the test sets—in which case, Zuckerberg may have to burn FB AI with fire and reboot the entire AI org because the culture is irretrievably rotten—but of course such rumors usually do not, so I mention this mostly to indicate that right now Llama Internet commentary is high on heat and low on light).
The failure of the compute-rich Llama models to compete with the compute-poorer but talent- and drive-rich Alibaba and DeepSeek shows that even a substantial compute lead can be squandered. Given that there is a lot of room for algorithmic improvements (as the efficiency of the human brain demonstrates), determined engineering plus a willingness to experiment rather than doubling down on currently working tech…
I’m not really following your argument here. Even if LLaMA-4 is disappointing compared to what DeepSeek could’ve done with the same compute because they’d get 40% MFU instead of FB’s 20% or whatever, and are 2x as good in effective-compute, that doesn’t close the lead when FB finishes its new Manhattan-sized datacenter, say, and has 100x DS’s compute. Or are you arguing for the possibility of someone making an asymptotic scaling law breakthrough with a better exponent, so that even with 1/100th the compute, they can beat one of the giants?
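To make the arithmetic in that question concrete, here is a minimal sketch of a Chinchilla-style power law with made-up constants (the FLOP counts, MFU figures, exponents, and the absence of an irreducible-loss term are all illustrative assumptions, not fits to any real model family). A 2x MFU edge barely dents a 100x raw-compute gap, whereas a better exponent compounds with scale:

```python
# Toy scaling-law comparison; every constant here is an assumption for illustration only.

def toy_loss(effective_compute: float, alpha: float, a: float = 10.0) -> float:
    """Power-law loss: falls as effective_compute ** -alpha (no irreducible term)."""
    return a * effective_compute ** -alpha

big_raw, small_raw = 1e26, 1e24   # hypothetical: the big lab has 100x the raw FLOP
big_eff   = big_raw * 0.20        # 20% MFU
small_eff = small_raw * 0.40      # 40% MFU -> only a 2x effective-compute edge

# Same exponent: the 100x raw-compute gap dominates the 2x MFU edge.
print(toy_loss(big_eff, alpha=0.05), toy_loss(small_eff, alpha=0.05))   # ~0.54 vs ~0.66

# Better exponent for the small lab: at these scales it wins despite 1/100th the compute.
print(toy_loss(big_eff, alpha=0.05), toy_loss(small_eff, alpha=0.12))   # ~0.54 vs ~0.015
```

The point of the toy numbers: an MFU-style efficiency gain is a constant factor, while a better exponent is an asymptotic change, which is the distinction being asked about.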
As for the Llama 4 models… It’s true that it’s too soon to be sure, but the pattern sure looks like they are on trend with the previous Llama versions 2 and 3. I’ve been working with 2 and 3 a bunch: evals, fine-tuning, and various experimentation. Currently I’m working with the 70B Llama-3 R1 distill and the 32B Qwen R1 distill. The 32B Qwen R1 distill is so much better it’s ridiculous. So yeah, it’s possible that Llama 4 will be a departure from trend, but I doubt it.
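For what it’s worth, that head-to-head impression is easy to reproduce. A minimal sketch of prompting both distills side by side with Hugging Face transformers (the model IDs are the public DeepSeek-R1 distill repos; the single prompt and greedy decoding are placeholders, and a real comparison would use a proper eval harness plus quantization or multi-GPU serving for the 70B):

```python
# Quick side-by-side of the two R1 distills mentioned above (illustrative sketch only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODELS = [
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",  # needs multiple GPUs or quantization
]
PROMPT = "A train leaves at 3:40pm and arrives at 6:05pm. How long is the trip? Think step by step."

for name in MODELS:
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16, device_map="auto")
    input_ids = tok.apply_chat_template(
        [{"role": "user", "content": PROMPT}], add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(input_ids, max_new_tokens=512, do_sample=False)
    print(f"=== {name} ===")
    print(tok.decode(out[0, input_ids.shape[-1]:], skip_special_tokens=True))
```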
Contrast this with the Gemini trend. They started back at 1.0 with disproportionately weak models given the engineering and compute they had available. My guess is that this was related to poor internal coordination, and the merger of DeepMind with Google Brain probably contributed. But if you look at the progression from 1.0 to 1.5 to 2.0, there’s a clear trend of improving more per month than other groups were. Thus, I was unsurprised when 2.5 turned out to be a leading frontier model. The Llama team has shown no such catch-up trend, so Llama 4 turning out to be as strong as they claim would surprise me a lot.
Is it possible Meta just trained on bad data while Google and DeepSeek trained on good? See my two comments here: https://www.lesswrong.com/posts/Wnv739iQjkBrLbZnr/meta-releases-llama-4-herd-of-models?commentId=KkvDqZAuTwR7PCybB

No, it would probably be a mix of “all of the above”. FB is buying data from the same places everyone else does, like Scale (which we know from anecdotes like when Scale delivered FB a bunch of blatantly ChatGPT-written ‘human rating data’ and FB was displeased), and was using datasets like books3 that are reasonable quality. The reported hardware efficiency numbers have never been impressive, they haven’t really innovated in architecture or training method (even the co-distillation for Llama-4 is not new, e.g. ERNIE was doing that like 3 years ago), and insider rumors/gossip don’t indicate good things about the quality of the research culture. (It’s a stark contrast to things like Jeff Dean overseeing a big overhaul to ensure bit-identical reproducibility of runs and Google apparently getting multi-datacenter training working by emphasizing TPU interconnect.) So my guess is that if it’s bad, it’s not any one single thing like ‘we trained for too few tokens’ or ‘some of our purchased data was shite’: it’s just everything in the pipeline being a bit mediocre and it multiplying out to a bad end-product which is less than the sum of its parts.
Remember Karpathy’s warning: “neural nets want to work”. You can screw things up and the neural nets will still work, they will just be 1% worse than they should be. If you don’t have a research culture which is rigorous about methodology or where people just have good enough taste/intuition to always do the right thing, you’ll settle for whatever seems to work… (Especially if you are not going above and beyond to ensure your metrics aren’t fooling yourself.) Now have a 1% penalty on everything, from architecture to compute throughput to data quality to hyperparameters to debugging implementation issues, and you wind up with a model which is already obsolete on release with no place on the Pareto frontier and so gets 0% use.
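As back-of-the-envelope arithmetic for how those small penalties compound (the list of stages and the uniform 1% figure are just illustrative assumptions):

```python
# "1% worse everywhere" compounds multiplicatively across the pipeline.
stages = [
    "architecture", "data quality", "data mixture", "hyperparameters",
    "compute throughput", "implementation bugs", "eval methodology",
    "post-training", "tokenization", "infra reliability",
]
penalty_per_stage = 0.99                     # each stage is merely 1% off its potential
combined = penalty_per_stage ** len(stages)  # 0.99 ** 10 ~= 0.904
print(f"{len(stages)} stages x 1% each -> {1 - combined:.1%} worse overall")  # ~9.6%
```

A model that ends up roughly ten percent off the frontier on everything at once is exactly the kind of model with no place on the Pareto frontier.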
Yes, that’s what I’m arguing. Really massive gains in algorithmic efficiency, plus gains in decentralized training, peak capability, and continual learning, though not necessarily all at once. Maybe just enough that you then feel confident to continue scraping together additional resources to pour into your ongoing continual training: renting GPUs from datacenters all around the world (smaller providers like Vast.ai, Runpod, and Lambda Labs, plus marginal amounts from larger providers like AWS and GCP, all rented in the names of a variety of shell companies). The more compute you put in, the better it works, the more money you are able to earn (or convince investors or governments to give you) with the model-so-far, and the more compute you can afford to rent....
Not necessarily exactly this story, just something in this direction.
I made the same comment on the original post. I really think this is a blind spot for US-based AI analysis.
China has engineers as smart as those at DM, OpenAI, etc.; much of the talent at those labs is originally from China anyway. Given a) immigration going the way it is, b) the ability to coordinate massive resources and subsidies as a state, c) the possibility of invading Taiwan, d) how close the DeepSeek / Qwen models already are and their rate of catch-up, and e) how uncertain we are about hardware overhang (again, see DeepSeek’s training costs), I think we should put at least a 50% chance on China being ahead within the next year.
These tariffs may erase the compute disadvantage China faces (i.e., Taiwan starts to ignore export controls). We might see China comfortably ahead in a year or two, assuming Congress doesn’t take drastic action to eliminate the president’s tariff powers.
Some ask, “what should the US gov have done instead?”
Here’s an answer I like to that question, from max_paperclips:
https://x.com/max_paperclips/status/1909085803978035357
https://x.com/max_paperclips/status/1907946171290775844
By the way, I don’t mean to imply that Meta AI doesn’t have talented AI researchers working there. The problem is more that the competent minority are so diluted and hampered by bureaucratic parasites that they can’t do their jobs properly.