The failure of the compute-rich Llama models to compete with the compute-poorer but talent- and drive-rich Alibaba and DeepSeek shows that even a substantial compute lead can be squandered. Given that there is a lot of room for algorithmic improvements (as proven by the efficiency of the human brain), this means that determined engineering plus a willingness to experiment, rather than doubling down on currently-working tech, can matter more than raw compute.
This seems like it’s exaggerating the Llama failure. Maybe the small Llama-4s just released yesterday are a bit of a disappointment because they don’t convincingly beat all the rivals; but how big a gap is that absolutely? When it comes to DL models, there’s generally little reason to use #2; but that doesn’t mean #2 was all that much worse and ‘a failure’ - it might only have been weeks behind #1. (Indeed, a model might’ve been the best when it was trained, and release just took a while. Would it be reasonable to call such a model a ‘failure’? I wouldn’t. It might be a failure of a business model or a corporate strategy, but that model qua model is a good model, Bront.) #2 just means it’s #2, lesser by any amount. How far back would we have to go for the small Llama-4s to have been on the Pareto frontier? It’s still early, but I’m getting the impression so far that you wouldn’t have to go that far back. Certainly not ‘years’ (it couldn’t perform that well on LMArena in its ‘special chatbot configuration’ even sloptimized if it was years behind), unless the wilder rumors turn out to be true (like deliberately training on the test sets—in which case, Zuckerberg may have to burn FB AI with fire and reboot the entire AI org because the culture is irretrievably rotten—but of course such rumors usually do not, so I mention this mostly to indicate that right now Llama Internet commentary is high on heat and low on light).
I’m not really following your argument here. Even if LLaMA-4 is disappointing compared to what DeepSeek could’ve done with the same compute because they’d get 40% MFU instead of FB’s 20% or whatever, and are 2x as good in effective-compute, that doesn’t close the lead when FB finishes its new Manhattan-sized datacenter, say, and has 100x DS’s compute. Or are you arguing for the possibility of someone making an asymptotic scaling law breakthrough with a better exponent, so that even with 1/100th the compute, they can beat one of the giants?
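To put rough numbers on that distinction, here's a toy sketch: it assumes a simple Chinchilla-style power law (loss falling as effective compute to the −α), and the 2x, 100x, and exponent values are just the hypotheticals above, not fitted constants.

```python
# Toy sketch of "constant-factor efficiency edge" vs. "better scaling exponent".
# Assumes loss ~ C**(-alpha) in effective compute C (Chinchilla-style form);
# the 2x, 100x, and alpha values are the hypotheticals above, not fitted constants.

def loss(compute, alpha):
    """Reducible loss under a simple power law in effective compute."""
    return compute ** (-alpha)

for big_c in (1e2, 1e3, 1e4):
    small_c = big_c / 100                      # the lab with 1/100th the compute

    # Case 1: same exponent, small lab merely 2x more compute-efficient.
    case1_big, case1_small = loss(big_c, 0.05), loss(2 * small_c, 0.05)

    # Case 2: small lab finds a scaling law with a steeper exponent.
    case2_big, case2_small = loss(big_c, 0.05), loss(small_c, 0.15)

    print(f"big C={big_c:>7.0f} | 2x-efficiency: {case1_big:.3f} vs {case1_small:.3f}"
          f" | better-exponent: {case2_big:.3f} vs {case2_small:.3f}")

# Case 1's ratio is pinned at 50**0.05 ~ 1.22 no matter how large C grows, so the
# gap never closes; in case 2 the 1/100th-compute lab draws level once the big lab
# passes 100**(0.15/0.10) = 1000 units and pulls ahead after that.
```

Under these toy numbers a constant-factor edge leaves the gap fixed forever, while a better exponent eventually overtakes; the question is which of those you're claiming.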
As for the Llama 4 models… It’s true that it’s too soon to be sure, but the pattern sure looks like they are on trend with the previous Llama versions 2 and 3. I’ve been working with 2 and 3 a bunch: evals, fine-tuning, and various experimentation. Currently I’m working with the 70B Llama 3 R1 distill plus the 32B Qwen R1 distill, and the 32B Qwen R1 is so much better it’s ridiculous. So yeah, it’s possible that Llama 4 will be a departure from trend, but I doubt it.
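(For the curious, here is a minimal sketch of that kind of side-by-side comparison, assuming the public deepseek-ai R1-distill checkpoints on Hugging Face; the prompts and generation settings are placeholders, not an actual eval harness.)

```python
# Minimal side-by-side harness for the two R1 distills mentioned above.
# Model IDs are the public deepseek-ai releases on Hugging Face; the prompts,
# sampling settings, and "eval" (just eyeballing outputs) are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODELS = [
    "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",  # Llama-3.3-70B-based distill
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",   # Qwen2.5-32B-based distill
]

PROMPTS = [
    "Prove that the sum of two odd integers is even.",
    "Write a Python function that returns the nth Fibonacci number iteratively.",
]

for model_id in MODELS:
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    for prompt in PROMPTS:
        inputs = tok.apply_chat_template(
            [{"role": "user", "content": prompt}],
            add_generation_prompt=True,
            return_tensors="pt",
        ).to(model.device)
        out = model.generate(inputs, max_new_tokens=1024, do_sample=False)
        answer = tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
        print(f"=== {model_id} ===\n{answer[:500]}\n")
```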
Contrast this with the Gemini trend. They started back at 1.0 with disproportionately weak models given the engineering and compute they had available. My guess is that this was related to poor internal coordination, and the merger of DeepMind with Google Brain probably contributed to that. But if you look at the trend from 1.0 to 1.5 to 2.0, there’s a clear pattern of improving more per month than other groups were. Thus, I was unsurprised when 2.5 turned out to be a leading frontier model. The Llama team has shown no such catch-up trend, so Llama 4 turning out to be as strong as they claim would surprise me a lot.
Is it possible Meta just trained on bad data while Google and DeepSeek trained on good? See my two comments here: https://www.lesswrong.com/posts/Wnv739iQjkBrLbZnr/meta-releases-llama-4-herd-of-models?commentId=KkvDqZAuTwR7PCybB
No, it would probably be a mix of “all of the above”. FB is buying data from the same places everyone else does, like Scale (which we know from anecdotes like when Scale delivered FB a bunch of blatantly-ChatGPT-written ‘human rating data’ and FB was displeased), and was using datasets like books3 that are reasonable quality. The reported hardware efficiency numbers have never been impressive; they haven’t really innovated in architecture or training method (even the co-distillation for Llama-4 is not new, e.g. ERNIE was doing that like 3 years ago); and insider rumors/gossip don’t indicate good things about the quality of the research culture. (It’s a stark contrast to things like Jeff Dean overseeing a big overhaul to ensure bit-identical reproducibility of runs, or Google apparently getting multi-datacenter training working by emphasizing TPU interconnect.) So my guess is that if it’s bad, it’s not any one single thing like ‘we trained for too few tokens’ or ‘some of our purchased data was shite’: it’s just everything in the pipeline being a bit mediocre and it multiplying out to a bad end-product which is less than the sum of its parts.

Remember Karpathy’s warning: “neural nets want to work”. You can screw things up and the neural nets will still work; they will just be 1% worse than they should be. If you don’t have a research culture which is rigorous about methodology, or where people just have good enough taste/intuition to always do the right thing, you’ll settle for whatever seems to work… (Especially if you are not going above and beyond to ensure your metrics aren’t fooling yourself.) Now take a 1% penalty on everything, from architecture to compute throughput to data quality to hyperparameters to debugging implementation issues, and you wind up with a model which is already obsolete on release, with no place on the Pareto frontier, and so gets 0% use.
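The compounding arithmetic is trivial but worth spelling out; a sketch, where the 1% figure is the one above and the particular list and count of stages is just an illustrative guess:

```python
# The compounding behind "1% worse at every stage": each stage is assumed to
# multiply into final quality. The 1% penalty is the figure above; the specific
# stages and their count are an illustrative guess.
stages = [
    "architecture", "compute throughput", "data quality", "tokenization",
    "hyperparameters", "eval methodology", "implementation debugging", "post-training",
]
penalty_per_stage = 0.99

effective = penalty_per_stage ** len(stages)
print(f"{len(stages)} stages x 1% each -> {effective:.3f} of potential, "
      f"i.e. ~{(1 - effective) * 100:.1f}% behind the frontier")
# ~7.7% behind, in a market where only the Pareto frontier gets used at all.
```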
Yes, that’s what I’m arguing: really massive gains in algorithmic efficiency, plus gains in decentralized training, peak capability, and continual learning, though not necessarily all at once. Maybe just enough that you then feel confident to continue scraping together additional resources to pour into your ongoing continual training: renting GPUs from datacenters all around the world (smaller providers like Vast.ai, Runpod, Lambda Labs, plus marginal amounts from larger providers like AWS and GCP, all rented in the name of a variety of shell companies). The more compute you put in, the better it works; the better it works, the more money you can earn (or convince investors or governments to give you) with the model-so-far; and the more money you have, the more compute you can afford to rent…
Not necessarily exactly this story, just something in this direction.