As for the Llama 4 models… it’s true that it’s too soon to be sure, but the pattern sure looks like they are on trend with the previous Llama versions 2 and 3. I’ve been working with 2 and 3 a bunch: evals, fine-tuning, and various experimentation. Currently I’m working with the 70B Llama-3 R1 distill alongside the 32B Qwen R1 distill, and the 32B Qwen distill is so much better it’s ridiculous. So yeah, it’s possible that Llama 4 will be a departure from the trend, but I doubt it.
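(For concreteness, a minimal sketch of the kind of side-by-side comparison I mean, assuming the Hugging Face checkpoints deepseek-ai/DeepSeek-R1-Distill-Llama-70B and deepseek-ai/DeepSeek-R1-Distill-Qwen-32B and a hypothetical prompts.txt of eval questions; an illustration, not my actual harness:)

```python
# Side-by-side eval sketch: run the same prompts through both R1 distills
# and compare outputs. Assumes enough GPU memory to load each model in turn.
from transformers import pipeline

MODELS = [
    "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",  # 70B Llama-3-based distill
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",   # 32B Qwen-based distill
]

# Hypothetical eval set: one prompt per line.
with open("prompts.txt") as f:
    prompts = [line.strip() for line in f if line.strip()]

for model_id in MODELS:
    generate = pipeline("text-generation", model=model_id,
                        device_map="auto", torch_dtype="auto")
    for prompt in prompts:
        out = generate(prompt, max_new_tokens=512, do_sample=False)
        print(f"=== {model_id} ===\n{out[0]['generated_text']}\n")
    del generate  # free GPU memory before loading the next model
```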
Contrast this with the Gemini trend. Google started back at 1.0 with disproportionately weak models given the engineering and compute they had available; my guess is that this reflected poor internal coordination, which the merger of DeepMind with Google Brain probably contributed to. But if you look at the progression from 1.0 to 1.5 to 2.0, there’s a clear pattern of improving more per month than other groups were. So I was unsurprised when 2.5 turned out to be a leading frontier model. The Llama team has shown no such “catch-up” trend, so a Llama 4 as strong as they claim would surprise me a lot.
Is it possible Meta just trained on bad data while Google and DeepSeek trained on good? See my two comments here: https://www.lesswrong.com/posts/Wnv739iQjkBrLbZnr/meta-releases-llama-4-herd-of-models?commentId=KkvDqZAuTwR7PCybB
No, it would probably be a mix of “all of the above”. FB is buying data from the same places everyone else does, like Scale (which we know from anecdotes like when Scale delivered FB a bunch of blatantly ChatGPT-written ‘human rating data’ and FB was displeased), and was using datasets like books3 that are reasonable quality. The reported hardware-efficiency numbers have never been impressive, they haven’t really innovated in architecture or training method (even the co-distillation for Llama-4 is not new; e.g. ERNIE was doing that something like 3 years ago), and insider rumors/gossip don’t indicate good things about the quality of the research culture. (It’s a stark contrast to things like Jeff Dean overseeing a big overhaul to ensure bit-identical reproducibility of runs, or Google apparently getting multi-datacenter training working by emphasizing TPU interconnect.) So my guess is that if it’s bad, it’s not any one single thing like ‘we trained for too few tokens’ or ‘some of our purchased data was shite’: it’s just everything in the pipeline being a bit mediocre, multiplying out to a bad end-product which is less than the sum of its parts.

Remember Karpathy’s warning: “neural nets want to work”. You can screw things up and the neural nets will still work; they will just be 1% worse than they should be. If you don’t have a research culture which is rigorous about methodology, or where people just have good enough taste/intuition to always do the right thing, you’ll settle for whatever seems to work… (especially if you are not going above and beyond to ensure your metrics aren’t fooling you). Now apply a 1% penalty to everything, from architecture to compute throughput to data quality to hyperparameters to debugging implementation issues, and you wind up with a model which is already obsolete on release, with no place on the Pareto frontier, and so gets 0% use.
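To make the “multiplying out” concrete, here’s the compounding arithmetic, assuming (purely for illustration) a flat 1% penalty at each of n independent pipeline stages:

```python
# A 1% hit at each of n independent stages leaves (0.99)**n of the
# achievable quality.
for n in (5, 10, 20, 40):
    print(f"{n:>2} stages at -1% each -> {0.99 ** n:.0%} of potential")
# ->  5 stages: 95%, 10 stages: 90%, 20 stages: 82%, 40 stages: 67%
```

Twenty mildly mediocre stages already cost you nearly a fifth of the final quality.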