Is AI Progress Impossible To Predict?

People seem to be continually surprised, over and over again, by the new capabilities of big machine learning models, such as PaLM, DALL-E, Chinchilla, SayCan, Socratic Models, Flamingo, and Gato (all in the last two months!). Luckily, there is a famous paper on how AI progress is governed by scaling laws, where models predictably get better as they get larger. Could we forecast AI progress ahead of time by seeing how each task gets better with model size, drawing out the curve, and calculating which model size is needed to reach human performance?

I tried this, and apparently the answer is no. In fact, how much AI has improved on a task recently gives us exactly zero predictive power for how much the next model will improve on the same task. The sheer consistency of this unpredictability is remarkable, almost like a law of statistical thermodynamics. No matter what I plug in, the correlation is always zero! For example, does a task improving rapidly when you go from a small model to a 7B-parameter model predict similar improvement when you go from 7B to Gopher’s 280B? No:
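Concretely, the check behind each of these plots is simple: take each task's jump in accuracy over one scale-up, take its jump over the next scale-up, and correlate the two across tasks. Here is a minimal sketch with NumPy (the accuracies below are made up for illustration, not the real benchmark numbers):

```python
import numpy as np

# Hypothetical per-task accuracies at three model sizes (made-up numbers,
# not from the actual papers -- just to show the computation).
acc_small = np.array([0.25, 0.31, 0.40, 0.22, 0.55, 0.33])
acc_7b    = np.array([0.30, 0.45, 0.41, 0.35, 0.60, 0.38])
acc_280b  = np.array([0.50, 0.47, 0.62, 0.40, 0.70, 0.58])

# Per-task improvement over each scale-up.
delta_past   = acc_7b - acc_small     # small -> 7B
delta_future = acc_280b - acc_7b      # 7B -> 280B

# Does past improvement predict future improvement?
r = np.corrcoef(delta_past, delta_future)[0, 1]
print(f"r = {r:.3f}, R^2 = {r**2:.3f}")
```

Every plot below is this same computation, just with a different pair of scale-ups plugged in.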

I tried making the same graph with MMLU tasks instead of BIG-bench, same result:

What about DeepMind’s new Chinchilla? Did rapid improvement of a task on Gopher predict continued improvement going from Gopher to Chinchilla? Nope:

What about Google’s PaLM? The full results of PaLM on BIG-bench don’t seem to have been published yet, so I couldn’t directly compare to Chinchilla or Gopher, but the PaLM paper described an 8B-parameter model, a 62B model, and a 540B model. Did fast improvement from 8B to 62B predict improvement from 62B to 540B? Not really (R^2 = 0.04):

PaLM also provides data on 30 different NLU benchmark tasks. Plot those and you get the same thing:

The results here seem pretty clear, but I’m honestly not sure how to interpret them. Before trying this, I assumed you would find that some tasks are “easy” and scale quickly, while others are “hard” and scale slowly. But that would get you high predictability, since fast progress between one pair of models would imply that the task is inherently “easy”, and predict (perhaps with some noise) fast progress on the next pair. I didn’t see that.

You could also have a theory where tasks scaled similarly (all are of comparable “difficulty”), but there was some noise between model training runs, so that task performance on any given run would bounce up and down around some “true” average value. (Since if you did badly on one run, you’d expect to regress to the mean, and do unusually well on the next.) But I didn’t see that either. The two effects (some tasks being intrinsically easier, and individual model runs being noisy) could also cancel out, since one implies a positive correlation and the other implies a negative one… but it seems unlikely that they would exactly cancel every time!
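These two stories make opposite predictions, and a toy simulation makes the tension concrete (all effect sizes here are made up). A spread in intrinsic task "easiness" pushes the correlation between successive gains positive; run-to-run noise pushes it toward -0.5; and in this toy model they cancel to exactly zero only when the two spreads match:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks = 100_000  # large, so sample correlations sit close to theory

def gain_correlation(easiness_sd, noise_sd):
    """Correlation between successive per-task gains under a toy model:
    each task has an intrinsic average gain per scale-up ("easiness"),
    and each model run adds independent noise to every task's score."""
    easiness = rng.normal(0.1, easiness_sd, n_tasks)
    n1, n2, n3 = rng.normal(0, noise_sd, (3, n_tasks))
    d1 = easiness + (n2 - n1)   # gain from run 1 to run 2
    d2 = easiness + (n3 - n2)   # gain from run 2 to run 3
    return np.corrcoef(d1, d2)[0, 1]

r_easy  = gain_correlation(0.05, 0.00)  # easiness spread only
r_noise = gain_correlation(0.00, 0.05)  # run noise only
r_both  = gain_correlation(0.05, 0.05)  # both, with equal spreads

print(f"easiness only:      r = {r_easy:+.2f}")   # near +1
print(f"noise only:         r = {r_noise:+.2f}")  # near -0.5
print(f"both, equal spread: r = {r_both:+.2f}")   # near 0
```

In this model the covariance of successive gains is Var(easiness) - Var(noise), so getting r = 0 requires the two variances to be equal, which is the "unlikely to exactly cancel every time" point above.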

Is AI task performance a type of submartingale, like a stock market index that goes up over time, but where each particular movement is intrinsically unpredictable? Maybe we can compare it to the growth in company profits, where the literature says that companies might grow slowly or quickly, but whether a company has grown fast recently has zero predictive power for future growth. I guess if we knew what we were doing, it wouldn’t be called research.

EDIT: By request, here’s a Google sheet with the raw data, copy-pasted from the Gopher, PaLM and Chinchilla papers: https://docs.google.com/spreadsheets/d/1Y_00UcsYZeOwRuwXWD5_nQWAJp4A0aNoySW0EOhnp0Y/edit?usp=sharing

EDIT 2: Several people suggested using logits instead of raw percentages. I tried that with the Gopher numbers, still got zero correlation:
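The logit version just converts each accuracy to log-odds before differencing, so movement near 0% or 100% isn’t compressed the way it is in raw percentage points. A sketch (again with made-up accuracies, not the real Gopher numbers):

```python
import numpy as np

def logit(p, eps=1e-6):
    """Log-odds of an accuracy, clipped away from 0 and 1 to avoid infinities."""
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

# Made-up per-task accuracies at three model sizes.
acc_1 = np.array([0.24, 0.51, 0.70, 0.41, 0.88])
acc_2 = np.array([0.30, 0.55, 0.72, 0.45, 0.90])
acc_3 = np.array([0.42, 0.60, 0.80, 0.44, 0.95])

# Same correlation as before, but on logit-space improvements.
d_past   = logit(acc_2) - logit(acc_1)
d_future = logit(acc_3) - logit(acc_2)
r = np.corrcoef(d_past, d_future)[0, 1]
print(f"logit-space r = {r:.3f}")
```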

EDIT 3: Tamay noted that if you try to predict 7B Gopher from 1B Gopher, you get a negative correlation:

If the models are small enough, maybe scale isn’t helping at that level, so the differences in performance are just noise and you should expect mean reversion? E.g., here is a graph of a negative correlation between different “runs”, where the “runs” are just draws from a random Gaussian:
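That graph is easy to reproduce: if each “run” is an independent Gaussian draw per task, the two successive “improvements” both contain the middle run with opposite signs, which forces a correlation of about -0.5 even though nothing real is happening. A quick sketch of that null model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_tasks = 1000

# Three independent "runs" per task: pure noise, no real scaling trend.
run1, run2, run3 = rng.normal(0, 1, (3, n_tasks))

# Both "improvements" contain run2 with opposite signs, so they are
# anticorrelated even though every draw is random.
d12 = run2 - run1
d23 = run3 - run2
r = np.corrcoef(d12, d23)[0, 1]
print(f"r = {r:.2f}")  # theory predicts about -0.5
```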