Are Emergent Abilities of Large Language Models a Mirage? [linkpost]


Recent work claims that large language models display emergent abilities, abilities not present in smaller-scale models that are present in larger-scale models. What makes emergent abilities intriguing is two-fold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales. Here, we present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, one can choose a metric which leads to the inference of an emergent ability or another metric which does not. Thus, our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale. We present our explanation in a simple mathematical model, then test it in three complementary ways: we (1) make, test and confirm three predictions on the effect of metric choice using the InstructGPT/GPT-3 family on tasks with claimed emergent abilities; (2) make, test and confirm two predictions about metric choices in a meta-analysis of emergent abilities on BIG-Bench; and (3) show how similar metric decisions suggest apparent emergent abilities on vision tasks in diverse deep network architectures (convolutional, autoencoder, transformers). In all three analyses, we find strong supporting evidence that emergent abilities may not be a fundamental property of scaling AI models.
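The abstract's core mechanism can be sketched numerically. The idea: if per-token accuracy improves smoothly with scale, an all-or-nothing metric like exact match (every token in a length-`L` answer must be correct) raises that smooth curve to the power `L`, producing a curve that sits near zero and then shoots up. The scaling curve and parameters below are made up for illustration, not taken from the paper:

```python
# Toy illustration of the metric-choice argument.
# Assumption: per-token accuracy improves smoothly with model scale;
# the specific curve below is hypothetical.
L = 20  # answer length in tokens (illustrative)

def per_token_accuracy(log10_params):
    """Smoothly improving per-token accuracy (made-up scaling curve)."""
    return 1.0 - 0.6 * 10 ** (-0.4 * (log10_params - 7.0))

def exact_match(log10_params, length=L):
    """All-or-nothing metric: the whole answer must be correct."""
    return per_token_accuracy(log10_params) ** length

scales = [7, 8, 9, 10, 11, 12]  # log10 of parameter count
token_curve = [per_token_accuracy(s) for s in scales]
exact_curve = [exact_match(s) for s in scales]

# token_curve rises steadily at every scale, while exact_curve is
# indistinguishable from zero for small models and then climbs sharply:
# apparent "emergence" produced purely by the choice of metric.
```

The same fixed model outputs look smooth under the continuous metric and "emergent" under the discontinuous one, which is the paper's claim in miniature.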

This result seems important for two reasons:

  1. If AI abilities are predictable, then we can forecast when we’ll get dangerous capabilities ahead of time, rather than being taken by surprise. This result strengthens the case for a research program of devising a ton of interesting benchmarks to measure how capabilities improve as a function of scale.

  2. It provides some evidence against the idea that “understanding is discontinuous”, i.e. that important AI abilities will suddenly click into place at some scale. That idea is a very loose description of what I understood to be one of the primary intuitions behind AI foom.