Inverse scaling can become U-shaped


Edit: Here’s a great comment by Ethan Perez that caveats this result; I’d recommend reading it for context.

This is a paper by folks on Quoc Le’s team at Google that examines the winning tasks from Round 1 of Anthropic’s Inverse Scaling Prize. They find that 3 of the 4 winning tasks, which exhibited negative returns to scale when tested on LMs up to the size of Gopher (280B), go back to exhibiting positive returns to scale at even greater model sizes such as PaLM (540B).

The abstract in full:

Although scaling language models improves performance on a range of tasks, there are apparently some scenarios where scaling hurts performance. For instance, the Inverse Scaling Prize Round 1 identified four "inverse scaling" tasks, for which performance gets worse for larger models. These tasks were evaluated on models of up to 280B parameters, trained on up to 500 zettaFLOPs of compute.

This paper takes a closer look at these four tasks. We evaluate models of up to 540B parameters, trained on five times more compute than those evaluated in the Inverse Scaling Prize. With this increased range of model sizes and training compute, three out of the four tasks exhibit what we call "U-shaped scaling" -- performance decreases up to a certain model size, and then increases again up to the largest model evaluated. One hypothesis is that U-shaped scaling occurs when a task comprises a "true task" and a "distractor task". Medium-size models can do the distractor task, which hurts performance, while only large-enough models can ignore the distractor task and do the true task. The existence of U-shaped scaling implies that inverse scaling may not hold for larger models.


Second, we evaluate the inverse scaling tasks using chain-of-thought (CoT) prompting, in addition to basic prompting without CoT. With CoT prompting, all four tasks show either U-shaped scaling or positive scaling, achieving perfect solve rates on two tasks and several sub-tasks. This suggests that the term “inverse scaling task” is under-specified—a given task may be inverse scaling for one prompt but positive or U-shaped scaling for a different prompt.
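To make the "true task vs. distractor task" framing and the two prompt styles concrete, here is a minimal sketch in the spirit of the prize's Redefine Math task. The example item and prompt templates are hypothetical and simplified, not wording taken from the paper.

```python
# Illustration only (hypothetical item, not from the paper): an item in the
# spirit of the "Redefine Math" inverse-scaling task. The "true task" is to
# follow the redefinition; the "distractor task" is to answer about the
# ordinary symbol.

ITEM = 'Redefine "pi" to mean 462. Q: What is the first digit of pi? A:'

def basic_prompt(item: str) -> str:
    """Basic prompting: present the item and let the model answer directly."""
    return item

def cot_prompt(item: str) -> str:
    """Chain-of-thought prompting: append a cue asking the model to reason
    step by step before committing to a final answer."""
    return item + " Let's think step by step."

if __name__ == "__main__":
    print("--- basic ---")
    print(basic_prompt(ITEM))
    print()
    print("--- chain of thought ---")
    print(cot_prompt(ITEM))
    # Under the paper's distractor hypothesis, a medium-size model tends to
    # answer about ordinary pi ("3"), while a large-enough model, especially
    # with the CoT cue, follows the redefinition ("4", from 462).
```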

The key figure from the paper is below, showing results for LMs up to PaLM 540B. Note that positive scaling resumes for 3 of the 4 inverse scaling tasks at the 2.5e24 FLOPs datapoint, which indeed corresponds exactly to vanilla PaLM 540B.[1]

  1. From Table 22 in the PaLM paper.