When is Goodhart catastrophic?

Thanks to Aryan Bhatt, Eric Neyman, and Vivek Hebbar for feedback.

This post gets more math-heavy over time; we convey some intuitions and overall takeaways first, and then get more detailed. Read for as long as you’re getting value out of things!

TLDR

How much should you optimize for a flawed measurement? If you model optimization as selecting for high values of your goal $V$ plus an independent error $X$, then the answer ends up being very sensitive to the distribution of the error $X$: if it’s heavy-tailed you shouldn’t optimize too hard, but if it’s light-tailed you can go full speed ahead.
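
In symbols: writing $V$ for the goal and $X$ for the independent error, the quantity at stake is the expected value of $V$ conditional on strong selection for the proxy $V + X$. Roughly (and as made precise in the proof statement below),

$$\lim_{t \to \infty} \mathbb{E}[V \mid V + X \geq t] = \begin{cases} \infty & \text{if } X \text{ is light-tailed (e.g. normal or bounded),} \\ \mathbb{E}[V] & \text{if } X \text{ is subexponential with tails sufficiently heavier than } V\text{'s.} \end{cases}$$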

Why the tails come apart by Thrasymachus discusses a sort of “weak Goodhart” effect, where extremal proxy measurements won’t have extremal values of your goal (even if they’re still pretty good). It implicitly looks at cases similar to a normal distribution.

Scott Garrabrant’s taxonomy of Goodhart’s Law discusses several ways that the law can manifest. This post is about the “Regressional Goodhart” case.

Scaling Laws for Reward Model Overoptimization (Gao et al., 2022) considers very similar conditioning dynamics in real-world RLHF reward models. In their Appendix A, they show a special case of this phenomenon for light-tailed error, which we’ll prove a generalization of in the next post.

Defining and Characterizing Reward Hacking (Skalse et al., 2022) shows that under certain conditions, leaving any terms out of a reward function makes it possible to increase expected proxy return while decreasing expected true return.

How much do you believe your results? by Eric Neyman tackles very similar phenomena to the ones discussed here, particularly in section IV; in this post we’re interested in characterizing that sort of behavior and when it occurs. We strongly recommend reading it first if you’d like better intuitions behind some of the math presented here—though our post was written independently, it’s something of a sequel to Eric’s.

An Arbital page defines Goodhart’s Curse and notes

The exact conditions for Goodhart’s Curse applying between $V$ and a point estimate or probability distribution over [a proxy measure that an AI is optimizing], have not yet been written out in a convincing way.

To the extent this post adopts a reasonable frame, we think it makes progress towards this goal.

Motivation/​intuition

Goodhart’s Law says

When a measure becomes a target, it ceases to be a good measure.

When I (Drake) first heard about Goodhart’s Law, I internalized something like “if you have a goal, and you optimize for a proxy that is less than perfectly correlated with the goal, hard enough optimization for the proxy won’t get you what you wanted.” This was a useful frame to have in my toolbox, but it wasn’t very detailed—I mostly had vague intuitions and some idealized fables from real life.

Much later, I saw some objections to this frame on Goodhart that actually used math.[1] The objection went something like:

Let’s try to sketch out an actual formal model here. What’s the simplest setup of “two correlated measurements”? We could have a joint normal distribution over two random variables, $U$ and $V$, with zero mean and positive covariance. You actually value $V$, but you measure a proxy $U$. Then we can just do the math: if I optimize really hard for $U$, and give you a random datapoint with $U$ above some high threshold, how much $V$ do you expect to get?

If we look at the joint distribution of $U$ and $V$, we’ll see a distribution with elliptical contour lines, like so:

Now, the naïve hope is that expected $V$ as a function of observed $U$ would go along the semi-major axis, shown in red below:

But actually we’ll get the blue line, passing through the points at which the ellipses are tangent to vertical lines (lines parallel to the $V$-axis).[2]
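
For reference, the blue line is just the ordinary least-squares regression line; with $U$ and $V$ jointly normal and mean zero, a standard computation gives

$$\mathbb{E}[V \mid U = u] = \frac{\operatorname{Cov}(U, V)}{\operatorname{Var}(U)}\, u,$$

whose slope is shallower than that of the semi-major axis whenever the correlation is imperfect.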

Importantly, though, we’re still getting a line: we get linearly more value for every additional unit of $U$ we select for! Applying a given level of percentile selection on $U$ isn’t going to be as good as the same percentile selection on $V$, but it’s still going to give us more $V$ than any lower percentile selection on $U$.[3] The proxy is inefficient, but it’s not doomed.

Lately, however, I’ve come to think that this story is a little too rosy. One thing that’s going on here is that we’re just thinking about a “regressional Goodhart” problem, which is only one of several ways something Goodhart-like can come into play—see Scott Garrabrant’s taxonomy. But even in this setting, I think things can be much thornier.

In the story above, we can think of our measurement $U$ as being some multiple of $V$ plus an independent normally-distributed source of error, $X$. When we ask for an outcome with a really high value of $U$, we’re asking for a datapoint where $X + V$ is very high.[4]

Because normal distributions drop off in probability very fast, it gets harder and harder to select for high values of either component: given that a datapoint is at least 4 standard deviations above the mean, the odds that it’s at least 5 standard deviations above are less than 1%. So the least-rare outcomes with high $X + V$ are going to look like a compromise between the noise $X$ and value $V$, where we have a medium amount of each piece (because going to the extremes for either one is disproportionately costly in terms of improbability).
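
Plugging in standard normal tail probabilities (rounded) to check that claim:

$$\mathbb{P}(Z \geq 5 \mid Z \geq 4) = \frac{\mathbb{P}(Z \geq 5)}{\mathbb{P}(Z \geq 4)} \approx \frac{2.87 \times 10^{-7}}{3.17 \times 10^{-5}} \approx 0.9\%.$$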

To see this more visually, here are some plots of possible $(X, V)$ pairs, restricted to the triangle of values where $X + V \geq t$. Points are brighter if that outcome is more probable, and the black contour lines show regions of equal probability density. On the right, we have the expected value of $V$ as a function of our proxy threshold $t$.

We can see that the most likely outcomes skew towards one side or the other depending on which of $X$ and $V$ has more variance, but because these contour lines are convex, we still expect to see outcomes that have some of each component.

But now let’s look at a case where $X$ and $V$ are heavy-tailed, such that each additional unit of $X$ or $V$ requires fewer bits of optimization power.[5] Say that the probability density functions (PDFs) of $X$ and $V$ are proportional to $e^{-\sqrt{|x|}}$, instead of $e^{-x^2}$ like before.[6] Then we’ll see something more like

The resulting distribution is symmetric between $X$ and $V$, of course, but unlike in the normal case, that doesn’t manifest as “$X$ and $V$ will be about the same”, but instead as “the outcome will be almost entirely $X$ or almost entirely $V$, with even odds”.
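
To see why these tails behave so differently from the normal case (this is the long-tailed property from footnote 5): the tail probability here is $e^{-\sqrt{t}}$ up to lower-order factors, so the cost of one more unit of $X$ vanishes as $t$ grows,

$$\frac{\mathbb{P}(X \geq t + c)}{\mathbb{P}(X \geq t)} \approx e^{\sqrt{t} - \sqrt{t + c}} = e^{-c/(\sqrt{t + c} + \sqrt{t})} \longrightarrow 1 \quad \text{as } t \to \infty,$$

whereas for a standard normal the corresponding ratio is roughly $e^{-ct}$ and goes to $0$.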

In this heavy-tailed regime, though, we care a lot about which of $X$ or $V$ has the edge here. For instance, suppose that optimizing a given amount for $V$ only gets us half as far as it would for $X$ (so e.g. the 99th percentile value of $V$ is half as large as the 99th percentile value of $X$). Our plot now looks like

and in the limit for large $t$ we won’t get any expected $V$ at all by optimizing for the sum—all that optimization power goes towards producing high $X$ values. We call this catastrophic Goodhart because the end result, in terms of $V$, is as bad as if we hadn’t conditioned at all.
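
Here is a rough Monte Carlo sketch of this asymmetric example (an illustration under an assumed concrete distribution, not the exact densities behind the figures): if $E$ is Exponential(1), then a random sign times $E^2$ has two-sided tails like $e^{-\sqrt{|x|}}$, and halving it gives a variable whose quantiles are half as large.

```python
import numpy as np

# Rough Monte Carlo sketch of the asymmetric heavy-tailed example above.
# If E ~ Exponential(1), then P(E^2 > t) = exp(-sqrt(t)), so a random sign
# times E^2 has two-sided tails like exp(-sqrt(|x|)).
rng = np.random.default_rng(0)
n = 5_000_000

def heavy_tailed(scale, size):
    signs = rng.choice([-1.0, 1.0], size=size)
    return scale * signs * rng.exponential(1.0, size=size) ** 2

X = heavy_tailed(1.0, n)   # error: full-strength heavy tails
V = heavy_tailed(0.5, n)   # value: same shape, but every quantile is half as large

U = X + V                  # the proxy we optimize
for t in [0, 5, 10, 20, 40, 80]:
    hits = U >= t
    print(f"t={t:3d}  P(U>=t)={hits.mean():.2e}  E[V | U>=t]={V[hits].mean():7.3f}")

# Expected qualitative behavior: E[V | U >= t] rises for small t, then drifts
# back toward E[V] = 0 as t grows. The asymptotic decay is slow (roughly on the
# order of t * exp(-c * sqrt(t))), so the decline is gentle over this range and
# the largest thresholds are noisy.
```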

(In general, if the right-hand tails of $X$ and $V$ are each on the order of $e^{-x^p}$, we’ll switch between the two regimes right at $p = 1$ - that’s when these contour lines switch from being convex to being concave.)
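
A quick check of where that switch comes from, using the $e^{-x^p}$ family: in the positive quadrant the joint density is

$$f_{X,V}(x, v) \propto e^{-(x^p + v^p)}, \qquad \text{with contour lines } \{x^p + v^p = c\}.$$

For $p \geq 1$ the region $\{x^p + v^p \leq c\}$ is convex, so the cheapest way to reach a high sum is a compromise between the two coordinates; for $p < 1$ it is not convex (the midpoint of $(c^{1/p}, 0)$ and $(0, c^{1/p})$ lies outside it), so going all-in on one coordinate is cheaper; $p = 1$ gives straight-line contours and is the boundary case.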

To help visualize this behavior, let’s zoom in closer on a concrete example where we get catastrophic Goodhart.[7] See below for plots of the PDFs of $X$ and $V$:

On the left is a standard plot of the two PDFs; on the right is a plot of their negative logarithms. The right-hand plot makes it apparent that $X$ has heavier right tails, because the green line gets arbitrarily far below the orange line in the limit.

Here is a GIF of the conditional distribution of $V$ given $X + V \geq t$ as $t$ increases, with a dashed blue line indicating the conditional expectation:

Note the spike in the conditional PDF around $V \approx t$, corresponding to outcomes where $X$ is small and $V$ is large; because of the heavier tails on $X$, this spike gets smaller and smaller with larger $t$. (We recommend staring at this GIF until you feel like you have a good understanding of why it looks the way it does.)
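
The shape of that conditional PDF follows directly from independence: the conditional density of $V$ given $X + V \geq t$ is

$$f_{V \mid X + V \geq t}(v) = \frac{f_V(v)\, \mathbb{P}(X \geq t - v)}{\mathbb{P}(X + V \geq t)}.$$

The bulk near $v \approx 0$ has height on the order of $f_V(0)\, \mathbb{P}(X \geq t)$, while the spike near $v \approx t$ has height on the order of $f_V(t)\, \mathbb{P}(X \geq 0)$; since $X$’s tails are heavier here, the ratio $f_V(t) / \mathbb{P}(X \geq t)$ goes to $0$, which is why the spike shrinks as $t$ grows.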

The expected value of $V$ initially goes up when we apply a little selection pressure to our proxy, but as we optimize harder, that optimization pressure gets shunted more and more into optimization for $X$, and less and less for $V$, even in absolute terms. (This is the same dynamic that Eric Neyman recently discussed in section IV of How much do you believe your results?, put in a slightly different framing.)

In the next post, we’re going to prove some results about when this effect happens; this will be pretty technical, so we’ll talk a bit about the results in broad strokes here.

Proof statement

Suppose that $X$ and $V$ are independent real-valued random variables. We’ll show, roughly, that if

  • $X$ is subexponential (a slightly stronger property than being heavy-tailed; see the definition just after this statement).

  • $V$ has lighter tails than $X$ by more than a linear factor, meaning that the ratio of the tails of $X$ and the tails of $V$ grows superlinearly.[8]

then $\lim_{t \to \infty} \mathbb{E}[V \mid X + V \geq t] = \mathbb{E}[V]$.
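
For reference, the notion of subexponentiality we have in mind is the standard one: for independent copies $X_1, X_2$ of $X$,

$$\lim_{t \to \infty} \frac{\mathbb{P}(X_1 + X_2 \geq t)}{\mathbb{P}(X_1 \geq t)} = 2,$$

i.e., the sum is large essentially only when a single summand is large. Pareto, lognormal, and the $e^{-\sqrt{x}}$-tailed distributions above are subexponential; exponential and normal distributions are not.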

Less formally, we’re saying something like “if it requires relatively little selection pressure on $X + V$ to get more of $X$, and asymptotically more selection pressure on $X + V$ to get more of $V$, then applying very strong optimization towards $X + V$ will not get you even a little bit of optimization towards $V$ - all the optimization power will go towards $X$, where it has the best return on investment.”

We’ll also show a sort of inverse to this: if $X$ has right tails that are lighter than any exponential (for instance, if $X$ is normal or bounded), then we’ll get infinitely much $V$ in the limit no matter what kind of tail distribution $V$ has.
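
As a sanity check on this converse in the most familiar case (compare the Gao et al. Appendix A setting mentioned above): if $X$ and $V$ are independent zero-mean normals, the regression line from earlier gives

$$\mathbb{E}[V \mid X + V \geq t] = \frac{\sigma_V^2}{\sigma_V^2 + \sigma_X^2}\, \mathbb{E}[X + V \mid X + V \geq t] \sim \frac{\sigma_V^2}{\sigma_V^2 + \sigma_X^2}\, t \longrightarrow \infty.$$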

(What if $X$ is heavy-tailed but $V$ has even heavier tails than $X$? Then we can exchange their places in the first theorem, and conclude that we get zero $X$ in the limit—which means that all of that optimization is going towards $V$.)

In the next post, we’ll prove these claims.

Application to alignment

We might want to use unaligned AI to generate alignment research for us. One model for this is sampling a random document from the space of 10000-bit strings, then conditioning on a high human rating. If evaluation of alignment proposals is substantially easier than generating good alignment proposals, these plans will be useful. If not, we’ll have a hard time getting research out of the AI. This is a crux between John Wentworth and Paul Christiano + Jan Leike that informs their differing approaches to alignment.

We can frame the problem of evaluation in terms of Goodhart’s Law. Let $V$ be the true quality of an alignment plan (say in utility contributed to the future), and $X + V$ be the human rating, so that $X$ is the human’s rating error. If $X$ and $V$ are independent, and we have access to arbitrarily strong optimization for $X + V$, then our result implies that to implement an alignment plan better than random…

  • … if V is light-tailed, X must not be heavy-tailed.

  • … if V is heavy-tailed, X must not be much heavier-tailed than V.

We don’t know whether V is heavy- or light-tailed in real life, so to be safe, we should make X light-tailed. To the extent this model is accurate, a large part of alignment reduces to the problem of finding a classifier with light-tailed errors, which is able to operate in the exceptionally complicated domain of evaluating plans, and is not itself dangerous.

This model makes two really strong assumptions: that optimization is like conditioning, and that $X$ and $V$ are independent. These are violated in real life:

  • Optimization is not simply conditioning; SGD has too many inductive biases for us to list here, and (Gao et al., 2022) found that for a given level of optimization, RL uses far more KL distance from the prior than best-of-n sampling (the best-of-$n$ KL formula is given in the note after this list).

  • $X$ and $V$ will not be independent. Among other reasons, we expect that more complicated or optimized plans are more likely to have large impacts on the world (thus having higher variance of $V$), and harder to evaluate (thus having higher variance of $X$). However, in some cases, really good plans might be easier to evaluate; for example, formalized proofs can be efficiently checked.
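
For concreteness on that last comparison, the analytic estimate for the KL cost of best-of-$n$ sampling used in Gao et al. (2022) is

$$D_{\mathrm{KL}}\big(\pi_{\text{best-of-}n} \,\big\|\, \pi_{\text{ref}}\big) = \log n - \frac{n-1}{n},$$

which grows only logarithmically with the amount of selection, whereas their RL runs reach the same proxy reward only at much larger KL.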

There’s also a sort of implicit assumption in even using a framing that thinks about things as $V + X$; the world might be better thought of as naturally containing $(U, V)$ tuples (with $U$ our proxy measurement), and $X = U - V$ could be a sort of unnatural construction that doesn’t make sense to single out in the real world. (We do think this framing is relatively natural, but won’t get into justifications here.)

Despite these caveats, some takeaways we endorse:

  • Optimization for imperfect proxies is sometimes fine and sometimes doomed, depending on your distribution.

  • Goodhart’s law is subtle—even within a given framing of a problem, what happens when you optimize can be very sensitive to the exact numerical details of your measurements.

    • In particular, reaching for a normally-distributed toy model by default can be super misleading for thinking about a lot of real-world dynamics, because the normal distribution’s tails are much lighter than those of most real-world quantities, in a way that affects the qualitative takeaways.

  • In an alignment plan involving generation and evaluation, you should either (a) have reason to believe that your classifier’s errors are light-tailed, (b) have a reason why training an AI on human (or AI) feedback will be importantly different from conditioning on high feedback scores, or (c) have a story for why non-independence works in your favor.

Exercises

  1. Show that when $X$ and $V$ are independent and $\mathbb{P}(X + V \geq t) > 0$, we have $\mathbb{E}[V \mid X + V \geq t] \geq \mathbb{E}[V]$. Conclude that $\lim_{t \to \infty} \mathbb{E}[V \mid X + V \geq t] \geq \mathbb{E}[V]$ whenever that limit exists. This means that given independence, optimization always produces a plan that is no worse than random.

  2. When independence is violated, an optimized plan can be worse than random, even if your evaluator is unbiased. Construct a joint distribution for $X$ and $V$ such that $\mathbb{E}[V] = 0$, $\mathbb{E}[X] = 0$, and $\mathbb{E}[X \mid V = v] = 0$ for any $v$, but $\mathbb{E}[V \mid X + V \geq t] < 0$ for all sufficiently large $t$.

Answers to exercises are at the end of the next post.

  1. ^

    Thanks to Eric Neyman for first making this observation clear to me.

  2. ^

    One way to see this intuitively is to consider the shear transformation replacing $V$ by $V - cU$, where $c$ is a constant such that the resulting random variable is uncorrelated with $U$. In that situation we’d have a constant conditional expectation of $V - cU$ given $U$, so adding the $cU$ component back in should give us a linear expectation.

  3. ^

    To be precise, $\mathbb{E}[V \mid U \geq t]$ is an increasing function of the threshold $t$.

  4. ^

    Technically we could have $U = cV + X$ for some constant $c \neq 1$, but we can just rescale until the coefficient is 1 without changing anything.

  5. ^

    Most heavy-tailed distributions are also long-tailed, which means that $\lim_{t \to \infty} \mathbb{P}(X \geq t + c \mid X \geq t) = 1$ for all $c > 0$. So the optimization needed to get from the event “$X$ is at least $t$” to “$X$ is at least $t + c$” becomes arbitrarily small for large $t$.

  6. ^

    Note that this effect doesn’t depend on the behavior of $X$ or $V$ right around zero, just on their right tails.

  7. ^

    We’ll suppose that the PDFs of $X$ and $V$ are each proportional to $e^{-s(x)}$ for a suitable exponent $s$: in each case $s$ is a smooth function that quickly asymptotes to a multiple of $\sqrt{|x|}$ (a smaller multiple for $X$ than for $V$), so the densities have tails of the heavy-tailed kind discussed above for large $x$ in either direction but are smooth around $x = 0$.

  8. ^

    We’ll use something slightly stronger than this; we’d like $X$’s tails to be larger than $V$’s by a factor that grows superlinearly in $t$. More precise details in the next post.