How much alignment data will we need in the long run?

This question stands out to me because:

  • It should directly affect empirical alignment priorities today

  • While it is informed by both theoretical and empirical evidence, it seems tractable for purely theoretical alignment researchers to make progress on today

It’s even possible that theoretical alignment researchers already consider this to be a solved problem, in which case I think it would be valuable to have a carefully-reasoned write-up whose conclusions empirical alignment practitioners can feel confident in.

Thanks to Paul Christiano for discussion that prompted this post and to Jan Leike for comments.

Why this should affect empirical alignment priorities today

Outer alignment can be framed as a data quality problem. If our alignment training data correctly favors aligned behavior over unaligned behavior, then we have solved outer alignment. But if there are errors in our data that cause an unaligned policy to be preferred, then we have a problem.
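
To make this concrete, suppose (as one illustrative setup on my part, not the only possible one) that the alignment training data takes the form of pairwise human comparisons used to fit a reward model, RLHF-style:

    p(y_A ≻ y_B | x) = σ(r(x, y_A) − r(x, y_B))

    loss(r) = −E_{(x, y_chosen, y_rejected) ~ D} [ log σ(r(x, y_chosen) − r(x, y_rejected)) ]

Every comparison in which an unaligned response is mistakenly labeled as the chosen one trains r to rank that response higher, and a policy optimized against r inherits those errors, whether they arose because evaluation was too hard or simply because the humans in the loop made mundane mistakes.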

It is common to worry about errors in the alignment training data that arise from evaluation being too difficult for humans. I think this makes sense for two reasons:

  • Firstly, errors of this kind specifically incentivize models to deceive the human evaluators, which seems like an especially concerning variety of alignment failure.

  • Secondly, errors of this kind will get worse with model capability, which is a scary dynamic: models would get more misaligned as they became more powerful.

Nevertheless, I think we could still get catastrophic alignment failures from more mundane kinds of data quality issues. If we had the perfect scalable alignment solution, but the humans in the loop simply failed to implement it correctly, that could be just as bad as not using the solution at all.

But prevention of mundane kinds of data quality issues could look very different depending on the amount of data being collected:

  • If a large amount of alignment training data is needed, then a significant amount of delegation will be required. Hence practitioners will need to think about how to choose whom to delegate different tasks to (including defending against adversaries intentionally introducing errors), how to conduct quality control and incentivize high data quality, how to design training materials and interfaces to reduce the likelihood of human error, and so on.

  • If only a small amount of alignment training data is needed, then it will be more feasible to put a lot of scrutiny on each datapoint. Perhaps practitioners will need to think about how to appropriately engage the public on the choice of each datapoint in order to maintain public trust.

Hence settling the question of how much alignment training data we will need in the long run seems crucial for deciding how much empirical alignment efforts should invest in the first kind of preparation versus the second.

In practice, we may collect both a larger amount of lower-quality data and a smaller amount of higher-quality data, following some quality-quantity curve. The generalized form of the question then becomes: what is the probability of alignment for a given quality-quantity curve? Practitioners will then be able to combine this with feasibility considerations to decide what curve to ultimately follow.
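
As a toy illustration, with functional forms and numbers that are entirely made up rather than drawn from any data: suppose a fixed budget of human labor is spread across n datapoints, so that collecting more data means less scrutiny per datapoint and hence a higher error rate. Then one can write down an explicit (hypothetical) probability of alignment at each point on the resulting quality-quantity curve:

    import numpy as np

    # Toy model only: every functional form and constant below is an
    # assumption made up for illustration, not an empirical claim.

    BUDGET_HOURS = 10_000  # total human labeling budget (hypothetical)

    def error_rate(hours_per_datapoint):
        """Assumed per-datapoint error rate: more scrutiny, fewer errors."""
        return 0.2 * np.exp(-hours_per_datapoint)

    def p_alignment(n_datapoints):
        """Hypothetical probability of alignment at one point on the curve.

        Assumes diminishing returns on correct datapoints and a penalty for
        each erroneous one: a stand-in for the real, unknown relationship.
        """
        hours_each = BUDGET_HOURS / n_datapoints
        err = error_rate(hours_each)
        correct = n_datapoints * (1 - err)
        erroneous = n_datapoints * err
        return (1 - np.exp(-correct / 1e4)) * np.exp(-erroneous / 1e3)

    for n in [1e3, 1e4, 1e5, 1e6]:
        print(f"n = {n:>9,.0f}  ->  toy P(alignment) = {p_alignment(n):.3f}")

The particular shape of this toy curve is meaningless; the point is just that "probability of alignment for a given quality-quantity curve" can be written down as an explicit function, which is the kind of object practitioners would want an estimate of.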

Initial thoughts on this question

Considerations in favor of less alignment training data being required:

  • Larger models are more sample-efficient than smaller models, especially in the presence of pre-training. Hence for a given task we should expect the amount of alignment training data we need to go down over time.

  • There could be many rounds of fine-tuning used to teach models the precise details of performing certain tasks, and data quality may only be of great importance for the last few rounds.

  • We could design training schemes that are largely self-supervised, with models performing most of the reasoning about how good different outputs are for humans, and these training schemes might not require much human data.

  • In the limit, we could teach the model everything about the evaluation process in an entirely unsupervised way, and then use an extremely small amount of human data simply to get the model to recognize the output of this evaluation process as its objective.

  • Put another way, the information content of the instruction “be intent aligned” is very small once you have a model capable enough to understand exactly what you mean by this.
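
A rough back-of-the-envelope version of the last point, with numbers that are only illustrative guesses: an instruction that pins down what we mean by intent alignment might take a few hundred tokens to state, and a model capable enough to understand it would assign each of those tokens only a few bits of surprisal, so:

    information to convey ≈ 300 tokens × ~3 bits/token ≈ 10^3 bits

That is a vanishingly small amount of information compared to what the model absorbs during pre-training, which is the sense in which very little alignment data might be needed in the limit.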

Considerations in favor of more alignment training data being required:

  • Model-free RL is very sample-inefficient, since reward is a channel with very low information density. It takes tens or hundreds of millions of samples for models to learn to play simple video games very well when trained from scratch. So we may be coming from a starting point of needing vast amounts of data to perfectly align models on complex tasks, and may still need a lot even if this amount goes down over time. Model-based RL is more sample-efficient, but could risk introducing unwanted bias.

  • From-scratch scaling laws have separate terms for model size and number of samples, implying a cap on sample efficiency in the infinite model size limit (see the formula sketch after this list). These scaling laws do not apply to pre-trained models, but there should still be an information-theoretic lower bound on the number of samples required that is independent of model size.

  • Self-supervised approaches might not pan out, or they may be subject to instabilities that lead to misalignment, and so we may prefer approaches that rely on more human data over model-generated data that hasn’t been scrutinized as closely.

  • Deliberately telling the model about the evaluation process could make it more likely to exploit that process, so we again may prefer alternative approaches requiring more human data. Not telling the model about the evaluation process isn’t a scalable defense, since sufficiently smart models should be able to infer most of the relevant details anyway, but we might still prefer our chances with this defense in place.

  • Even if the instruction “be intent aligned” has little information content, we may generally feel better about our alignment chances if we use methods that directly supervise specific tasks, rather than methods that try to decouple alignment into a phase of its own. As models get smarter, we will want them to perform harder and more specialized tasks, and so the information content of how we want them to behave may increase over time.
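
To spell out the scaling-law point above: from-scratch scaling laws are commonly fit with a parametric form along the lines of

    L(N, D) ≈ E + A / N^α + B / D^β

where N is the number of parameters, D the number of training samples, E the irreducible loss, and A, B, α, β are fitted constants (the exact values don't matter for the argument). Taking the model size N to infinity still leaves the data-dependent term:

    L(∞, D) ≈ E + B / D^β

So in these fits, no amount of model scale removes the need for samples. The fits themselves only describe from-scratch training, but the worry expressed in the bullet above is that some analogous data-dependent floor survives pre-training.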

I think it’s also worth studying not just the long-run limit, but also how we should expect our alignment data requirements to change over time, since we are uncertain about the scale at which we could get dangerous misalignment. Empirical research could shed a lot of light on short-term trends, but we should be wary of extrapolating these too far if they seem at odds with theoretical conclusions.