The theory-practice gap

[Thanks to Richard Ngo, Damon Binder, Summer Yue, Nate Thomas, Ajeya Cotra, Alex Turner, and other Redwood Research people for helpful comments; thanks to Ruby Bloom for formatting this for the Alignment Forum for me.]

I’m going to draw a picture, piece by piece. I want to talk about the capability of some different AI systems.

You can see here that we’ve drawn the capability of the system we want to be competitive with, which I’ll call the unaligned benchmark. The unaligned benchmark is what you get if you train a system on whatever task makes it most generally capable: you have no idea how it’s thinking about things, and you can only point it at some goals and not others.

I think that the alignment problem looks different depending on how capable the system you’re trying to align is, and I think there are reasonable arguments for focusing on various different capabilities levels. See here for more of my thoughts on this question.

Alignment strategies

People have also proposed various alignment strategies. But I don’t think that these alignment strategies are competitive with the unaligned benchmark, even in theory.

I want to claim that most of the action in theoretical AI alignment is people proposing various ways of getting around these problems by having your systems do things that are human-understandable, instead of things that are justified only by working well.

For example, the hope with imitative IDA is that its recursive structure lets you build a dataset of increasingly competent answers to questions, and at every step you train a system to imitate these increasingly good answers. You end up with a really powerful question-answerer that was only ever trained to imitate humans-with-access-to-aligned-systems, and so your system is outer aligned.
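To make that recursive structure a bit more concrete, here’s a minimal toy sketch of the amplify-then-distill loop in Python. All the names (decompose, recombine, amplify, distill, ida) are illustrative stand-ins rather than any actual IDA implementation: the “distillation” here is just memorization where a real version would use supervised learning, and the decomposition would be done by actual (amplified) humans.

```python
# Toy sketch of imitative IDA: alternate amplification (a human who can
# delegate subquestions to the current model) and distillation (train a new
# model purely to imitate that amplified human).

def decompose(question):
    # Stand-in for a human breaking a question into easier subquestions.
    return [f"{question} (subquestion {i})" for i in (1, 2)]

def recombine(question, subanswers):
    # Stand-in for a human assembling subanswers into a final answer.
    return f"answer to {question!r}, built from {len(subanswers)} subanswers"

def amplify(model):
    """The 'human with access to the current model': answers a question by
    delegating its subquestions to the model and combining the results."""
    def amplified(question):
        return recombine(question, [model(sub) for sub in decompose(question)])
    return amplified

def distill(amplified, questions):
    """Train a new model purely to imitate the amplified human's answers.
    Here 'training' is memorization; in reality it would be supervised ML."""
    dataset = {q: amplified(q) for q in questions}
    return lambda q: dataset.get(q, "no idea")

def ida(initial_model, questions, n_rounds):
    """Alternate amplification and distillation for a few rounds."""
    model = initial_model
    for _ in range(n_rounds):
        # Each round, the new model imitates a slightly stronger target.
        model = distill(amplify(model), questions)
    return model

# Usage: start from a model that knows nothing and run a few rounds.
final_model = ida(lambda q: "no idea", ["How should we act?"], n_rounds=3)
print(final_model("How should we act?"))
```

The point of the sketch is just that the model is only ever trained to imitate a human-with-access-to-the-previous-model; nothing in the loop trains it directly on whatever happens to work.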

The bar I’ve added, which represents how capable I think you can get with amplified humans, is lower than the bar for the unaligned benchmark. I’ve drawn it lower because I think that if your system is trying to imitate cognition that can be broken down into human-understandable parts, it is systematically not going to be able to pursue certain powerful strategies that end-to-end-trained systems can. I think there are probably a bunch of concepts that humans can’t understand quickly, or maybe can’t understand at all, and if your systems are never allowed to use these concepts, they’re probably just going to be a bunch weaker.

I think that transparency techniques, as well as AI alignment strategies like microscope AI that lean heavily on them, rely on a similar assumption: that the cognition of the system you’re trying to align is factorizable into human-understandable parts. One component of the best-case scenario for transparency techniques is that anytime your neural net does something, you can get the best possible human-understandable explanation of why it’s doing that thing. If such an explanation doesn’t exist, your transparency tools won’t be able to assure you that your system is aligned even if it is.

To summarize, I claim that current alignment proposals don’t really tell us how to make systems that are aligned but that either:

  • produce plans that can’t be understood by amplified humans

  • do cognitive actions that can’t be understood by amplified humans

And so I claim that current alignment proposals don’t seem like they can control systems as powerful as the systems you’d get from an unaligned training strategy.

Empirical generalization

I think some people are optimistic that alignment will generalize from the cases where amplified humans can evaluate the system’s behavior to the cases where they can’t. I’m going to call this empirical generalization. I think that empirical generalization is an example of relying on empirical facts about neural nets that are not true of arbitrary general black-box function approximators.

I think this is a big part of the reason why some people are optimistic about the strategy that Paul Christiano calls “winging it”.

(I think that one particularly strong argument for empirical generalization is that if you imagine AGI as something like GPT-17 fine-tuned on human feedback on various tasks, your AGI might think about things in a very human-shaped way. (Many people disagree with me on this.) It currently seems plausible to me that AGI will be trained with a bunch of unsupervised learning based on stuff humans have written, which maybe makes it more likely that your system will have this very human-shaped set of concepts.)

The theory-practice gap

So the total height of that second column is the maximum level of capability we think we could theoretically attain using the same capability techniques we used for the unaligned benchmark, but with the alignment strategies we know about right now. In practice, though, we probably aren’t going to do that well, for a variety of practical reasons. For example, as I’ve said, I think transparency tools are theoretically limited, but right now we’re also just way below the maximum capability that transparency tools could theoretically give us.

So I want to claim that reality will probably intervene in various ways, and that the maximum capability of an aligned AI we can actually build will be lower than the maximum theoretically achievable from the techniques we know about plus empirical generalization. I want to call that difference the theory-practice gap.

Sources of theory-practice gap

Practical difficulties, e.g. getting human feedback

Human feedback is annoying in a wide variety of ways; you have to do quality control on your labellers, and so on.

Problems with the structure of the recursion

I think it’s reasonably plausible that the most competitive way of making powerful systems just doesn’t have the structure you need for the amplified-human stuff to work out. For example, maybe the best way of making AGI is some kind of evolution simulation, where you have a population of little creatures that compete with each other. If that’s the only way of making smart systems, then I think it’s pretty plausible that there’s just no way of building a trusted, amplified reward signal out of it, and so you can’t do the IDA-style things, or things where you use a system to do transparency analysis on a slightly more powerful version of itself.

NP-hard problems

Maybe your amplified system won’t be able to answer questions like “are there any inputs on which this system does the wrong thing?” even if it wants to. E.g. the RSA-2048 problem.

I think that transparency has a related problem: the most competitive-to-train models might have internal structure that amplified humans would be able to understand if it was explained to them, but we might not be able to get a model to find that structure.


Why am I lumping together fundamental concerns like “maybe these alignment strategies will require solving NP-hard problems” with things like “it’s annoying to do quality control on your labelling contractors”?

It’s primarily because I want to emphasize that these concerns are different from the fundamental limitations of currently proposed alignment schemes: even if you assume that we don’t e.g. run into the hard instances of the NP-hard problems, I think that the proposed alignment schemes still aren’t clearly good enough. There are lots of complicated arguments about the extent to which we have some of these “practical” problems; I think that these arguments distract from the claim that the theoretical alignment problem might be unsolved even if these problems are absent.

So my current view is that if you want to claim that we’re going to fully solve the technical alignment problem as I described it above, you’ve got to believe some combination of:

  • we’re going to make substantial theoretical improvements

  • factored cognition is true

  • we’re going to have really good empirical generalization

(In particular, your beliefs in these factors need to add up to some constant: e.g., if you’re more bullish on factored cognition, you need less of the other two.)

I feel like there’s at least a solid chance that we’re in a pretty inconvenient world where none of these are true.

Classifying alignment work

This picture suggests a few different ways of trying to improve the situation.

  • You could try to improve the best alignment techniques. I think this is what a lot of AI alignment theoretical work is. For example, I think Paul Christiano’s recent imitative generalization work is trying to increase the theoretically attainable capabilities of aligned systems. I’ve drawn this as the red arrow on the graph below.

  • You can try to reduce the theory-practice gap. I think this is a pretty good description of what I think applied alignment research is usually trying to do. This is also what I’m currently working on. This is the pink arrow.

  • You can try to improve our understanding of the relative height of all these bars.

AI alignment disagreements as variations on this picture

So now that we have this picture, let’s try to use it to explain some common disagreements about AI alignment.

I think some people think that amplified humans are actually just as capable as the unaligned benchmark. I think this is basically the factored cognition hypothesis.

I think there are a bunch of really ML-flavored alignment people who seem to be pretty optimistic about empirical generalization. From their perspective, almost everything that AI alignment researchers should be doing is narrowing that theory-practice gap, because that’s the only problem.

I think there are also a bunch of people, like perhaps the stereotypical MIRI employee, who think that amplified humans aren’t that powerful, that you’re not going to get any empirical generalization, and that there are a bunch of problems with the structure of the recursion for amplification procedures. And so it doesn’t feel that important to them to work on the practical parts of the theory-practice gap, because even if we totally succeeded at getting that to zero, the resulting systems wouldn’t be very powerful or very aligned, and so it just wouldn’t have mattered that much. The stereotypical such person wants you to work on the red arrow instead of the pink arrow.

How useful is it to work on narrowing the theory-practice gap for alignment strategies that won’t solve the whole problem?

See here.

Conclusion

I feel pretty nervous about the state of the world described by this picture.

I’m really not sure whether I think that theoretical alignment researchers are going to be able to propose a scheme that gets around the core problems with the schemes they’ve currently proposed.

There’s a pretty obvious argument for optimism here, which is that people haven’t actually put that many years into theoretical AI alignment research so far, and presumably they’re going to do a lot more of it between now and AGI. I think I’m at something like 30% on the proposition that before AGI, we’ll come up with an alignment scheme that just looks really good and clearly solves most of the problems with current schemes.

I think I overall disagree with people like Joe Carlsmith and Rohin Shah mostly in two places:

  • By the time we get to AGI, will we have alignment techniques that are even slightly competitive? I think it’s pretty plausible the answer is no. (Obviously it would be very helpful for me to operationalize things like “pretty plausible” and “slightly competitive” here.)

  • If we don’t have the techniques to reliably align AI, will someone deploy AI anyway? I think it’s more likely the answer is yes.