Four Motivations for Learning Normativity

I have been pretty satisfied with my desiderata for learning normativity, but I haven’t been very satisfied with my explanation of why exactly these desiderata are important. I have a sense that it’s not just a grab-bag of cool stuff; something about trying to do all those things at once points at something important.

What follows are four different elevator pitches, which tell different stories about how it all hangs together. Desiderata are bolded.

Conceptual Difficulties with Outer Alignment

The classic problem of outer alignment is that we have no perfect loss function, so we can’t just go optimize. The problem can be understood by thinking about Goodhart’s law: optimization amplifies any gap between the loss function we specify and what we actually want. The classic response to this is value uncertainty and value learning, but wireheading, human manipulation, and no-free-lunch results make it seem plausible that we have the same problem one level up: we still don’t know how to specify a perfect loss function at the meta-level (a loss function for the value-learning process itself), and imperfect loss functions can still create big problems.

So, just like value learning tackles the initial problem head-on by suggesting we manage our uncertainty about values and gain knowledge over time, learning at all levels suggests that we tackle the meta-problem directly: explicitly representing the fact that we don’t have a perfectly good loss function at any level, while managing that uncertainty and learning to learn over time.

Humans can only give explicit feedback at so many meta-levels, so between-level sharing is critical for any meaningful learning to take place at higher meta-levels. Otherwise, higher meta-levels remain highly uncertain, which itself makes learning at lower levels almost impossible (since you can’t learn if you have high uncertainty about learning-to-learn).

A consequence of having no perfect loss function is no perfect feedback; no evidence about what the system should do can be considered absolute. A helpful measure for coping with this is to support uncertain feedback, so that humans can represent their uncertainty when they provide feedback. Ultimately, though, humans can have systematic biases which require reinterpretable feedback to untangle.
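To make the uncertain-feedback idea slightly more concrete (this is just an illustration I’m choosing here, not something the desiderata commit to): one standard way to model feedback that the human only endorses with confidence q is a Jeffrey-style update, which mixes the two possible conditionings rather than treating the feedback E as absolute:

$$P_{\text{new}}(H) \;=\; q\,P(H \mid E) \;+\; (1-q)\,P(H \mid \neg E)$$

Setting q = 1 recovers ordinary conditioning, i.e. treating the feedback as certain; the desideratum is precisely that we should not be forced into that case.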

Even with all these tools, some forms of feedback would be difficult or impossible to articulate without process-level feedback: the ability to tell the system that specific patterns of thinking are good or bad, without needing to unpack those judgements in terms of consequences. To be meaningful, this requires whole-process feedback: we need to judge thoughts by their entire chain of origination. (This is technically challenging, because the easiest way to implement process-level feedback is to create a separate meta-level which oversees the rest of the system; but then this meta-level would not itself be subject to oversight.)

Finally, because it’s not feasible for humans to approve every thought process by hand, it’s critical to have learned generalization of process-level feedback. This doesn’t sound like a big request, but is technically challenging when coupled with the other desiderata.

Recovering from Human Error

A different place to start is to motivate everything from the desire to design a system that can recover from errors which humans introduce.

The ability to learn when there’s no perfect feedback represents a desire to recover from input errors. Uncertain feedback and reinterpretable feedback follow from this as before.

We can’t avoid all assumptions, but specifying a loss function is one area where we seem to assume much more than we bargain for; what we mean is something like “good things are roughly in this direction”, but what we get is more like “good things are precisely this”. We want to avoid making this type of mistake, hence no perfect loss function. Learning at all levels ensures that we can correct this type of mistake wherever it occurs. Between-level sharing is needed in order to get any traction with all-level learning.

Whole-process feedback can now be motivated by the desire to learn a whole new way of doing things, so that nothing is locked in by architectural mistakes. This of course implies process-level feedback.

Learned generalization of feedback can be seen as a desire to pre-empt human error correction: learning the patterns behind the errors humans correct, so as to systematically avoid those sorts of errors in the future.

We Need a Theory of Process-Level Feedback

We could also motivate things primarily through the desire to facilitate process-level feedback. Process-level feedback is obviously critical for inner alignment; we want to be able to tell a system to avoid specific kinds of hypotheses (those which contain inner optimizers). However, although we can apply penalties to neural networks and the like, we lack a general theory of process-level feedback that’s as rigorous as the theories we have for other forms of learning. I think it’s probably a good idea to develop such a theory.

In addition to inner alignment, process-level feedback could be quite beneficial to outer-alignment problems such as corrigibility, non-manipulation, and non-wireheading. As I argued in another section, we can often point out that something is wrong without being able to give a utility function which represents what we want. So process-level feedback just seems like a good tool to have when training a system, and perhaps a necessary one.

You might think process-level feedback is easy to model theoretically. In a Bayesian setting, we can simply examine hypotheses and knock out the bad ones (updating on not-this-hypothesis). However, this is an incredibly weak model of process-level feedback, because there is no learned generalization of process-level feedback! Learned generalization is important because humans can’t be expected to give feedback on each individual hypothesis, telling the system whether it’s OK or full of inner optimizers. (If we develop a technology that can do this automatically, great; but otherwise, we need to solve it as a learning problem.)
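As a minimal sketch of that naive picture (the representation and names here are mine, purely for illustration): feedback about a hypothesis just deletes its posterior weight and renormalizes, so the judgement never transfers to hypotheses the human hasn’t individually inspected.

```python
# Naive Bayesian "process-level feedback": knock out a flagged hypothesis.
# Purely illustrative; hypotheses are just labeled weights.

def knock_out(posterior, bad_hypothesis):
    """Update on 'not this hypothesis': remove it and renormalize.

    Note that nothing is learned about any *other* hypothesis,
    so the feedback does not generalize.
    """
    remaining = {h: p for h, p in posterior.items() if h != bad_hypothesis}
    total = sum(remaining.values())
    return {h: p / total for h, p in remaining.items()}

posterior = {"h1": 0.5, "h2": 0.3, "h3": 0.2}
posterior = knock_out(posterior, "h2")  # human flags h2 as containing an inner optimizer
print(posterior)                        # {'h1': 0.714..., 'h3': 0.285...}
```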

The next-most-naive model is a two-level system: object-level hypotheses predict the data, while meta-level hypotheses predict which object-level hypotheses are benign or malign. Humans provide benign/malign feedback about first-level hypotheses, and second-level hypotheses generalize this information. This proposal is not very good, because now there’s no way to provide process-level feedback about the second-level hypotheses; absent any justification to the contrary, these are just as liable to be malign. This illustrates the need for whole-process feedback.
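Here is an equally simplified sketch of the two-level proposal (again, the representations and names are stand-ins I’ve chosen for illustration): meta-level hypotheses are candidate rules for labeling object-level hypotheses as malign; the few human labels select among them, and the winning rule generalizes to unlabeled hypotheses. But nothing in the setup ever vets the meta-level rules themselves.

```python
# Two-level sketch: object-level hypotheses predict data (omitted here);
# meta-level hypotheses predict which object-level hypotheses are malign.
# Purely illustrative stand-ins.

object_hypotheses = {
    "h1": {"uses_search": False},
    "h2": {"uses_search": True},
    "h3": {"uses_search": True},
}

# Meta-level hypotheses: candidate rules mapping features -> "is malign?"
meta_hypotheses = {
    "m1": lambda f: f["uses_search"],       # internal search => malign
    "m2": lambda f: not f["uses_search"],   # internal search => benign
}

# Humans only label a couple of object-level hypotheses by hand.
human_labels = {"h2": "malign"}

def agreement(rule):
    """How many of the human labels does this meta-level rule reproduce?"""
    return sum(rule(object_hypotheses[h]) == (label == "malign")
               for h, label in human_labels.items())

# Pick the best-fitting meta-level rule and let it generalize...
best = max(meta_hypotheses, key=lambda m: agreement(meta_hypotheses[m]))
verdicts = {h: meta_hypotheses[best](f) for h, f in object_hypotheses.items()}
print(best, verdicts)
# ...but no feedback mechanism ever checks whether 'best' is itself malign.
```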

This suggests a version of learning at all levels: if the process-level feedback at one level can be regarded as data for the next level, then feedback at every level becomes generalizable. However, between-level sharing is still necessary, since patterns we want to avoid at one level will very often be patterns we want to avoid at all levels.

In this story, no perfect feedback and no perfect loss function are less important.

Generalizing Learning Theory

Another way to motivate things is through the purely theoretical desire to push learning theory as far as possible. Logical induction can be thought of as pure learning-theory progress: a very broad bounded-regret property was discovered which implied many other desirable properties. In particular, it generalized to non-sequential, non-realizable settings, whereas Solomonoff induction only dealt with sequential prediction in realizable settings. It also dealt with a form of bounded rationality, whereas Solomonoff induction only dealt with unbounded rationality.

So, why not try to push the boundaries of learning theory further?

As I discussed in the previous section, we can think of process-level feedback as feedback directly on hypotheses. Whole-process feedback ensures we focus on the interesting part of the problem, making hypotheses judge each other and themselves, rather than getting a boring partial solution by stratifying a system into separate levels.

The learned generalization problem can be understood better through a learning-theory lens: the problem is that Bayesian setups like Solomonoff induction offer no regret bounds for updates about hypotheses, because they focus exclusively on predicting sense-data. So although Bayesianism supports updates on any proposition, this does not mean we get nice learning-theoretic guarantees with respect to all such updates. This seems like a pretty big hole in Bayesian learning theory.
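For contrast, the kind of guarantee that does exist on the sense-data side looks, loosely stated, like the standard Solomonoff induction bound: for any computable environment μ, the cumulative expected prediction error of the universal mixture M is bounded by a constant depending only on μ’s description length,

$$\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[ D_{\mathrm{KL}}\!\big( \mu(\cdot \mid x_{<t}) \,\big\|\, M(\cdot \mid x_{<t}) \big) \right] \;\le\; K(\mu)\,\ln 2.$$

Nothing analogous is on offer for an update like “this hypothesis is malign”: Bayes will happily condition on it, but no bound says the system eventually stops making that kind of mistake.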

So we want learned generalization of as many update types as possible, by which we mean that we want loss bounds on as many different types of feedback as possible.

Uncertain feedback is just another generalized feedback type for us to explore.

Reinterpretable feedback is a more radical suggestion. We can motivate this through a desire for a theory of meta-learning: how do we learn to learn, in the broadest possible way? This motivates thinking about no perfect feedback and no perfect loss function scenarios.

Learning at all levels could be motivated from the needs of process-level feedback, as in the previous section, or from the nature of the no-perfect-loss-function scenario, as in the first section. Between-level sharing then follows from learning at all levels, as usual.