# Reward function learning: the learning process

In the previous post, I introduced the formalism for reward function learning, and presented the expected value function for a learning agent.

I’ll assume that people reading this are familiar with the concepts, the notations and the example of that post. I’ll now look at the desirable properties for the learning function ρ.

# 1 Rigging the learning process

## 1.1 The flaws of general learning agents

First of all, are there any problems with a general ρ? There are three major ones:

1) An agent learning to maximise with a general ρ has to use the whole of the episode to assess the value of any one action. Thus, it can learn as a Monte-Carlo agent, but not as a Q-learning or Sarsa agent.

2) An agent maximising with a general ρ can take actions that don’t increase reward, but are taken purely to “justify” its past actions.

3) An agent maximising with a general ρ can sometimes pick a policy that is sub-optimal, **with certainty**, for all rewards under consideration.

All of these points can be seen by considering our robot cooking/washing example. In that case, it can be seen that the optimal behaviour for that robot is ; this involves cooking the two pizzas, then going East to push the lever onto the cooking choice, and then ending the episode.

Thus , so the final reward is , and the agent earns a reward of .

Why must Q-learning fail here? Because the reward for the first , at the point the agent does it, is , not ; this is because, at this point, is still . Thus the reward component in the Q-learning equation is incorrect.

Also note that the rest of the policy, , serves no purpose in getting rewards; it just “justifies” the reward from the first action .

Let us now compare this policy with the policy : go North, cook, end the episode. For the value learning function, this has a value of only , since the final reward is . However, under the reward of , this would give a reward of , more than the that gets here. And under the reward of , this would get a reward of , more than the that gets under . Thus the optimal policy for the value learner is worse, for both rewards, than this alternative policy.
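Point 1) can be made concrete with a small simulation. Everything below is an invented miniature of the cooking example (the episode, the lever mechanics, and the reward numbers are illustrative stand-ins, not the post’s exact values): a Monte-Carlo agent scores each action with the final, post-lever reward function, while a Q-learning-style one-step target scores it with the reward function as it stood when the action was taken.

```python
# Hypothetical miniature of the rigging example. The agent cooks twice,
# then presses a lever that fixes the final reward function to R_cook.

episode = [
    ("cook", None),      # (action, lever pressed at this step)
    ("cook", None),
    ("east", "R_cook"),  # pressing the lever fixes the final reward
]

def final_reward_fn(partial_episode):
    """rho at this point: whichever lever was pressed so far, else a 50/50 mix."""
    for _, lever in partial_episode:
        if lever is not None:
            return lever
    return "mix"

def reward(action, reward_fn):
    """Invented per-action rewards under each reward function."""
    table = {"R_cook": {"cook": 1.0, "east": 0.0},
             "mix":    {"cook": 0.5, "east": 0.0}}
    return table[reward_fn].get(action, 0.0)

# Monte-Carlo: score every action with the *final* reward function.
mc_return = sum(reward(a, final_reward_fn(episode)) for a, _ in episode)

# Q-learning-style: score each action with rho *as it stood at the time*.
# The two "cook" actions predate the lever press, so they are mis-scored.
ql_return = sum(reward(a, final_reward_fn(episode[: i + 1]))
                for i, (a, _) in enumerate(episode))

print(mc_return, ql_return)  # 2.0 1.0
```

The one-step targets under-credit the early cook actions, because their reward is only determined at the end of the episode; with a riggable ρ the two scores disagree, which is why only episode-level (Monte-Carlo) learning is sound here.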

## 1.2 Riggable learning processes

The problem with the ρ used in the robot example is that it’s *riggable* (I used to call this “biasable”, but that term is seriously overused). What does this mean?

Well, consider again the equation for the expected value. The only history inputs into it are the complete histories. So, essentially, only the value of ρ on these complete histories matters.

In our example, we chose a that was independent of policy, but we could have gone a different route. Let be any policy such that the final reward is ; then define for any history (and conversely ). Similarly, if were a policy such that the final reward was , then set . If the policy never brings the agent to either lever, then , as before. Stochastic policies have values between these extremes.

This ρ is no longer independent of policy, but it is **Bayesian**; that is, the current value of ρ is the same as the expected value of ρ after updating.

However, it is not possible to keep the same on complete histories, and have it be both Bayesian, and independent of policy: there is a tension between the two.
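The Bayesian condition can be written out explicitly. The notation here is reconstructed from context (the original equations did not survive extraction), so the symbols may differ slightly from those of the previous post:

```latex
% Bayesian: rho is a martingale under the agent's predicted histories,
% i.e. the current estimate equals the expectation of the updated estimate.
\rho(h_t) \;=\; \mathbb{E}_{h_{t+1}\,\sim\,P(\,\cdot \mid h_t,\pi)}\big[\rho(h_{t+1})\big]

% Independent of policy: \rho(h) is a function of the history h alone,
% taking the same value whatever policy \pi generated h.
```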

Then we define:

A learning process ρ is **unriggable** if it is both Bayesian and independent of policy.

## 1.3 Unriggable learning processes

So, what would be an example of an unriggable learning process? Well consider the following setup, where the robot no longer has levers to set their own reward, but instead their owner is in the rightmost box.

In this case, if the robot enters that box, the owner will inform them of whether they should cook or wash.

Since there is hidden information, this setup can be formalised as a POMDP. The old state-space was , of size , which covered the placement of the robot and the number of pizzas and mud splatters (and whether the episode was ended or not).

The new state space is , with encoding whether the owner is minded to have the robot cooking or washing. The observation space is of size : in most states, the observation only returns the details of , not of , but in the rightmost box, it returns the actual state, letting the agent know whether the human intends it to cook or wash. Thus the observation function is deterministic (if you know the state, you know the observation), but not one-to-one (because for most , and will generate the same observation).

The transition function is still deterministic: it operates as before on , and maps to and to .

The initial state function is stochastic, though: if is the usual starting position, then : the agent thinks it’s equally likely that its owner desires cooking as that it desires washing.

Then what about ρ? Well, if the history involves the agent being told “cook” the very first time it enters the rightmost box, then ρ puts all its weight on the cooking reward. If it was told “wash” the very first time it enters the rightmost box, then ρ puts all its weight on the washing reward.

It’s easy to see that this ρ is independent of policy. It’s also Bayesian, because it actually represents the ignorance of the agent as to whether it lives in the cooking part of the environment, or the washing part, and it gets updated as the agent figures this out.

What then is the agent’s optimal policy? It’s to start with , to get the human’s decree as to which reward is the true one. It will then do , and, if the human has said , it will finish with , giving it a final reward function of and a final total reward of . If the human said , it would finish with , giving it a final reward function of and a final total reward of . Its expected total reward is thus .
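A back-of-the-envelope version of this calculation, with stand-in numbers since the post’s exact reward values are not reproduced here (`best_if_cook` and `best_if_wash` are hypothetical optimal rewards achievable under each announced preference):

```python
# Invented numbers for the "ask the owner first" calculation.
p_cook = 0.5         # prior probability the owner wants cooking
best_if_cook = 2.0   # hypothetical best reward achievable under R_cook
best_if_wash = 1.0   # hypothetical best reward achievable under R_wash

# Ask first, then act optimally for whichever reward was announced:
ev_ask = p_cook * best_if_cook + (1 - p_cook) * best_if_wash

# Commit blindly to one behaviour: it only pays off on the matching branch.
ev_commit = max(p_cook * best_if_cook, (1 - p_cook) * best_if_wash)

print(ev_ask, ev_commit)  # 1.5 1.0
```

Asking first always does at least as well as committing blindly, since the agent gets to optimise for the announced reward in every branch.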

## 1.4 Properties of unriggable learning processes

Now, if ρ is unriggable, then we have (almost) all the desirable properties:

1) An agent learning to maximise with an unriggable ρ may be a Q-learning agent.

2) An agent maximising with an unriggable ρ will be indifferent to past rewards.

3) An agent maximising with an unriggable ρ will never pursue a policy that is worse, with certainty, for all rewards under consideration.

These all come from a single interesting result:

If ρ is Bayesian, then the value function and the value function of the previous post differ by a constant that is independent of future actions. Thus, if ρ is unriggable, this is the value function of a single classical reward function (which is actually well-defined, independently of the policy).

This establishes all the nice properties above, and will be proved in the appendix of this post.

Note that even though the value functions are equal, that doesn’t mean that the total reward will be given by . For instance, consider the situation below, where the robot goes :

At the moment where it cooks the pizzas, it has , so it will get an of , with certainty. On the other hand, from the perspective of value learning, it will learn at the end that it either has reward function , which will give it a reward of , or has reward function , which will give it a reward of . Since , the expectations are the same, even if the outcomes are different.
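A one-line numeric check of that equality, with invented numbers (`p` stands for ρ’s weight on the cooking reward, and the two pizzas are assumed worth 1 each under the cooking reward and 0 under the washing reward):

```python
# Toy check of "same expectation, different outcomes" (numbers invented).
p = 0.5        # rho's weight on the cooking reward when the pizzas are cooked
n_pizzas = 2

# Under the mixed reward, the agent books p per pizza, with certainty.
certain_reward = p * n_pizzas * 1.0

# Under value learning, it later discovers either reward 1 or 0 per pizza.
expected_reveal = p * (n_pizzas * 1.0) + (1 - p) * (n_pizzas * 0.0)

print(certain_reward, expected_reveal)  # 1.0 1.0
```

The certain outcome and the expectation over revealed outcomes coincide, even though the realised rewards in the second case are either 2 or 0.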

# 2 Influence

## 2.1 Problems despite unriggability

Being unriggable has many great properties. Is it enough?

Unfortunately not. The learning process ρ can be unriggable but still manipulable by the agent. Consider for instance the situation below:

Here, not only is there the adult with their opinion on cooking and washing, but there’s also an infant, who will answer randomly. This can be modelled as a POMDP, with state space , where (resp ) designates that the infant will answer (resp ), and the same for the adult. The observation space is of size ; when the robot is in the leftmost (rightmost) box, it discovers the value of () in the obvious way. The dynamics are as expected, with preserving the values of and .

It’s the initial distribution which encodes the uncertainty: the agent starts in each of the four possible combinations of adult and infant answers with probability 1/4.

Now we need to define ρ. This will be relatively simple: it will settle on the cooking reward as soon as the agent figures out that whichever informant it first hears will answer “cook”, and will not update further. It will settle on the washing reward as soon as it figures out that the first answer it hears is “wash”, and will not update further. If it has no information about either, it will stick with the 50–50 mix.

It’s clear that this ρ is independent of policy; but is it Bayesian? It is indeed, because each time it updates, it goes to certainty in cooking or in washing with equal probability, depending on the observation (and stays there). Before updating, it is always at the 50–50 point, so the value of ρ is always the same as the expected value of ρ.

So we have an unriggable ρ; what can go wrong?

For that ρ, the optimal policy is to ask the infant, then follow their stated values. This means that it avoids the extra square on the way to enquire of the adult, and gets a total expected reward of , rather than the it would get from asking the adult.

## 2.2 Uninfluenceable

Note something interesting in the preceding example: if we keep ρ as is, but change the knowledge of the robot, then ρ is no longer unriggable. For example, if the agent knew that it was in a branch where the adult will answer “cook”, then it has a problem: if ρ initially puts probability 1/2 on cooking, then it is no longer Bayesian if it goes to ask the adult, because it knows what their answer will be. But if ρ initially puts probability 1 on cooking, then it is no longer Bayesian if it asks the infant, because it doesn’t know what their answer will be.
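This failure is easy to check numerically. In the sketch below (hypothetical, with ρ compressed to a single number, the probability it assigns to cooking), the jump-to-certainty update is Bayesian exactly when the informant’s answer is unpredictable from the agent’s perspective:

```python
# Hypothetical sketch: rho is summarised by one number, its weight on "cook".

def update(p_cook, obs):
    """The rho from the text: jump to 0 or 1 on the first answer, then freeze."""
    if p_cook in (0.0, 1.0):
        return p_cook
    return 1.0 if obs == "cook" else 0.0

def expected_posterior(p_cook, q_cook):
    """Expected rho after asking an informant who answers "cook" w.p. q_cook.
    For rho to be Bayesian, this must equal the prior p_cook."""
    return q_cook * update(p_cook, "cook") + (1 - q_cook) * update(p_cook, "wash")

print(expected_posterior(0.5, 0.5))  # unbiased informant: 0.5, Bayesian
print(expected_posterior(0.5, 0.9))  # predictable informant: 0.9, not Bayesian
```

With a 50/50 informant the expected posterior equals the prior, so the martingale property holds; once the agent can predict the answer, the expected posterior drifts away from the prior and the property fails.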

The same applies for any piece of information the robot could know. We’d therefore like to have some concept of “unriggable conditional on extra information”; something like

for some sort of extra information .

That, however, is not easy to capture in POMDP form. But there is another, analogous approach. The state space of the POMDP above actually describes four deterministic environments, with the robot merely uncertain as to which environment it operates in.

This can be generalised. If a POMDP is explored for finitely many steps, then it can be seen as a probability distribution over a set of *deterministic environments* (see here for more details on one way this can happen—there are other equivalent methods).

Any history will update this distribution over which deterministic environment the agent lives in (the environment can be seen as the set of all the “hidden variables”). So we can talk sensibly about expressions like P(μ ∣ h), the probability that the environment is μ, given that we have observed the history h.

Then we say that a learning process ρ is **uninfluenceable** if there exists a function f, mapping environments to distributions over reward functions, such that

ρ(h) = Σ_μ P(μ ∣ h) f(μ).

Here P(μ ∣ h) means the probability of the environment μ in the distribution updated on the history h.

This expression means that ρ *merely encodes ignorance about the hidden variables of the environment*.
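Here is a minimal sketch of this definition in code, assuming a toy version of the adult/infant setup; the environment labels, the history encoding, and the particular `f` (which listens only to the adult) are all illustrative stand-ins, not the post’s exact objects:

```python
# Uniform prior over the four deterministic environments (adult x infant answers).
prior = {
    ("adult_cook", "infant_cook"): 0.25,
    ("adult_cook", "infant_wash"): 0.25,
    ("adult_wash", "infant_cook"): 0.25,
    ("adult_wash", "infant_wash"): 0.25,
}

def f(env):
    """f maps each environment to a distribution over rewards;
    "ask the adult" means f depends only on the adult component."""
    adult, _infant = env
    if adult == "adult_cook":
        return {"cook": 1.0, "wash": 0.0}
    return {"cook": 0.0, "wash": 1.0}

def consistent(env, history):
    """An observed answer rules out the environments that disagree with it."""
    adult, infant = env
    for obs in history:
        if obs.startswith("adult") and obs != adult:
            return False
        if obs.startswith("infant") and obs != infant:
            return False
    return True

def rho(history):
    """rho(h) = sum over mu of P(mu | h) * f(mu): pure ignorance about mu."""
    posterior = {e: p for e, p in prior.items() if consistent(e, history)}
    z = sum(posterior.values())
    mix = {"cook": 0.0, "wash": 0.0}
    for env, p in posterior.items():
        for r, w in f(env).items():
            mix[r] += (p / z) * w
    return mix

print(rho([]))               # no information: {'cook': 0.5, 'wash': 0.5}
print(rho(["infant_cook"]))  # infant's answer carries no weight under this f
print(rho(["adult_wash"]))   # adult's answer settles it
```

Because ρ is computed only from the posterior over environments, nothing the agent does can shift it except by genuinely revealing which environment it is in; that is the sense in which it merely encodes ignorance about the hidden variables.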

The key properties of uninfluenceable learning processes are:

An uninfluenceable learning process is also unriggable.

An uninfluenceable learning process is exactly one that learns variables about the environment that are independent of the agent.

I will not prove these here (though the second is obvious by definition).

In our most recent robot example, there are four deterministic environments, one for each combination of the adult’s and infant’s intended answers.

It isn’t hard to check that there is no f which makes this ρ into an uninfluenceable learning process. By contrast, if we define ρ as given by the function:

then we have an uninfluenceable ρ that corresponds to “ask the adult”. We finally have a good definition of a learning process, and the agent that maximises it will simply go and ask the adult before accomplishing the adult’s preferences:

# 3 Warning

If a learning function is uninfluenceable, then it has all the properties we’d expect if we were truly learning something about the outside world. But a) good learning functions may be impossible to make uninfluenceable, and b) being uninfluenceable is not enough to guarantee that the learning function is good.

On point a), anything that involves human feedback is generally influenceable and riggable, since the human feedback is affected by the agent’s actions. This includes, for example, most versions of the approval directed agent.

But that doesn’t mean that those ideas are worthless! We might be willing to accept a little bit of rigging in exchange for other positive qualities. Indeed, quantifying and controlling rigging is a promising direction for further research.

What of the converse—is being uninfluenceable enough?

Definitely not. For example, any constant ρ—one that never learns, never changes—is certainly uninfluenceable.

As another example, if σ is any permutation of the set of rewards, then σ∘ρ is also uninfluenceable. Thus “learn what the adult wants, and follow that” is uninfluenceable, but so is “learn what the adult wants, and do the opposite”.

We’ve shown previously that the ρ corresponding to “ask the adult” is uninfluenceable. But so is the one corresponding to “ask the infant”!

So we have to be absolutely sure not only that our ρ has good properties, but also of exactly what it is leading the agent to learn.

# 4 Appendix: proof of value-function equivalence

We want to show that:

If ρ is Bayesian, then the value function of this post and the value function of the previous post differ only by a term that is independent of future actions.

As a reminder, the two value functions are:

To see the equivalence, let’s fix and in , and consider the term . We can factor the conditional probability of , given , by summing over all the intermediate :

Because ρ is Bayesian, this becomes . Then note that , so that expression finally becomes

which is the corresponding expression for when you fix any and . This shows equality for .

Now let’s fix and , in . The value of is fixed, since it lies in the past. Then expectation of is simply the current value . This differs from the expression for - namely - but both values are independent of future actions.


Do you mean “onto” rather than “one-to-one”? (If the function is not one-to-one, which two inputs map to the same output?)

Do you mean “then” instead of “when”?

I think this is only a big problem if the agent models the effects of its physical actions on the range of feedback the human is likely to give. In poetic terms, I’m optimistic about a dualistic approach where value learning and taking action in the world exist in “non-overlapping magisteria”. This could be enforced at the architecture level. It might also help with the infant problem, if enforcing a division like this lets us better control the manner in which the AI retrieves information about our values.

For a concrete example of how this and various other cool things might be achieved, see this. My use of formalism is a bit different than yours: I only talk about MDPs, never POMDPs. Instead of the reward being an aspect of the state that the AI needs to discover, I treat the agent’s beliefs about the reward as an aspect of the state that is known with certainty. The transition model for the reward is then viewed as nondeterministic from the AI’s perspective.

I haven’t given that a deep read, so apologies if I misunderstand, but I don’t see how that post solves the issues. If you have an update rule and prior for “preference beliefs”, then this is just another ρ.

It would be nice if that ρ were uninfluenceable and good, but I don’t see why it would be. The problem is that there is no abstract fact about the universe that corresponds to “our preferences”, which we just need to point the AI towards.

When an AI asks a human about their preferences, three things happen:

1) The AI learns something about human preferences

2) The human learns something out about their own preferences

3) The human establishes new preferences they didn’t have before

The problem is that these three things can’t be cleanly separated, and 3) is absolutely essential because of how messy, contradictory and underdefined human preferences are. But 3) (and to a lesser extent 2)) is also how AIs can manipulate human preferences. And again, there is no clear concept of “manipulation” by which it can be distinguished from “helping the human sort out their preferences”.

Also, I noted that you used “never deceive anyone” as part of the aims. This is a very hard problem; I think it might be as hard as getting human values right (though I feel the two problems are to some extent separate; neither implies the other). See https://agentfoundations.org/item?id=1261

This I’m more optimistic about. My version of this is to have π0 be the policy of a pure learning agent—one that learns, but doesn’t try to maximise. Then the actual agent tries to maximise the value of the reward it would have computed, had it followed π0. This “counterfactual learning” is uninfluenceable. https://agentfoundations.org/item?id=1294

The challenge then, is to define this pure learning agent…

Suppose I train a regression that takes a state of the world as the input and attempts to predict the amount of utility I’d assign to that state of the world as an output. I provide labeled data in the form of (world state, utility) pairs. Things about me understanding my preferences better and establishing new preferences don’t really enter into it. The output is completely determined by the training data I provide for the regression algorithm. This is what provides clean separation. See also the concept of “complete mediation” in computer security.

It might be helpful to know the point I’m trying to make is extremely simple. Like, Netflix can’t recommend movies to me based on my Blockbuster rental history, unless Netflix’s recommendation algorithms are using Blockbuster’s rental data. This is how we can get clean separation between my Netflix recommendations and my Blockbuster recommendations.

It’s true that 3 is absolutely essential. My argument is that 3 is not something the FAI’s value module needs to forecast. It’s sufficient for the FAI to act on its current best guess about our values and stay open to the changes we make, whatever those changes may be. In my proposal, the value module also represents our desire to e.g. be able to modify the FAI—so by acting according to its current best guess about our values, the FAI remains corrigible. (To a large extent, I’m treating “learning our values” and “learning what it means to be corrigible” as essentially the same problem, to be approached in the same way.)

In my proposal, “helping the human sort out their preferences” is achieved using a specific technical criterion: Request labels for training data points which have maximal value of information. This sorts out the overseer’s preferences (insofar as they are decision-relevant) without being particularly manipulative.

As I said previously, I think it might make sense to view corrigibility learning (“never deceive anyone”) and value learning (“reduce suffering”) as manifestations of the same deep problem. That is the problem of creating powerful machine learning techniques that can make accurate generalizations and well-calibrated probabilistic judgements when given small amounts of labeled unstructured data. Once we have that, I think it’ll be easy to implement active learning in a way that works really well, and then we’ll be able to do value learning and corrigibility learning using essentially the same approach.

>Request labels for training data points which have maximal value of information.

I can see many ways this can be extremely manipulative. If you request a series of training data points whose likely result, once the human answers them, is the conclusion “the human wants me to lobotomise them into a brainless drugged pleasure maximiser and never change them again”, then your request has maximal value of information. Therefore if such a series of training data points exists, the AI will be motivated to find them—and hence manipulate the human.

If you already know how the human is going to answer, then it’s not high value of information to ask. “If you can anticipate in advance updating your belief in a particular direction, then you should just go ahead and update now. Once you know your destination, you are already there.”

Suppose it *is* high value of information for the AI to ask whether we’d like to be lobotomized drugged pleasure maximizers. In that case, it’s a perfectly reasonable thing for the AI to ask: We *would* like for the AI to request clarification if it places significant probability mass on the possibility that we assign loads of utility to being lobotomized drugged pleasure maximizers! The key question is whether the AI would optimize for asking this question in a *manipulative* way—a way designed to change our answers. An AI might do this if it’s able to anticipate the manipulative effects of its questions. Luckily, making it so the AI doesn’t anticipate the manipulative effects of its questions appears to be technically straightforward: If the scorekeeper operates by conservation of expected evidence, it can never believe any sequence of questions will modify the score of any particular scenario on average.

There are 3 cases here:

1) The AI assigns a very low probability to us desiring lobotomy. In this case, there is no problem: We don’t actually want lobotomy, and it would be very low value of information to ask about lobotomy (because the chance of a “hit”, where we say yes to lobotomy and the AI learns it can achieve lots of utility by giving us lobotomy, is quite low from the AI’s perspective).

2) The AI is fairly uncertain about whether we want lobotomy. It believes we might really want it, but we also might really *not* want it! In that case, it is high VoI to ask us about lobotomy before taking action. This is the scenario I discuss under “Smile maximization case study” in my essay. The AI may ask us about the version of lobotomy it thinks we are *most* likely to want, if that is the highest VoI thing to ask about, but that still doesn’t seem like a huge problem.

3) The AI assigns a very high probability to us desiring lobotomy and doesn’t think there’s much of a chance that we don’t want it. In that case, we have lost. The key challenge for my proposal is to figure out how to prevent the AI from entering a state where it has confident yet wildly incorrect beliefs about our preferences. From my perspective, FAI boils down to a problem of statistical epistemology.

>If you already know how the human is going to answer, then it’s not high value of information to ask.

That’s the entire problem, if “ask a human” is programmed as an endorsement of this being the right path to take, rather than as a genuine need for information.

>If the scorekeeper operates by conservation of expected evidence, it can never believe any sequence of questions will modify the score of any particular scenario on average.

That’s precisely my definition for “unriggable” learning processes, in the next post: https://www.lesswrong.com/posts/upLot6eG8cbXdKiFS/reward-function-learning-the-learning-process

That’s a link to this post, right? ;)

Ooops, yes! Sorry, for some reason, I thought this was the post on the value function.

The observation function is onto, and not one-to-one. For most states s∈S, the states s×{cook} and s×{wash} will map to the same observation.

Thanks, I’ve now corrected that.

Quotes in your comments aren’t showing up as quotes for me. Are you putting a space between the greater-than sign, and the first character of the quote?

Edit: Meant to put this under one of the comments. Didn’t think this was important enough to be top-level. Can’t move or delete though.

Edited Stuart’s comments. It did look like he just didn’t add spaces.