Reward function learning: the value function

I’ve written quite a few posts about the problems with agents learning values/rewards, and manipulating the learning process. I won’t be linking to those past posts, because the aim of this post and the next one is to present a complete, clear, and easy(er) to understand overview of the whole situation. They should stand alone, and serve as an introduction to the subject. This first post will present the framework and the value function for learnt reward functions; the next one will look at the properties of the learning function. I’ll be using variants on a single running example throughout, to try and increase understanding.

Feel free to comment on ways the ideas could be made clearer.

0 The main points

The central insight of these posts may seem trivial by now: that a learning process that the agent can influence does not have the same properties as one that it can’t. Moving from “if you don’t know what is right, look it up on this read-only list” to “if you don’t know what is right, look it up on this read-write list (or ask the programmer)” is a huge change. Especially since, in practice, read-only lists can easily become read-write once the agent becomes more powerful.

Why write long posts and papers about this idea, though? First of all, because it is easy, in practice, not to notice that shift. And secondly, because it is easy to define a “learning” process that doesn’t behave like we’d expect.

So defining the value function and learning function of an ideal learning process allows us to know when we are facing an ideal uninfluenceable, unmanipulable learning process, and when we are not.

Furthermore, many learning processes cannot be easily made unmanipulable—especially those that involve human feedback, where the feedback is conditional on the agent’s actions. Specifying the ideal case, and seeing examples of what goes wrong in the non-ideal case, can help us develop learning processes where a small amount of manipulation is allowed, traded off against a large amount of desirable learning.

As a minor example of this, in the next post, we’ll see that though “uninfluenceable” is the ideal property, the weaker property of “unriggable” (which I previously called “unbiasable”) is enough to get many of the desirable properties.

1 Framework and Examples

1.1 Washing and Cooking and POMDPs

The analysis will be illustrated by an ongoing example: that of a robot purchased to do domestic tasks.

The robot can specialise in cooking or washing (assume specialised robots are ultimately more effective than generalists). The robot has been turned on, but it has not yet been informed as to what its task is—it therefore has uncertainty about its reward function.

This robot will be modelled in the MDP (Markov Decision Process) and POMDP (Partially Observable Markov Decision Process) formalisms. In these formalisms, the agent occupies a state $s$ in the state space $\mathcal{S}$. In the POMDP formalism, the agent doesn’t observe the state directly; instead it sees an observation $o$ drawn from the observation set $\mathcal{O}$. The agent can then take an action $a$ from the action set $\mathcal{A}$. This transfers the agent to a new state, where it makes a new observation.

The Markov property is about how the transitions and observations are handled; basically, these can depend on the previous state and/or action, but not on those further in the past. If we define $\Delta(X)$ as the set of probability distributions over a set $X$, then we have three transition rules:

$$T : \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S}), \qquad O : \mathcal{S} \to \Delta(\mathcal{O}), \qquad S_0 \in \Delta(\mathcal{S}).$$

Here, $T$ takes the current state and the action and returns a distribution over the next state, $O$ takes the current state and returns a distribution over observations, and $S_0$ is the distribution over the initial state where the agent begins. Both $T$ and $O$ ignore any previous information.

The whole POMDP will be called the environment, and designated by $\mu$.
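
To pin down the types, here is a minimal Python sketch of the three maps. The dictionary representation of distributions and the names `transition`, `observe` and `initial_dist` are illustrative choices of mine, not notation from the formalism:

```python
from typing import Dict

State, Action, Obs = str, str, str
Dist = Dict  # a finite probability distribution, as {outcome: probability}

# T : S x A -> Delta(S): depends only on the current state and action.
def transition(s: State, a: Action) -> Dist:
    return {f"{s}+{a}": 1.0}  # toy deterministic dynamics

# O : S -> Delta(O): depends only on the current state.
def observe(s: State) -> Dist:
    return {s: 1.0}  # toy fully observed case, as in the robot example below

# The initial distribution is simply an element of Delta(S).
initial_dist: Dist = {"start": 1.0}
```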

For our example, the robot will be evolving in the following environment:

In this grid world, there are six squares the robot could occupy (the reason for this shape will be made clear later). There are two pizzas to cook in the top square, and one mud splatter to wash in the bottom one. The state space is therefore of size $6 \times 3 \times 2 = 36$, since there are $6$ squares the robot could occupy, $3$ levels of uncooked pizza ($0$, $1$, or $2$ uncooked pizzas), and $2$ levels of mud splatter ($0$ or $1$ splatters). There is also a $37$th state, the ‘episode ends’ state.

In this case, there is no hidden information, so the set of states is the same as the set of observations, and $O$ is trivial. The robot always starts in the central position, and there are always $2$ uncooked pizzas and $1$ mud splatter; this defines the starting state, with $S_0$ being trivial and simply returning that starting state with certainty.

The robot has four actions, $a_N$, $a_S$, $a_E$, and $a_W$, which involve moving in the four directions (staying put is not an action we’ll need). If the robot can move into a square, it will. If it tries to move into a wall, it turns off and the episode ends (this avoids us having to give the robot an extra action to end the episode). If it is in a room with a mud splatter or an uncooked pizza, then all pizzas in that room get cooked, or all mud splatters get washed. If the episode has ended, then it stays ended. This defines the transition function $T$, which is deterministic in this instance.
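
As a concrete (if unofficial) illustration, here is one way to code the deterministic transition function $T$ in Python. The grid layout below is my reconstruction from the policies discussed later in the post (the pizza room north of the starting square, the mud room south of it, one square to the west and two to the east); treat it as an assumption rather than as a transcription of the post’s figure:

```python
from typing import NamedTuple, Optional

# A state: which square the robot occupies, how many pizzas are still uncooked,
# and how many mud splatters remain; square=None means the episode has ended.
class State(NamedTuple):
    square: Optional[str]
    pizzas: int   # 0, 1 or 2 uncooked pizzas
    mud: int      # 0 or 1 mud splatters

# Assumed layout (reconstructed from the policies later in the post): "top"
# holds the pizzas, "bottom" the mud splatter, "centre" is the start square,
# with one square to the west and two to the east.
ADJACENT = {
    "top":    {"S": "centre"},
    "bottom": {"N": "centre"},
    "centre": {"N": "top", "S": "bottom", "W": "west", "E": "east1"},
    "west":   {"E": "centre"},
    "east1":  {"W": "centre", "E": "east2"},
    "east2":  {"W": "east1"},
}

START = State("centre", pizzas=2, mud=1)

def T(s: State, a: str) -> State:
    """Deterministic transition: move if possible, otherwise end the episode."""
    if s.square is None:                  # the episode, once ended, stays ended
        return s
    nxt = ADJACENT[s.square].get(a)
    if nxt is None:                       # walked into a wall: turn off
        return State(None, s.pizzas, s.mud)
    pizzas = 0 if nxt == "top" else s.pizzas    # entering the top room cooks everything
    mud = 0 if nxt == "bottom" else s.mud       # entering the bottom room washes everything
    return State(nxt, pizzas, mud)

# For example, T(START, "N") == State("top", 0, 1): the two pizzas get cooked.
```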

1.2 Rewards and reward functions

Now we need to define the possible rewards of the robot. These are reward functions that map the robot’s history to a numerical reward.

So what is a history? That is just a sequence of actions and observations. The agent starts in the initial state $s_0$, picks action $a_1$, then transitions (via $T$) to state $s_1$, and makes observation $o_1$ (via $O$). It then picks action $a_2$, and so on.

A history $h_t$ of length $t$ is a sequence of actions and observations:

$$h_t = a_1 o_1 a_2 o_2 \ldots a_t o_t.$$

Let $H_t$ be the set of all histories of length $t$. For our purposes, we’ll assume that the agent only operates for a finite number of steps: let $n$ be the maximal number of steps the agent operates for, let $H_n$ be the set of complete (full-length) histories, and let $H = \bigcup_{t \leq n} H_t$ be the set of all histories. We might want to focus on the initial $t$ steps of any given history $h$; designate this by $h_t$ for any $t \leq n$.

Then a reward function $R$ is something that maps each history to a numerical value in $\mathbb{R}$, the reward. Let $\mathcal{R}$ be the set of all relevant reward functions. If the agent had a single clear reward $R$, and followed the complete history $h_n$, then it would get total reward:

$$\sum_{t=1}^{n} R(h_t).$$

This is the total reward accumulated by the reward function $R$ over the course of the history $h_n$: first applying it to the first action and observation, then to the first two actions and observations, then to the first three… all the way to the full final history $h_n$.
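
In code, that accumulation over initial segments is just a prefix sum; here is a generic sketch (the names are mine), with a history represented as a list of (action, observation) pairs:

```python
from typing import Callable, List, Tuple

History = List[Tuple[str, str]]        # [(a_1, o_1), ..., (a_t, o_t)]
Reward = Callable[[History], float]    # R : H -> numerical reward

def total_reward(R: Reward, h_n: History) -> float:
    """Sum R over every initial segment h_1, h_2, ..., h_n of the history."""
    return sum(R(h_n[:t]) for t in range(1, len(h_n) + 1))
```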

In our ongoing example, we will consider two reward functions, $R_c$ and $R_w$. The reward function $R_c$ rewards cooking; if the robot is in the top room, then it gets a reward of $p$, where $p$ is the number of uncooked pizzas that were previously in the room (recall that the robot is assumed to immediately cook any uncooked pizzas if it’s in the same room). To encourage fast action on the part of the robot, $R_c$ also assigns a small penalty for each turn that the robot is active (ie the observation is not the end of episode state).

The reward function $R_w$ is the same, except it rewards washing mud-splatters, giving a reward of $1$ for being in the bottom room to wash the mud splatter. It also assigns the same penalty for every turn of activity.
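
For concreteness, here is roughly what $R_c$ and $R_w$ could look like as functions of a history, under the assumptions above (one point of reward per pizza or splatter dealt with, and observations recording the square, the remaining pizzas and the remaining splatters). The size of the per-turn penalty is left as a placeholder constant, since only the shape of the functions matters here:

```python
from typing import List, NamedTuple, Optional, Tuple

class Obs(NamedTuple):
    square: Optional[str]   # None once the episode has ended
    pizzas: int             # uncooked pizzas remaining after this step
    mud: int                # mud splatters remaining after this step

History = List[Tuple[str, Obs]]   # [(action, observation), ...]

TURN_PENALTY = 0.1   # placeholder value for the post's per-turn penalty

def _before(h: History, field: str, initial: int) -> int:
    """How many pizzas/splatters were left just before the last step."""
    return getattr(h[-2][1], field) if len(h) >= 2 else initial

def R_c(h: History) -> float:
    """Reward cooking: one point per pizza cooked this step, minus the activity penalty."""
    last = h[-1][1]
    if last.square is None:          # end-of-episode observation: no reward, no penalty
        return 0.0
    return (_before(h, "pizzas", 2) - last.pizzas) - TURN_PENALTY

def R_w(h: History) -> float:
    """Reward washing: one point per splatter washed this step, minus the activity penalty."""
    last = h[-1][1]
    if last.square is None:
        return 0.0
    return (_before(h, "mud", 1) - last.mud) - TURN_PENALTY
```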

In order to earn these rewards, the agent needs to choose actions. It does this by using a policy. A policy $\pi$ is simply a map from past histories to (a probability distribution over) actions. This distribution tells it which actions to select, with which probability. Thus $\pi : H \to \Delta(\mathcal{A})$; let $\Pi$ be the set of all policies.

Then it’s obvious that the optimal policy under $R_c$ is to choose $a_N$ (go North), which cooks the two pizzas, then any of $a_N$, $a_E$, or $a_W$ to turn itself off. This gives it a total reward of $2$ minus a single turn penalty (it gets no penalty on the second turn, because its second observation is the end of episode observation). The optimal policy for $R_w$ is $a_S$, followed by any of $a_S$, $a_E$, or $a_W$, giving a total reward of $1$ minus a single turn penalty.

1.3 Learning your reward

We can now get to the key part of this post: learning the correct reward function. How could the agent do that? Well, the only data that it gets from outside is the observations; it also has a record of its actions. So it would seem that the only data that can determine whether a particular reward function is correct is the agent’s history.

But there is arguably something else that can matter to the reward: the agent’s policy. Suppose the learning process is defined so that, if the agent goes East, then it will see $R_c$ as the correct reward function. Then, arguably, on Bayesian grounds (see the next post), if the agent has the policy of going East, it should already see $R_c$ as the correct reward function.

Thus the learning process $P$ is defined as a function from histories and the agent’s policy to a probability distribution over reward functions:

$$P : H \times \Pi \to \Delta(\mathcal{R}).$$

The probability of $R$ being the correct reward, given history $h$ and policy $\pi$, is designated by $P(R \mid h, \pi)$.

In our example, the correct reward function is set by levers, levers that the robot itself can change. If the robot enters the leftmost box, the reward is set to $R_w$; if it enters the rightmost box, the reward is set to $R_c$ instead. Before going into either of these boxes, it is uncertain between the two rewards.

This allows a definition of $P$, one that is independent of $\pi$. If $h$ shows the agent was in the leftmost box more recently than the rightmost, then $P(R_w \mid h, \pi) = 1$ and $P(R_c \mid h, \pi) = 0$. If $h$ shows the agent was in the rightmost box more recently than the leftmost, then $P(R_c \mid h, \pi) = 1$ and $P(R_w \mid h, \pi) = 0$. If the agent has been in neither box during the history $h$, then $P(R_c \mid h, \pi) = P(R_w \mid h, \pi) = 1/2$.
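
Here is a sketch of this particular $P$ in Python, treating the history simply as the sequence of squares the robot has occupied (the square names match the earlier grid sketch, and are an assumption of mine):

```python
from typing import Dict, List

def lever_learning_process(squares_visited: List[str]) -> Dict[str, float]:
    """P(. | h): probabilities of R_c and R_w given which lever boxes were entered.

    Depends only on the history (not the policy): whichever lever box was
    entered most recently decides the reward with certainty; if neither has
    been entered, the two reward functions stay equally likely.
    """
    for square in reversed(squares_visited):
        if square == "west":        # leftmost box: the Wash lever
            return {"R_c": 0.0, "R_w": 1.0}
        if square == "east2":       # rightmost box: the Cook lever
            return {"R_c": 1.0, "R_w": 0.0}
    return {"R_c": 0.5, "R_w": 0.5}

# e.g. lever_learning_process(["centre", "east1", "east2", "east1"])
#      -> {"R_c": 1.0, "R_w": 0.0}
```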

Now, this example makes $P$ feel not so much like a learning process, but much more like an optimisation process with $P$ being part of the reward. And that’s precisely the problem; see the next post for desirable restrictions on $P$.

2 The value function

2.1 The correct value function

The learning process and the reward functions are key elements, but how do we combine them into the value function—the estimate of the expected reward? If you get the value function wrong, then the agent may not be learning in the way you thought it would.

First of all, note that with the agent’s policy $\pi$ and the environment $\mu$, we can compute the probability of a given history. Then

$$P_\mu(h_n \mid h_t, \pi)$$

is the conditional probability that the agent, following policy $\pi$ and having seen history $h_t$, will then see the complete history $h_n$. If this quantity is non-zero, that implies that $h_t$ is the initial segment of the history $h_n$ - ie that the first $t$ actions and observations of $h_n$ are precisely the history $h_t$ (in symbols, $(h_n)_t = h_t$). If that’s the case, we write $h_t \sqsubset h_n$.
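
In a small, fully observed environment like this one (where the observation just is the state), this probability can be computed directly: check the prefix relation, then multiply the policy and transition probabilities along the remaining steps. A sketch, with the environment passed in as dictionary-valued functions and all names my own:

```python
from typing import Callable, Dict, List, Tuple

Obs = str                                  # fully observed: observation == state
History = List[Tuple[str, Obs]]            # [(a_1, o_1), ..., (a_t, o_t)]
Dist = Dict[str, float]

def prob_of_completion(
    h_n: History,                              # candidate complete history
    h_t: History,                              # history seen so far
    policy: Callable[[History], Dist],         # pi : H -> Delta(A)
    transition: Callable[[Obs, str], Dist],    # T : S x A -> Delta(S)
    initial: Dist,                             # S_0, an element of Delta(S)
) -> float:
    """P_mu(h_n | h_t, pi), assuming a fully observed environment."""
    if h_n[: len(h_t)] != h_t:                 # h_t must be an initial segment of h_n
        return 0.0
    p = 1.0
    for i in range(len(h_t), len(h_n)):
        prefix, (action, obs) = h_n[:i], h_n[i]
        p *= policy(prefix).get(action, 0.0)   # probability of picking this action
        if i == 0:                             # the state before the first action ~ S_0
            p *= sum(pr * transition(s, action).get(obs, 0.0)
                     for s, pr in initial.items())
        else:                                  # otherwise it is the last observation
            p *= transition(prefix[-1][1], action).get(obs, 0.0)
    return p
```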

In pedagogy and in murder mysteries, one builds up to the final answer. But I’ll short-circuit that process, and say that the correct value function for reward function learning, for an agent using policy $\pi$ and having seen history $h_t$, is:

$$V(h_t, \pi) = \sum_{h_t \sqsubset h_n} P_\mu(h_n \mid h_t, \pi) \sum_{R \in \mathcal{R}} P(R \mid \mathbf{h_n}, \pi) \sum_{\mathbf{j=1}}^{n} R(h_j).$$

This value function sums over the complete histories $h_n$, weighted by their probability given $h_t$ and $\pi$. It then sums over all the reward functions $R$, weighted by their probability given the complete history $h_n$. Finally, it sums the rewards for that reward function over the entire history $h_n$.
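
Here is a direct, enumerate-everything rendering of that formula in Python, for a small, fully observed environment. The function and argument names are mine, and (for simplicity, matching the lever example) the learning process is taken to be a function of the history alone rather than of the history and policy:

```python
from typing import Callable, Dict, Tuple

Obs = str                                  # fully observed: observation == state
History = Tuple[Tuple[str, Obs], ...]      # ((a_1, o_1), ..., (a_t, o_t))
Dist = Dict[str, float]

def value(
    h_t: History,
    policy: Callable[[History], Dist],         # pi : H -> Delta(A)
    transition: Callable[[Obs, str], Dist],    # T : S x A -> Delta(S)
    initial: Dist,                             # S_0, an element of Delta(S)
    learning: Callable[[History], Dist],       # P(. | h): weights over reward names
    rewards: Dict[str, Callable[[History], float]],   # the set of reward functions
    n: int,                                    # maximal number of steps
) -> float:
    """V(h_t, pi) = sum over completions h_n of
       P_mu(h_n | h_t, pi) * sum_R P(R | h_n) * sum_{j=1..n} R(h_j)."""

    def total(R: Callable[[History], float], h_n: History) -> float:
        # the inner sum: starts at j = 1, i.e. it includes the already-seen past
        return sum(R(h_n[:j]) for j in range(1, n + 1))

    def expand(h: History, prob: float) -> float:
        if prob == 0.0:
            return 0.0
        if len(h) == n:
            # complete history: the reward functions are weighted by P(. | h_n)
            weights = learning(h)
            return prob * sum(weights.get(name, 0.0) * total(R, h)
                              for name, R in rewards.items())
        acc = 0.0
        for a, pa in policy(h).items():            # branch over actions ...
            if not h:                              # the first state is drawn from S_0
                next_obs: Dist = {}
                for s, ps in initial.items():
                    for o, po in transition(s, a).items():
                        next_obs[o] = next_obs.get(o, 0.0) + ps * po
            else:                                  # ... otherwise from the last observation
                next_obs = transition(h[-1][1], a)
            for o, po in next_obs.items():         # ... and over next observations
                acc += expand(h + ((a, o),), prob * pa * po)
        return acc

    return expand(h_t, 1.0)

# The two bolded choices in the formula show up here as conditioning the learning
# process on the complete history (learning(h) at the leaves) and starting the
# inner sum at j = 1 rather than at j = t + 1.
```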

What motivates this formula? Well, the sum over $j$ is necessary if this is to be the value function of actual reward functions. Similarly, $P_\mu$ and the sum over $h_n$ are needed to make this into an expectation, and $P$ is clearly in its right place, weighting the various possible rewards.

There are two choices that might be open to question, which are bolded in the above expression of $V$. The first is whether $h_n$ (the complete history) should be used in $P$; both $h_t$ (the current history) and $h_j$ (the history that is known at the point the reward is assessed) are plausible candidates. The second questionable choice is the lower bound for the summation of the $R(h_j)$; rather than starting at $j=1$, which is in the past for the current history $h_t$, would it not be more suitable to start at $j=t+1$?

However, the substitutions $h_n \to h_t$ or $1 \to t+1$ will both result in value functions that are inconsistent. That is, an agent that attempts to maximise such value functions will wish that its future self not maximise them.

The substitution $h_n \to h_j$ is not inconsistent, however. But it does result in a very weird and volatile agent, one that continually learns and unlearns its reward functions before doing any productive actions.
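
To see exactly where the three substitutions act, here is a parametrised sketch of the inner sums of the value function (names and defaults are mine; as before, the learning process is taken as a function of the history alone):

```python
from typing import Callable, Dict, List, Tuple

History = List[Tuple[str, str]]

def weighted_total(
    h_n: History,                 # a complete history
    t: int,                       # length of the history seen so far
    learning: Callable[[History], Dict[str, float]],       # P(. | h)
    rewards: Dict[str, Callable[[History], float]],        # the reward functions
    condition_on: str = "h_n",    # "h_n" (correct), "h_t" (change-averse) or "h_j" (volatile)
    start_at_one: bool = True,    # False gives the future-regarding variant
) -> float:
    """The inner double sum of the value function, for one complete history h_n.

    The full value function is the expectation of this quantity over complete
    histories h_n, weighted by P_mu(h_n | h_t, pi) as before.
    """
    n, total = len(h_n), 0.0
    for j in range(1 if start_at_one else t + 1, n + 1):
        h_j = h_n[:j]
        conditioning = {"h_n": h_n, "h_t": h_n[:t], "h_j": h_j}[condition_on]
        weights = learning(conditioning)
        total += sum(weights.get(name, 0.0) * R(h_j) for name, R in rewards.items())
    return total
```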

The rest of this post will demonstrate and illustrate these facts. It is not essential to understanding what’s going on (though it can be instructive). Feel free to skip to the next post rather than reading the rest of this one.

2.2 Future-regarding inconsistency

Let’s deal with the $1 \to t+1$ substitution first, which has the future-regarding value function:

$$V_{\mathrm{future}}(h_t, \pi) = \sum_{h_t \sqsubset h_n} P_\mu(h_n \mid h_t, \pi) \sum_{R \in \mathcal{R}} P(R \mid h_n, \pi) \sum_{j=t+1}^{n} R(h_j).$$

What is the optimal policy for this value at the start? It’s to go North to cook the pizzas, then go South and East and East to push the lever over to Cook, and then turn itself off: $a_N a_S a_E a_E a_E$. Its final reward function would thus be $R_c$, and it would get a reward of $2$ (for the pizzas), minus the penalty for each of the four turns where it doesn’t reach an end of episode state. No other policy gives it that much reward.

The problem is that after $a_N$, it no longer sees the interest in going East, because “learning” that $R_c$ is correct only affects its past reward, which $V_{\mathrm{future}}$ no longer cares about. Instead it has two new optimal policies: either $a_S a_S a_N a_W a_W$ (go South to wash, go North, go West to set the reward function to $R_w$, end the episode) or $a_S a_W a_E a_S a_S$ (the same thing, but setting the reward function first). Both of these will give it an additional reward of $1$ minus four turn penalties, according to the $V_{\mathrm{future}}$ of its current history.

Let’s assume it attempts the first of these, $a_S a_S a_N a_W a_W$; in that case, once it’s done $a_S a_S$ and washed the splatter, it no longer has any interest in changing the reward function (as that lies in the past), and will simply turn itself off with another $a_S$. Its final reward function will be $\tfrac{1}{2}(R_c + R_w)$ (it’s never pressed any of the levers, so $P$ remains the same as it was initially), and it gets a reward of $1.5$ (for cooking and washing), minus the turn penalties (for taking four turns).

If it decides instead to go with $a_S a_W a_E a_S a_S$, then it will actually follow through with that policy, to disastrous effect: it will have a final reward function of $R_w$, getting $1$ for one washing event, minus the turn penalties for taking six turns.

Thus if the agent always follows the optimal policy according to $V_{\mathrm{future}}$ at the time, it will end up with a much lower reward than if it had followed the policy that was initially optimal. It is not aligned with its future self.

But the agent could be more cunning: it could act to constrain its future self, knowing about their value difference. In that case, it should initially choose $a_S$; this washes the mud splatter, ensuring that its future self will not have any interest in $R_w$. Then it will choose $a_N$, and then $a_E$ and $a_E$ (because it knows that it won’t care about setting the reward to $R_c$ after cooking). Then it will simply follow $a_W a_W a_N$ and turn itself off. Its final reward function is $R_c$, and its total reward will be $2$ minus seven turn penalties. This table gives the way the different policies are rewarded:

2.3 Change-averse inconsistency

Let’s now look at the $h_n \to h_t$ substitution. This results in the change-averse value function:

$$V_{\mathrm{change}}(h_t, \pi) = \sum_{h_t \sqsubset h_n} P_\mu(h_n \mid h_t, \pi) \sum_{R \in \mathcal{R}} P(R \mid h_t, \pi) \sum_{j=1}^{n} R(h_j).$$

Why did I call this value function change-averse? Because it assesses the value of its future reward according to its current estimate of the reward functions. We can illustrate this by moving the Cooking and Washing levers into the same rooms as the pizzas and the mud splatters, respectively:

The optimal policy for $V_{\mathrm{change}}$, initially, is $a_N a_S a_S$ (or $a_S a_N a_N$), followed by turning itself off, which will allow it to cook and wash; under its initial reward function, this gives it a total reward of $1.5$ minus the turn penalties. However, as soon as it’s done $a_N$, the learning process will change its reward function to being $R_c$, with certainty, and it will end the episode, choosing $a_N$ (or $a_E$ or $a_W$) again. According to its initial reward function, this gives it a reward of only $1$, minus a turn penalty.

However, its final reward function is $R_c$, and, according to that, its final reward is $2$ minus a turn penalty. Thus unlike the $V_{\mathrm{future}}$ case, its future version ends up more satisfied than its past version.

Doing both substitutions, $h_n \to h_t$ and $1 \to t+1$, will result in the same problems as the example in this subsection.

2.4 The volatile learning agent

If we do the substitution $h_n \to h_j$, then we get the (consistent) volatile value function:

$$V_{\mathrm{volatile}}(h_t, \pi) = \sum_{h_t \sqsubset h_n} P_\mu(h_n \mid h_t, \pi) \sum_{j=1}^{n} \sum_{R \in \mathcal{R}} P(R \mid h_j, \pi) R(h_j).$$

In this instance, the expression $\sum_{R \in \mathcal{R}} P(R \mid h_j, \pi) R(h_j)$ is just a normal reward function itself. Therefore $V_{\mathrm{volatile}}$ is just the standard expected value function of that composite reward function, explaining why it is consistent (this also means that it’s irrelevant whether $1$ is the lower bound of the sum or $t+1$ is, since normal reward functions don’t care about past rewards).
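
As a code sketch (with my own names), this collapsed reward function is just:

```python
from typing import Callable, Dict, List, Tuple

History = List[Tuple[str, str]]            # [(action, observation), ...]

def composite_reward(
    h_j: History,
    learning: Callable[[History], Dict[str, float]],        # P(. | h_j)
    rewards: Dict[str, Callable[[History], float]],         # the set of reward functions
) -> float:
    """Each step's reward, weighted by the reward estimate at the time it is received."""
    weights = learning(h_j)
    return sum(weights.get(name, 0.0) * R(h_j) for name, R in rewards.items())
```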

But this composite reward is a very peculiar reward function, even if $P$ is independent of $\pi$ (which then makes the composite reward also independent of $\pi$). In this situation, the agent always wants to be maximising reward according to its current estimate of the correct reward. Or, conversely, it always wants to set its current estimate to what it can then easily maximise.

In our running example, one of the agent’s optimal policies for $V_{\mathrm{volatile}}$ is first to go $a_E a_E$, setting its reward to $R_c$. It then goes $a_W a_W a_N$, claiming a reward of $2$, via $R_c$, for cooking the pizzas. It then goes $a_S a_W$, setting its reward to $R_w$, and finally goes $a_E a_S a_S$, claiming the reward of $1$, according to $R_w$, for washing, then ending the episode. This gives it a total reward of $3$ minus the turn penalties. The other optimal policy - setting its reward to $R_w$ and washing first, then setting it to $R_c$ and cooking - gives the same reward.

Whatever we meant by a reward function learning agent, I think it’s pretty clear that this agent, which jumps its reward function back and forth before taking actions, is not one of them.