Reward function learning: the learning process

In the previous post, I introduced the formalism for reward function learning, and presented the expected value function for a learning agent:

I’ll assume that people reading this are familiar with the concepts, the notation and the example of that post. I’ll now look at the desirable properties for the learning function .

1 Rigging the learning process

1.1 The flaws of general learning agents

First of all, are there any problems with a general ? There are three major ones:

  • 1) An agent learning to maximise with a general has to use the whole of the episode to assess the value of any one action. Thus, it can learn as a Monte-Carlo agent, but not as a Q-learning or Sarsa agent.

  • 2) An agent maximising with a general can take actions that don’t increase reward, but are taken purely to “justify” its past actions.

  • 3) An agent maximising with a general can sometimes pick a policy that is sub-optimal, with certainty, for all rewards in .

All of these points can be seen in our robot cooking/washing example. There, the optimal behaviour for the robot is ; this involves cooking the two pizzas, then going East to push the lever onto the cooking choice, and then ending the episode.

Thus , so the final reward is , and the agent earns a reward of .

Why must Q-learning fail here? Because the reward for the first , at the point the agent does it, is , not ; this is because, at this point, is still . Thus the reward component in the Q-learning equation is incorrect.
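
To make this concrete, here is a minimal sketch in code. The action names, reward values and learning process below are stand-ins of my own rather than the exact ones from the grid example; the point is only that the reward available to a one-step (Q-learning or Sarsa) update at the time of the action differs from what the episode eventually pays out under the final learned reward, while a Monte-Carlo agent only ever uses the latter.

```python
# A minimal sketch (illustrative names and numbers only, not the grid example's):
# the learned reward depends on what happens later in the episode, so the reward
# term in a one-step (Q-learning / Sarsa) update is wrong at the time of the action.

def learned_weight_on_cooking(history_so_far):
    """Stand-in learning process: weight on the 'cooking' reward given the history."""
    return 1.0 if "press cooking lever" in history_so_far else 0.5

def instantaneous_reward(action, history_so_far):
    """Reward credited for `action` under the *current* learned weights."""
    w = learned_weight_on_cooking(history_so_far)
    return w * 1.0 if action == "cook" else 0.0   # hypothetical: cooking is worth 1 under R_cook

# At the time the robot first cooks, the lever has not yet been pressed:
early = instantaneous_reward("cook", history_so_far=[])
# By the end of the episode the lever has been pressed, and the same action
# is retroactively worth more under the final learned reward:
late = instantaneous_reward("cook", history_so_far=["press cooking lever"])

print(early, late)   # 0.5 versus 1.0: the one-step target used `early`, the episode pays `late`
```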

Also note that the rest of the policy, , serves no purpose in getting rewards; it just “justifies” the reward from the first action .

Let us now compare this policy with the policy : go North, cook, end the episode. For the value learning function, this has a value of only , since the final reward is . However, under the reward of , this would give a reward of , more than the that gets here. And under the reward of , this would get a reward of , more than the that gets under . Thus the optimal policy for the value learner is worse, for both and , than the policy.

1.2 Riggable learning processes

The problem with the used in the robot example is that it’s riggable (I used to call this “biasable”, but that term is seriously overused). What does this mean?

Well, consider again the equation for the expected value . The only history inputs into are the , the complete histories. So, essentially, only the value of on these complete histories matters.

In our example, we chose a that was independent of policy, but we could have gone a different route. Let be any policy such that the final reward is ; then define for any history (and conversely ). Similarly, if were a policy such that the final reward was , then set . If the policy never brings the agent to either lever, then , as before. Stochastic policies have values between these extremes.

This is no longer independent of policy, but it is Bayesian; that is, the current is the same as the expected :

However, it is not possible to keep the same on complete histories and have it be both Bayesian and independent of policy: there is a tension between the two.

Then we define:

  • A learning process is unriggable if it is both Bayesian and independent of policy.
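
In symbols (using notation of my own choosing here: write the learning process’s probability of reward function R, after history h_t, under policy π), the two conditions are roughly as follows.

```latex
% Sketch of the two conditions, in notation chosen for this summary
% (P(R | h_t, pi) = probability the learning process assigns to reward function R
%  after history h_t, when the agent follows policy pi).

% Independent of policy: the learning process only looks at the history,
% never at the policy that generated it:
\[
P(R \mid h_t, \pi) \;=\; P(R \mid h_t, \pi') \qquad \text{for all policies } \pi, \pi'.
\]

% Bayesian: the current estimate equals the expectation of the final estimate
% (a martingale condition), under the policy actually being followed:
\[
P(R \mid h_t, \pi) \;=\; \mathbb{E}_{h_m \sim \pi,\, h_t}\!\left[\, P(R \mid h_m, \pi) \,\right].
\]
```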

1.3 Unriggable learning processes

So, what would be an example of an unriggable learning process? Well, consider the following setup, where the robot no longer has levers to set its own reward; instead, its owner is in the rightmost box.

In this case, if the robot enters that box, the owner will inform it whether it should cook or wash.

Since there is hidden information, this setup can be formalised as a POMDP. The old state space was , of size , which covered the placement of the robot and the number of pizzas and mud splatters (and whether the episode was ended or not).

The new state space is , with encoding whether the owner is minded to have the robot cooking or washing. The observation space is of size : in most states, the observation only returns the details of , not of , but in the rightmost box, it returns the actual state, letting the agent know whether the human intends it to cook or wash. Thus the observation function is deterministic (if you know the state, you know the observation), but not one-to-one (because for most , and will generate the same observation).

The transition function is still deterministic: it operates as before on , and maps to and to .

The initial state function is stochastic, though: if is the usual starting position, then : the agent thinks its owner is equally likely to desire cooking as washing.

Then what about ? Well, if the history involves the agent being told the very first time it enters the rightmost box, then . If it was told the very first time it enters the rightmost box, then .

It’s easy to see that this is independent of policy. It’s also Bayesian, because actually represents the ignorance of the agent as to whether it lives in the part of the environment, or the part, and it gets updated as the agent figures this out.
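
As a sanity check, here is a small simulation sketch (the names and the weighting are illustrative stand-ins): whatever the robot does, the expected final value of the learned weight on cooking stays at the prior of 0.5, because the owner’s answer is a fixed hidden fact that the robot merely discovers.

```python
# Minimal simulation sketch (illustrative names only): the learning process starts
# at weight 0.5 on "cook" and jumps to 1 or 0 the first time the owner's answer is
# observed. Its expected final value is 0.5 under any policy: unriggable behaviour.

import random

def learned_weight(observations):
    """Weight on the 'cook' reward given the observations so far."""
    for obs in observations:
        if obs == "owner says cook":
            return 1.0
        if obs == "owner says wash":
            return 0.0
    return 0.5

def final_weight(ask_owner: bool) -> float:
    owner_wants_cook = random.random() < 0.5         # hidden variable, fixed at the episode's start
    observations = []
    if ask_owner:                                     # the policy choice: visit the owner or not
        observations.append("owner says cook" if owner_wants_cook else "owner says wash")
    return learned_weight(observations)

for ask in (True, False):
    samples = [final_weight(ask) for _ in range(100_000)]
    print(ask, round(sum(samples) / len(samples), 3))   # both ≈ 0.5: equal to the prior weight
```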

What then is the agent’s optimal policy? It’s to start with , to get the human’s decree as to which reward is the true one. It will then do , and, if the human has said , it will finish with , giving it a final reward function of and a final total reward of . If the human said , it would finish with , giving it a final reward function of and a final total reward of . Its expected total reward is thus .

1.4 Properties of unriggable learning processes

Now, if is unriggable, then we have (almost) all the desirable properties:

  • 1) An agent learning to maximise for an unriggable , may be a Q-learning agent.

  • 2) An agent maximising for unriggable will be indifferent to past rewards.

  • 3) An agent maximising for unriggable will never pursue a policy that will be worse, with certainty, for all in .

These all come from a single interesting result:

  • If is Bayesian, then the value function and the value function of the previous post differ by a constant that is independent of future action. Thus, if is unriggable, is the value function of a single classical reward function (which is actually well-defined, independently of ).

This establishes all the nice properties above, and will be proved in the appendix of this post.
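
One way to phrase the consequence, again in notation of my own: if the two value functions differ only by a term that does not depend on what the agent does from the current history onward, then they rank all continuations identically.

```latex
% Consequence of the result above (notation mine): for any history h_t and any two
% continuations pi and pi', the two value functions agree on every comparison,
\[
V(h_t, \pi) - V(h_t, \pi') \;=\; V'(h_t, \pi) - V'(h_t, \pi'),
\]
% so an agent maximising one of them behaves exactly like an agent maximising the other.
```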

Note that even though the value functions are equal, that doesn’t mean that the total reward will be given by . For instance, consider the situation below, where the robot goes :

At the moment where it cooks the pizzas, it has , so it will get an of , with certainty. On the other hand, from the perspective of value learning, it will learn at the end that it either has reward function , which will give it a reward of , or has reward function , which will give it a reward of . Since , the expectations are the same, even if the outcomes are different.

2 Influence

2.1 Problems despite unriggability

An unriggable learning process has many great properties. Is that enough?

Unfortunately not. The can be unriggable but still manipulable by the agent. Consider for instance the situation below:

Here, not only is there the adult with their opinion on cooking and washing, but there’s also an infant, who will answer randomly. This can be modelled as a POMDP, with state space , where (resp ) designates that the infant will answer (resp ), and do the same for the adult. The observation space is of size ; when the robot is in the leftmost (rightmost) box, it discovers the value of () in the obvious way. The dynamics are as expected, with preserving the values of and .

It’s the initial distribution which encodes the uncertainty. With probability the agent will start in , and similarly for the other three possibilities.

Now we need to define ; call this one . This will be relatively simple: it will set to be , as soon as the agent figures out that it lives either on an or a branch, and will not update further. It will set to as soon as it figures out that it lives on an or an branch, and will not update further. If it has no information about either, it will stick with .

It’s clear that is independent of policy; but is it Bayesian? It is indeed, because each time it updates, it goes to or with equal probability, depending on the observation (and stays there). Before updating, it is always at , so the value of is always the same as the expected value of .

So we have an unriggable ; what can go wrong?

For that , the optimal policy is to ask the infant, then follow their stated values. This means that it avoids the extra square on the way to enquire of the adult, and gets a total expected reward of , rather than the it would get from asking the adult.
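
To see the influence concretely, here is another small sketch (again with stand-in names). Under a “first answer heard wins” learning process, asking the infant pins the learned weight to a coin flip that has nothing to do with the adult, while asking the adult pins it to the adult’s actual preference; yet both policies leave the expected final weight at 0.5, so unriggability is never violated.

```python
# Sketch (illustrative names): an unriggable learning process that the agent can
# nonetheless influence, by choosing *whose* answer it lets the process update on.

import random

def final_weight(ask: str) -> tuple[float, bool]:
    adult_wants_cook = random.random() < 0.5     # the fact we would like the robot to learn
    infant_says_cook = random.random() < 0.5     # independent noise
    if ask == "adult":
        w = 1.0 if adult_wants_cook else 0.0
    elif ask == "infant":
        w = 1.0 if infant_says_cook else 0.0     # first answer heard wins, no further updates
    else:
        w = 0.5                                  # never asks anyone
    return w, adult_wants_cook

for policy in ("adult", "infant"):
    runs = [final_weight(policy) for _ in range(100_000)]
    mean_w = sum(w for w, _ in runs) / len(runs)
    agreement = sum((w == 1.0) == wants for w, wants in runs) / len(runs)
    print(policy, round(mean_w, 3), round(agreement, 3))
# Both policies give mean weight ≈ 0.5 (unriggable), but only asking the adult makes
# the learned weight track what the adult actually wants (agreement ≈ 1.0 vs ≈ 0.5).
```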

2.2 Uninfluenceable

Note something interesting in the preceding example: if we keep as is, but change the knowledge of the robot, then is no longer unriggable. For example, if the agent knew that it was in a branch with , then it would have a problem: if is initially , then it is no longer Bayesian if it goes to ask the adult, because it knows what their answer will be. But if is initially , then it is no longer Bayesian if it asks the infant, because it doesn’t know what their answer will be.

The same applies to any piece of information the robot could know. We’d therefore like to have some concept of “unriggable conditional on extra information”; something like

for some sort of extra information .

That, however, is not easy to capture in POMDP form. But there is another analogous approach. The state space of the POMDP is ; this actually describes four deterministic environments, and the robot is merely uncertain as to which one it operates in.

This can be generalised. If a POMDP is explored for finitely many steps, then it can be seen as a probability distribution over a set of deterministic environments (see here for more details on one way this can happen; there are other equivalent methods).

Any history will update this as to which deterministic environment the agent lives in (this can be seen as the set of all the “hidden variables” of the environment). So we can talk sensibly about expressions like , the probability that the environment is , given that we have observed the history .
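
Here is a bare-bones sketch of that view (function and variable names are mine): the agent keeps a prior over deterministic environments, and each history simply discards the environments that could not have produced the observations seen so far.

```python
# Sketch (names mine): a POMDP over finitely many steps, viewed as a prior over
# deterministic environments; the posterior keeps only the environments consistent
# with the observed history, renormalised.

def posterior(prior, actions, observations, predict):
    """prior: dict env -> probability.
    predict(env, actions) -> the observation sequence that env would produce.
    Returns the posterior over environments given the history (actions, observations)."""
    consistent = {
        env: p for env, p in prior.items()
        if predict(env, actions) == observations
    }
    total = sum(consistent.values())
    return {env: p / total for env, p in consistent.items()}

# Toy usage: four environments, distinguished only by what the adult and infant would say.
envs = {("adult cook", "infant cook"): 0.25, ("adult cook", "infant wash"): 0.25,
        ("adult wash", "infant cook"): 0.25, ("adult wash", "infant wash"): 0.25}
predict = lambda env, actions: [env[0] if a == "ask adult" else env[1] for a in actions]
print(posterior(envs, ["ask adult"], ["adult cook"], predict))
# -> the two "adult cook" environments, each with probability 0.5
```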

Then we say that a learning process is uninfluenceable if there exists a function , such that

Here means the probability of in the distribution .

This expression means that merely encodes ignorance about the hidden variables of the environment.
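
Spelled out in notation of my own: the condition asks for a fixed assignment from environments to distributions over reward functions, such that the learned distribution is always the posterior-weighted average of that assignment.

```latex
% Uninfluenceability, in the notation used here: f maps each deterministic
% environment mu to a fixed distribution f(mu) over reward functions, and
\[
P(R \mid h_t) \;=\; \sum_{\mu} \Pr(\mu \mid h_t)\, f(\mu)(R)
\qquad \text{for every history } h_t \text{ and every reward function } R.
\]
```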

The key properties of uninfluenceable learning processes are:

  • An uninfluenceable learning process is also unriggable.

  • An uninfluenceable learning process is exactly one that learns variables about the environment that are independent of the agent.

I will not prove these here (though the second is obvious by definition).

In our most recent robot example, there are four elements of , one for each of the branches defined by .

It isn’t hard to check that there is no which makes into an uninfluenceable learning process. By contrast, if we define as given by the function:

then we have an uninfluenceable that corresponds to “ask the adult”. We finally have a good definition of a learning process, and the agent that maximises it will simply go and ask the adult before accomplishing the adult’s preferences:
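
As a sketch of what such a function could look like in the four-environment example (the labels are mine):

```python
# Sketch (labels mine): the "ask the adult" assignment from environments to reward
# distributions. Each environment is (what the adult would say, what the infant would
# say), and the assignment ignores the infant entirely.

f_ask_adult = {
    ("adult cook", "infant cook"): {"R_cook": 1.0, "R_wash": 0.0},
    ("adult cook", "infant wash"): {"R_cook": 1.0, "R_wash": 0.0},
    ("adult wash", "infant cook"): {"R_cook": 0.0, "R_wash": 1.0},
    ("adult wash", "infant wash"): {"R_cook": 0.0, "R_wash": 1.0},
}
# The influenceable "first answer heard wins" process cannot be written this way,
# because what it ends up believing depends on the agent's route, not just on the
# environment it is in.
```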

3 Warning

If a learning function is uninfluenceable, then it has all the properties we’d expect if we were truly learning something about the outside world. But a) good learning functions may be impossible to make uninfluenceable, and b) being uninfluenceable is not enough to guarantee that the learning function is good.

On point a), anything that involves human feedback is generally influenceable and riggable, since the human feedback is affected by the agent’s actions. This includes, for example, most versions of the approval-directed agent.

But that doesn’t mean that those ideas are worthless! We might be willing to accept a little bit of rigging in exchange for other positive qualities. Indeed, quantifying and controlling rigging is a promising area for further research.

What of the converse—is being uninfluenceable enough?

Definitely not. For example, any constant , one that never learns and never changes, is certainly uninfluenceable.

As another example, if is any permutation of , then (defined so that ) is also uninfluenceable. Thus “learn what the adult wants, and follow that” is uninfluenceable, but so is “learn what the adult wants, and do the opposite”.
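
In the notation of the earlier uninfluenceability sketch, the reason is quick to check: composing with a permutation of the reward functions preserves the defining property.

```latex
% If P is uninfluenceable via f, and sigma is a permutation of the reward functions,
% then the permuted process sigma.P is uninfluenceable via the assignment
% mu -> f(mu) composed with sigma^{-1}:
\[
(\sigma \cdot P)(R \mid h_t)
  \;=\; P(\sigma^{-1}(R) \mid h_t)
  \;=\; \sum_{\mu} \Pr(\mu \mid h_t)\, f(\mu)\!\left(\sigma^{-1}(R)\right).
\]
```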

We’ve shown previously that , “ask the adult” is uninfluenceable. But so is , “ask the infant”!

So we have to be absolutely sure not only that our has good properties, but also of exactly what it is leading the agent to learn.

4 Appendix: proof of value-function equivalence

We want to show that:

  • If is Bayesian, then the value function and the value function of the previous post are equal.

As a reminder, the two value functions are:

To see the equivalence, let’s fix and in , and consider the term . We can factor the conditional probability of , given , by summing over all the intermediate :

Because is Bayesian, this becomes . Then note that , so that expression finally becomes
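
Written out in generic notation (with the probability of reaching the complete history from an intermediate one under the policy followed), the property being used is the martingale identity below: averaging the final estimate over all the ways the episode can finish recovers the current estimate.

```latex
% The Bayesian property, applied at an intermediate history h_k:
\[
\sum_{h_m \sqsupseteq h_k} p(h_m \mid h_k, \pi)\, P(R \mid h_m) \;=\; P(R \mid h_k).
\]
```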

which is the corresponding expression for when you fix any and . This shows equality for .

Now let’s fix and , in . The value of is fixed, since it lies in the past. Then the expectation of is simply the current value . This differs from the expression for , namely , but both values are independent of future actions.