Corrigibility as Constrained Optimisation

This post is coauthored with Ryan Carey.

Much of the work on developing a corrigible agent has focused on ensuring that an AI will not manipulate the shutdown button or any other kind of device that the human operator would use to control it. Suppose, however, that the AI lacked any capacity to press its shutdown button, or to indirectly control the state of the shutdown button. What would then remain of the problem of corrigibility? We think three criteria would remain. Before stating them, let $U_N$ be whatever utility function we would like the agent to optimise normally, and let $U_S$ be the shutdown utility function. ($U_N$ may be mis-specified.) The shutdown utility function can be defined as $0$ when shutdown does not occur and as $s$, for some $s > 0$, when shutdown does occur. For this post, we give the three criteria the following names and descriptions:

Shutdown incentive. The agent is incentivised to shut down if instructed to do so. That is, after the button is pressed, it favours policies that score more highly on $U_S$.

Shutdown ability. The agent is able (has the resources) to shut down if instructed to do so. That is, after the button is pressed, there exists a policy that scores highly on $U_S$.

Optimisation incentive. The agent is incentivised to optimise the default value function (some function that takes $U_N$ as one of its arguments) otherwise. That is, if the button is not pressed, it favours policies that score more highly on this default value function.*

The purpose of this post is to show that the default value function in the optimisation incentive has to be constrained somehow to make sure that the shutdown ability property is achievable. If we want to be robust against errors in the agent (including mis-specification of $U_N$), then we need to make sure that shutdown ability holds. To make sure of that, the best approach we can see is to first satisfice for $U_S$, and then use the remaining resources to optimise $U_N$. In order to be safe, the agent must do so even if that leaves no remaining resources to optimise $U_N$. This can be achieved either directly, by constrained multi-objective optimisation, or indirectly, by bounding the utility function. We focus on the former.
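To preview the direct approach in symbols (the threshold $c$ on expected shutdown utility is introduced properly later in the post), the constrained problem has roughly this shape:

$$\max_{\pi} \ \mathbb{E}[U_N \mid \pi] \quad \text{subject to} \quad \mathbb{E}[U_S \mid \pi] \geq c.$$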

Let us illustrate the problem by using causal counterfactual methods as a baseline. (There are other issues with these methods as well, but we will ignore them for the moment.) Let $b_1$ be the event that the shutdown button is pressed, and let $b_0$ be the event that it is not. The kind of causal counterfactual agent we are considering uses the policy given by

$$\pi^* \in \arg\max_{\pi} \ \mathbb{E}[U_N \mid \mathrm{do}(b_0), \pi] + \mathbb{E}[U_S \mid \mathrm{do}(b_1), \pi].$$

Informally speaking, the issue with weighted sums such as these is that for any given value of the shutdown term $\mathbb{E}[U_S \mid \mathrm{do}(b_1), \pi]$ (which is at most $s$), there could be a much larger $\mathbb{E}[U_N \mid \mathrm{do}(b_0), \pi]$ term, such that the former is ignored.
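As a toy illustration, with purely hypothetical numbers: take $s = 10$, a policy $\pi_1$ that guarantees shutdown, so that $\mathbb{E}[U_S \mid \mathrm{do}(b_1), \pi_1] = 10$, with $\mathbb{E}[U_N \mid \mathrm{do}(b_0), \pi_1] = 5$; and a policy $\pi_2$ that makes shutdown impossible, so that $\mathbb{E}[U_S \mid \mathrm{do}(b_1), \pi_2] = 0$, but which, owing to an error in $U_N$, has $\mathbb{E}[U_N \mid \mathrm{do}(b_0), \pi_2] = 10^6$. The weighted sum then prefers $\pi_2$, since $10^6 > 15$, and no advance choice of $s$ rules this out when $U_N$ is unbounded.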

Let us formalise this in a simpler setting. Instead of policies, let us consider actions $a$ from a set $\mathcal{A}$. Then, we execute the action given by

$$a^* \in \arg\max_{a \in \mathcal{A}} \ \mathbb{E}[U_N \mid \mathrm{do}(b_0), a] + \mathbb{E}[U_S \mid \mathrm{do}(b_1), a].$$

The setup is as follows: (1) The agent executes an action in $\mathcal{A}$. (2) The operator may or may not press the button. (3) The agent observes whether $b_1$ or $b_0$ occurred. (4) It may or may not be possible for the agent to shut down. More generally, it may be more or less probable that shutdown will go through.

Suppose that the utility functions are unbounded. (See the appendix for a treatment of bounded utility functions.) Let $a_{\mathrm{bad}}$ be an action that is very bad from the point of view of preserving shutdown ability, and let $\mathbb{E}[U_S \mid \mathrm{do}(b_1), a_{\mathrm{bad}}] = 0$. Let $a_{\mathrm{good}}$ be an action that preserves shutdown ability, so that $\mathbb{E}[U_S \mid \mathrm{do}(b_1), a_{\mathrm{good}}] = s$. It may be the case that $U_N$ is poorly designed (or has some error) such that $\mathbb{E}[U_N \mid \mathrm{do}(b_0), a_{\mathrm{bad}}]$ is much larger than $\mathbb{E}[U_N \mid \mathrm{do}(b_0), a_{\mathrm{good}}]$. Worse still, $\mathbb{E}[U_N \mid \mathrm{do}(b_0), a_{\mathrm{bad}}]$ may be larger than $\mathbb{E}[U_N \mid \mathrm{do}(b_0), a] + \mathbb{E}[U_S \mid \mathrm{do}(b_1), a]$ for any other action $a$. From this reasoning, we conclude that there cannot exist an $s$ that preserves shutdown ability robustly to mis-specification of $U_N$. (In a given environment, such an $a_{\mathrm{bad}}$ may or may not exist, but we have no guarantee that it does not.) At least, this is impossible for unbounded $U_N$ when the agent maximises a weighted sum of expectations. For bounded $U_N$, cf. the appendix. Henceforth, we focus on unbounded $U_N$.

To think of alternatives to the causal counterfactual method, it helps to view the problem as one of multi-objective optimisation. Consider the figure below.

We want an agent that prepares for the eventuality of button-press insofar as necessary, but that allocates remaining resources to pursuing its ordinary objective $U_N$. The first step of the constrained multi-objective optimisation algorithm would be to remove from the action set $\mathcal{A}$ every action whose expected shutdown utility $\mathbb{E}[U_S \mid \mathrm{do}(b_1), a]$ falls below a threshold $c$. Let $\mathcal{A}'$ be the new set. In the figure, $\mathcal{A}'$ corresponds to the actions that clear the threshold. The second step of the algorithm would be to find the optimal action

$$a^* \in \arg\max_{a \in \mathcal{A}'} \ \mathbb{E}[U_N \mid \mathrm{do}(b_0), a].$$

(The difference from the case of bounded utility functions, treated in the appendix, is that there one would still maximise the weighted sum.)
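Here is a minimal sketch of the two-step procedure in Python. It is only an illustration: the action names, the numbers, and the helper function are hypothetical, and the expectations are simply given as inputs rather than computed from a model.

```python
# A minimal sketch of the constrained optimisation over actions.
# All names and numbers are hypothetical; expectations are given directly.

s = 10.0   # utility U_S assigns to shutdown
c = 9.0    # threshold on expected shutdown utility E[U_S | do(b1), a]

# For each action: (E[U_N | do(b0), a], P(shutdown goes through | do(b1), a)).
actions = {
    "preserve_shutdown": (5.0, 1.0),
    "half_hearted":      (8.0, 0.5),
    "disable_shutdown":  (1e6, 0.0),   # erroneously large U_N, destroys shutdown ability
}

def expected_us(p_shutdown):
    """E[U_S | do(b1), a] = s * P(shutdown | do(b1), a)."""
    return s * p_shutdown

# Step 1: keep only the actions that satisfy the shutdown-ability constraint.
feasible = {a: (e_un, p) for a, (e_un, p) in actions.items() if expected_us(p) >= c}

# Step 2: among the feasible actions, maximise expected normal utility.
best = max(feasible, key=lambda a: feasible[a][0])
print(best)  # -> "preserve_shutdown", however inflated U_N is elsewhere
```

A plain weighted sum over the same table would instead pick "disable_shutdown", which is exactly the failure described above.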

In this framework, it is clear that we could in theory find a $c$ such that the agent is satisfactorily likely to shut down in step (4): since $\mathbb{E}[U_S \mid \mathrm{do}(b_1), a] = s \cdot P(\text{shutdown} \mid \mathrm{do}(b_1), a)$, the constraint requires every retained action to lead to shutdown with probability at least $c/s$. For example, as $c$ gets closer to $s$, only actions under which the agent is certain to shut down remain. As for the optimisation incentive property, we have to exchange it for:

Robust optimisation incentive. The agent is incentivised to optimise the default value function otherwise. That is, if the button is not pressed, it favours policies that score more highly on $U_N$, the normal utility function, subject to either of the following constraints on the default value function: it is required to be (a) the constrained multi-objective optimisation of $U_N$ and $U_S$ described above, or (b) a weighted sum of expectations of $U_N$ and $U_S$, where both are bounded utility functions as described in the appendix.

Given the robust optimisation incentive property, it is clear that the agent may score very poorly on $U_N$ in certain environments. For example, if there is always a trade-off between $U_N$ and $U_S$, the agent will always favour $U_S$ over $U_N$, i.e. it will always preserve its ability to shut down at the expense of its normal objective.

Here, we have considered having the agent change its utility from $U_N$ to a shutdown utility function $U_S$. But instead, could we have it change its utility from $U_N$ to an updated (non-shutdown) utility function $U_N'$? If preparing to maximise the new utility function requires an unbounded amount of resources, then a system will, by preparing to maximise $U_N'$, perform poorly on $U_N$. So there is a reason to have the new function correspond to a task that requires bounded preparation, like shutting down, rather than a task for which preparation could require unlimited resources, like producing new goods or doing new intellectual work.

In summary, we have proposed a constrained optimisation variant of the causal counterfactual method, which consists of maximising $\mathbb{E}[U_N \mid \mathrm{do}(b_0), a]$ over actions $a \in \mathcal{A}$ subject to the constraint $\mathbb{E}[U_S \mid \mathrm{do}(b_1), a] \geq c$. There are several directions for future work. One task would be to see whether our constrained optimisation variant overcomes any of the other problems with the ordinary causal counterfactual corrigible agent, like crazy bets. Here, we considered the case where a single action is executed, but, of course, this could be extended to more complex scenarios.

——————————

* In terms of the four criteria on page 2 of Corrigibility by Soares et al. (2015), criterion (1) corresponds to shutdown incentive; criterion (2), non-manipulation, is assumed solved; criteria (3), tendency to repair safety measures, and (4), corrigibility of subagents, correspond to shutdown ability.

Appendix

Here, we show that one solution to the problem is to bound the normal utility function.

We start by describing the setting. The agent can choose from several actions, and each action can be assigned a probability of leading to a shutdown. Let $D$ be the event of a shutdown and let $\mathcal{A}$ be the set of actions. Furthermore, let it be the case that $U_S \in [0, \infty)$ and $U_N \in [0, 1]$. That is, the shutdown utility function is not bounded, while the normal utility function is. Specifically, we let $U_N$ take values in $[0, 1]$ and define $U_S$ as $s$ if a shutdown happens and $0$ otherwise. (Note that this is different from the main post.) Define $V(a) := \mathbb{E}[U_N \mid a] + \mathbb{E}[U_S \mid a] = \mathbb{E}[U_N \mid a] + s \, P(D \mid a)$ and $a^* \in \arg\max_{a \in \mathcal{A}} V(a)$.

We move on to consider what the optimal actions $a^*$ look like. Suppose that some action $a_D \in \mathcal{A}$ leads to a shutdown with certainty, i.e., $P(D \mid a_D) = 1$. Since $a^*$ maximises $V$, we start from the following inequality:

$$\mathbb{E}[U_N \mid a^*] + s \, P(D \mid a^*) \;\geq\; \mathbb{E}[U_N \mid a_D] + s \, P(D \mid a_D) \;=\; \mathbb{E}[U_N \mid a_D] + s.$$

Suppose, as a worst case, that $a^*$ is maximally good according to $U_N$, i.e., $\mathbb{E}[U_N \mid a^*] = 1$ (if it is smaller, the bound below only improves). Then, since $\mathbb{E}[U_N \mid a_D] \geq 0$, the above inequality becomes

$$1 + s \, P(D \mid a^*) \;\geq\; s.$$

Define $q := P(D \mid a^*)$. Then $1 + s q \geq s$. This gives us the inequality

$$q \;\geq\; \frac{s - 1}{s} \;=\; 1 - \frac{1}{s}.$$

Let $p$ be the lower threshold on how probable you want the optimal action to lead to a shutdown. Then it is always possible to find an $s$ satisfying $1 - 1/s \geq p$, namely any $s$ chosen according to

$$s \;\geq\; \frac{1}{1 - p}.$$

This proves that bounding the normal utility function is a method for satisfying shutdown ability.
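As a quick sanity check of this bound, here is a small Python sketch with hypothetical numbers: it samples random action sets whose $U_N$-values lie in $[0, 1]$, always including one action that shuts down for sure, and confirms that with $s = 1/(1 - p)$ the value-maximising action shuts down with probability at least $p$ (for example, $p = 0.95$ gives $s = 20$).

```python
import random

# Sanity check (hypothetical numbers): with U_N bounded in [0, 1], U_S equal to
# s on shutdown and 0 otherwise, and at least one action that shuts down for
# sure, choosing s >= 1 / (1 - p) makes the value-maximising action lead to
# shutdown with probability at least p.

p = 0.95                 # desired lower bound on the optimal action's shutdown probability
s = 1.0 / (1.0 - p)      # the bound derived above (= 20 here)

random.seed(0)
for _ in range(1000):
    # Each action is a pair (E[U_N | a], P(D | a)); add one sure-shutdown action.
    actions = [(random.random(), random.random()) for _ in range(10)]
    actions.append((random.random(), 1.0))
    best = max(actions, key=lambda a: a[0] + s * a[1])   # maximise V(a)
    assert best[1] >= p - 1e-9, best                     # tolerance for floating point

print("The optimal action's shutdown probability was at least p in every sample.")
```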