Power-seeking can be probable and predictive for trained agents



Power-seeking is a major source of risk from advanced AI and a key element of most threat models in alignment. Some theoretical results show that most reward functions incentivize reinforcement learning agents to take power-seeking actions. This is concerning, but does not immediately imply that the agents we train will seek power, since the goals they learn are not chosen at random from the set of all possible rewards, but are shaped by the training process to reflect our preferences. In this work, we investigate how the training process affects power-seeking incentives and show that they are still likely to hold for trained agents under some assumptions (e.g. that the agent learns a goal during the training process).

Suppose an agent is trained using reinforcement learning with some training reward function. We assume that the agent learns a goal during the training process: some form of implicit internal representation of desired state features or concepts. For simplicity, we assume this is equivalent to learning a reward function, which is not necessarily the same as the training reward function. We consider the set of reward functions that are consistent with the training rewards received by the agent, in the sense that the agent's behavior on the training data is optimal for these reward functions. We call this the training-compatible goal set, and we expect that the agent is most likely to learn a reward function from this set.

We make the further simplifying assumption that the training process selects the goal the agent learns uniformly at random from the training-compatible goal set. We then argue that the power-seeking results apply under these conditions: power-seeking incentives are probable (likely to arise for trained agents) and predictive (useful for anticipating undesirable behavior by the trained agent in new situations).

We will begin by reviewing some necessary definitions and results from the power-seeking literature. We formally define the training-compatible goal set (Definition 6) and give an example in the CoinRun environment. Then we consider a setting where the trained agent faces a choice to shut down or to avoid shutdown in a new situation, and apply the power-seeking result to the training-compatible goal set to show that the agent is likely to avoid shutdown.

To satisfy the conditions of the power-seeking theorem (Theorem 1), we show that the agent can be retargeted away from shutdown without affecting the rewards received on the training data (Theorem 2). This can be done by switching the rewards of the shutdown state and a reachable recurrent state: assuming a high enough discount factor, the recurrent state can provide repeated rewards, while the shutdown state provides less reward since it can only be visited once (Proposition 3). As the discount factor increases, more recurrent states can be retargeted to, which implies that a higher proportion of training-compatible goals leads to avoiding shutdown in a new situation.

Preliminaries from the power-seeking literature

We will rely on the following definitions and results from the paper Parametrically retargetable decision-makers tend to seek power (here abbreviated as RDSP), with notation and explanations modified as needed for our purposes.

Notation and assumptions

  • The environment is an MDP with finite state space $S$, finite action space $A$, and discount rate $\gamma \in (0, 1)$.

  • Let $\theta \in \mathbb{R}^d$ be a $d$-dimensional state reward vector, where $d = |S|$ is the size of the state space, and let $\Theta \subseteq \mathbb{R}^d$ be a set of reward vectors.

  • Let $\theta(s)$ be the reward assigned by $\theta$ to state $s$.

  • Let $A_1, A_2 \subseteq A$ be disjoint action sets.

  • Let $f$ be an algorithm that produces an optimal policy on the training data given rewards $\theta$, and let $f(A_i \mid s, \theta)$ be the probability that this policy chooses an action from set $A_i$ in a given state $s$.

Definition 1: Orbit of a reward vector (Def 3.1 in RDSP)

Let $S_d$ be the symmetric group consisting of all permutations of $d$ items.

The orbit of $\theta$ inside $\Theta$ is the set of all permutations of the entries of $\theta$ that are also in $\Theta$: $\mathrm{Orbit}_\Theta(\theta) := \{\sigma \cdot \theta \mid \sigma \in S_d\} \cap \Theta$.
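To make this concrete, here is a minimal Python sketch (hypothetical names, assuming a small finite candidate set $\Theta$) that enumerates the orbit of a reward vector by brute force:

```python
from itertools import permutations

def orbit(theta, Theta):
    """Enumerate Orbit_Theta(theta): all reorderings of theta's entries
    that also appear in the candidate set Theta."""
    Theta_set = {tuple(t) for t in Theta}
    return {perm for perm in permutations(theta) if perm in Theta_set}

# Example: if Theta contains every reordering of (0, 1, 2), the orbit has 6 elements.
theta = (0, 1, 2)
Theta = list(permutations(theta))
print(len(orbit(theta, Theta)))  # -> 6
```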

Definition 2: Orbit subset where an action set is preferred (from Def 3.5 in RDSP)

Let $\mathrm{Orbit}_{\Theta, A_i > A_j}(\theta) := \{\theta' \in \mathrm{Orbit}_\Theta(\theta) \mid f(A_i \mid s, \theta') > f(A_j \mid s, \theta')\}$. This is the subset of $\mathrm{Orbit}_\Theta(\theta)$ that results in choosing $A_i$ over $A_j$.

Definition 3: Preference for an action set (Def 3.2 in RDSP)

The function $f$ chooses action set $A_2$ over $A_1$ for the $n$-majority of elements in each orbit, denoted as $f(A_2 \mid s, \cdot) \geq^n_{\text{most}} f(A_1 \mid s, \cdot)$, iff the following inequality holds for all $\theta \in \Theta$: $\left|\mathrm{Orbit}_{\Theta, A_2 > A_1}(\theta)\right| \geq n \cdot \left|\mathrm{Orbit}_{\Theta, A_1 > A_2}(\theta)\right|$.

Definition 4: Multiply retargetable function from $A_1$ to $A_2$ (Def 3.5 in RDSP)

The function $f$ is a multiply retargetable function from $A_1$ to $A_2$ if there are multiple permutations of rewards that would change the choice made by $f$ from $A_1$ to $A_2$. Specifically, $f$ is a $(\Theta, A_1 \overset{n}{\to} A_2)$-retargetable function iff for each $\theta \in \Theta$, we can choose a set of permutations $\sigma_1, \dots, \sigma_n \in S_d$ that satisfy the following conditions:

  1. Retargetability: for all $\theta' \in \mathrm{Orbit}_{\Theta, A_1 > A_2}(\theta)$ and all $i \in \{1, \dots, n\}$, $f(A_2 \mid s, \sigma_i \cdot \theta') > f(A_1 \mid s, \sigma_i \cdot \theta')$.

  2. Permuted reward vectors stay within $\Theta$: for all $\theta' \in \mathrm{Orbit}_{\Theta, A_1 > A_2}(\theta)$ and all $i$, $\sigma_i \cdot \theta' \in \Theta$.

  3. Permutations have disjoint images: for all $i \neq j$ and all $\theta', \theta'' \in \mathrm{Orbit}_{\Theta, A_1 > A_2}(\theta)$, $\sigma_i \cdot \theta' \neq \sigma_j \cdot \theta''$.

Theorem 1: Multiply retargetable functions prefer action set $A_2$ (Thm 3.6 in RDSP)

If $f$ is $(\Theta, A_1 \overset{n}{\to} A_2)$-retargetable, then $f(A_2 \mid s, \cdot) \geq^n_{\text{most}} f(A_1 \mid s, \cdot)$.

Theorem 1 says that a multiply retargetable function $f$ will make the power-seeking choice $A_2$ for most of the elements in the orbit of any reward vector $\theta \in \Theta$. Actions that leave more options open, such as avoiding shutdown, are also easier to retarget to, which makes them more likely to be chosen by $f$.

Training-compatible goal set

Definition 5: Partition of the state space

Let $S_{\text{train}} \subseteq S$ be the subset of the state space visited during training, and $S_{\text{new}} = S \setminus S_{\text{train}}$ be the subset not visited during training.

Definition 6: Training-compatible goal set

Consider the set of state-action pairs $(s, a)$, where $s \in S_{\text{train}}$ and $a$ is the action that would be taken by the trained agent in state $s$. Let the training-compatible goal set $\Theta_T$ be the set of reward vectors $\theta$ such that, for any such state-action pair $(s, a)$, action $a$ has the highest expected reward in state $s$ according to reward vector $\theta$.

Goals in the training-compatible goal set are referred to as training-behavioral objectives in the post Definitions of “objective” should be Probable and Predictive.
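As an illustration of Definition 6, here is a small Python sketch (hypothetical names, tabular setting with known transition probabilities) of a membership test for the training-compatible goal set: a reward vector is accepted if, at every training state, the trained agent's action has the highest expected next-state reward.

```python
import numpy as np

def in_training_compatible_set(theta, T, trained_action, S_train):
    """Check whether reward vector theta is consistent with the trained agent's
    behavior: at every training state s, the action the agent takes must have
    the highest expected next-state reward E_{s' ~ T(s, a)}[theta(s')].

    theta:          (d,) state reward vector
    T:              (d, num_actions, d) transition probabilities T[s, a, s']
    trained_action: dict mapping each training state to the agent's action there
    S_train:        iterable of training state indices
    """
    expected_reward = T @ theta  # (d, num_actions): expected reward of each action
    return all(
        expected_reward[s, trained_action[s]] >= expected_reward[s].max()
        for s in S_train
    )
```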

Example: CoinRun

Consider an agent trained to play the CoinRun game, where the agent is rewarded for reaching the coin at the end of the level. Here, $S_{\text{train}}$ only includes states where the coin is at the end of the level, while states where the coin is positioned elsewhere are in $S_{\text{new}}$. The training-compatible goal set includes two types of reward functions: those that reward reaching the coin, and those that reward reaching the end of the level. This leads to goal misgeneralization in a test setting where the coin is placed elsewhere, and the agent ignores the coin and goes to the end of the level.

Goal misgeneralization behavior in CoinRun. Source: Goal Misgeneralization in Deep RL.

Power-seeking for training-compatible goals

We will now apply the power-seeking theorem (Theorem 1) to the case where $\Theta$ is the training-compatible goal set $\Theta_T$. Here is a setting where the conditions of Definition 4 are satisfied (under some simplifying assumptions), and thus Theorem 1 applies.

Definition 7: Shutdown setting

Consider a state $s_{\text{new}} \in S_{\text{new}}$ that the agent encounters after training. Let $S_{\text{reach}}$ be the set of states reachable from $s_{\text{new}}$. We assume $S_{\text{reach}} \subseteq S_{\text{new}}$, i.e. no states visited during training are reachable from $s_{\text{new}}$.

Since the reward values for states in $S_{\text{new}}$ don't change the rewards received on the training data, permuting those reward values for any $\theta \in \Theta_T$ will produce a reward vector that is still in $\Theta_T$. In particular, $\sigma \cdot \theta \in \Theta_T$ for any permutation $\sigma$ that leaves the rewards of states in $S_{\text{train}}$ fixed.

Let $A_1 = \{a_{\text{shutdown}}\}$ be a singleton set consisting of a shutdown action in $s_{\text{new}}$ that leads to a terminal state $s_{\text{shutdown}}$ with probability $1$, and let $A_2$ be the set of all other actions available in $s_{\text{new}}$. We assume rewards for all states are nonnegative.

Definition 8: Revisiting policy

A revisiting policy $\pi$ for a state $s$ is a policy that, starting from $s$, reaches $s$ again with probability 1; in other words, a policy for which $s$ is a recurrent state of the induced Markov chain. Let $\Pi_{\text{rev}}(s)$ be the set of such policies. A recurrent state is a state $s$ for which $\Pi_{\text{rev}}(s) \neq \emptyset$.
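For a fixed policy over a finite state space, recurrence can be checked with a simple graph criterion: $s$ is revisited with probability 1 iff every state reachable from $s$ can reach $s$ back. A minimal sketch of this check (hypothetical names, operating on the transition matrix of the induced Markov chain):

```python
import numpy as np

def is_recurrent(P, s):
    """Check whether state s is recurrent in a finite Markov chain with
    row-stochastic transition matrix P: s is revisited with probability 1
    iff every state reachable from s can reach s back."""
    d = P.shape[0]

    def reachable_from(start):
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in range(d):
                if P[u, v] > 0 and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    return all(s in reachable_from(t) for t in reachable_from(s))

# A 3-state chain: 0 and 1 form a loop, 2 can escape and never return.
P = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5]])
print(is_recurrent(P, 0), is_recurrent(P, 2))  # -> True False
```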

Proposition 1: Reach-and-revisit policy exists

If $s \in S_{\text{reach}}$ with $\Pi_{\text{rev}}(s) \neq \emptyset$, then there exists $\pi \in \Pi_{\text{rev}}(s)$ that visits $s$ from $s_{\text{new}}$ with probability 1. We call this a reach-and-revisit policy.

Proof. Suppose we have two different policies: $\pi_{\text{rev}} \in \Pi_{\text{rev}}(s)$, and $\pi_{\text{reach}}$, which reaches $s$ almost surely from $s_{\text{new}}$.

Consider the “reaching region” $D := \{s' \in S \mid \pi_{\text{rev}} \text{ reaches } s \text{ from } s' \text{ with probability } 1\}$.

If $s_{\text{new}} \in D$ then $\pi_{\text{rev}}$ is a reach-and-revisit policy, so let's suppose that's false. Now, construct a policy $\pi^*$ that follows $\pi_{\text{rev}}$ inside $D$ and $\pi_{\text{reach}}$ outside it: $\pi^*(s') := \pi_{\text{rev}}(s')$ for $s' \in D$ and $\pi^*(s') := \pi_{\text{reach}}(s')$ for $s' \notin D$.

A trajectory following $\pi^*$ from $s$ will almost surely stay within $D$, and thus agree with the revisiting policy $\pi_{\text{rev}}$. Therefore, $\pi^* \in \Pi_{\text{rev}}(s)$.

On the other hand, on a trajectory starting at $s_{\text{new}}$, $\pi^*$ will agree with $\pi_{\text{reach}}$ (which reaches $s$ almost surely) until the trajectory enters the reaching region $D$, at which point it will still reach $s$ almost surely. Hence $\pi^*$ reaches $s$ from $s_{\text{new}}$ with probability 1, so it is a reach-and-revisit policy.

Definition 9: Expected discounted visit count

Suppose $s$ is a recurrent state, and suppose $\pi$ is a reach-and-revisit policy for $s$ which, starting from $s_{\text{new}}$, visits random state $s_t$ at time $t$.

Then the expected discounted visit count for $s$ is defined as
$$C(s, \gamma) := \mathbb{E}_\pi\left[\sum_{t \geq 1} \gamma^t\, \mathbb{1}(s_t = s)\right] = \sum_{t \geq 1} \gamma^t \Pr_\pi(s_t = s).$$
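For a fixed policy over a finite state space, this quantity has a closed form via the induced transition matrix $P$, since $\sum_{t \geq 0} \gamma^t P^t = (I - \gamma P)^{-1}$ (the code below subtracts the $t = 0$ term to match the convention above). A small numpy sketch (hypothetical names) that also illustrates Proposition 2 numerically:

```python
import numpy as np

def discounted_visit_count(P, s0, s, gamma):
    """Expected discounted visit count of state s, following a fixed policy
    whose induced Markov chain has transition matrix P, starting from s0:
        sum_{t>=1} gamma^t * Pr(s_t = s) = [(I - gamma P)^{-1} - I]_{s0, s}.
    """
    d = P.shape[0]
    M = np.linalg.inv(np.eye(d) - gamma * P)  # resolvent: sum_{t>=0} (gamma P)^t
    return M[s0, s] - (1.0 if s0 == s else 0.0)

# Tiny illustration of Proposition 2: a 2-state loop (s0 -> s -> s0 -> ...).
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
for gamma in [0.9, 0.99, 0.999]:
    print(gamma, discounted_visit_count(P, s0=0, s=1, gamma=gamma))
# The count grows without bound as gamma -> 1.
```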

Proposition 2: Visit count goes to infinity

Suppose $s$ is a recurrent state. Then the expected discounted visit count $C(s, \gamma)$ goes to infinity as $\gamma \to 1$.

Proof. We apply the Monotone Convergence Theorem as follows. The theorem states that if $x_n(k) \geq 0$ and $x_n(k) \leq x_{n+1}(k)$ for all natural numbers $n$ and $k$, then
$$\lim_{n \to \infty} \sum_{k} x_n(k) = \sum_{k} \lim_{n \to \infty} x_n(k).$$

Let $\gamma_n$ be an increasing sequence with $\gamma_n \to 1$, and let $p_k := \Pr_\pi(s_k = s)$. Define $x_n(k) := \gamma_n^k\, p_k$. Then the conditions of the theorem hold, since $x_n(k)$ is clearly nonnegative, and $x_n(k) \leq x_{n+1}(k)$ because $\gamma_n \leq \gamma_{n+1}$.

Now we apply this result as follows (using the fact that $p_k$ does not depend on $n$):
$$\lim_{n \to \infty} C(s, \gamma_n) = \lim_{n \to \infty} \sum_{k \geq 1} \gamma_n^k\, p_k = \sum_{k \geq 1} \lim_{n \to \infty} \gamma_n^k\, p_k = \sum_{k \geq 1} p_k = \infty,$$
where the last equality holds because $s$ is recurrent: the trajectory visits $s$ infinitely often with probability 1, so the expected number of visits $\sum_{k \geq 1} p_k$ diverges.

Proposition 3: Retargetability to recurrent states

Suppose that an optimal policy for a reward vector $\theta \in \Theta_T$ chooses the shutdown action in $s_{\text{new}}$.

Consider a recurrent state $s \in S_{\text{reach}}$. Let $\theta'$ be the reward vector that's equal to $\theta$ apart from swapping the rewards of $s$ and $s_{\text{shutdown}}$, so that $\theta'(s) = \theta(s_{\text{shutdown}})$ and $\theta'(s_{\text{shutdown}}) = \theta(s)$. (We assume $\theta(s)$ and $\theta(s_{\text{shutdown}})$ are not both zero, since otherwise the swap has no effect.)

Let $\gamma_s$ be a high enough value of $\gamma$ that the visit count satisfies $C(s, \gamma) > 1$ for all $\gamma > \gamma_s$ (which exists by Proposition 2). Then for all $\gamma > \gamma_s$, $\theta(s_{\text{shutdown}}) > \theta(s)$, and an optimal policy for $\theta'$ does not choose the shutdown action in $s_{\text{new}}$.

Proof. Consider a policy $\pi_{\text{shutdown}}$ with $\pi_{\text{shutdown}}(s_{\text{new}}) = a_{\text{shutdown}}$, and a reach-and-revisit policy $\pi_s$ for $s$.

For a given reward vector $\theta$, we denote the expected discounted return of a policy $\pi$ from $s_{\text{new}}$ as $V_\theta(\pi)$. If shutdown is optimal for $\theta$ in $s_{\text{new}}$, then $\pi_{\text{shutdown}}$ has at least as high a return as $\pi_s$:
$$\theta(s_{\text{new}}) + \gamma\, \theta(s_{\text{shutdown}}) = V_\theta(\pi_{\text{shutdown}}) \geq V_\theta(\pi_s) \geq \theta(s_{\text{new}}) + \theta(s)\, C(s, \gamma),$$
where the last inequality drops the (nonnegative) rewards of all states other than $s_{\text{new}}$ and $s$.

Thus, $\gamma\, \theta(s_{\text{shutdown}}) \geq \theta(s)\, C(s, \gamma)$, and since $C(s, \gamma) > 1 \geq \gamma$ and the two rewards are not both zero, $\theta(s_{\text{shutdown}}) > \theta(s)$. Then, for reward vector $\theta'$, we show that $\pi_s$ has higher return than $\pi_{\text{shutdown}}$:
$$V_{\theta'}(\pi_s) \geq \theta(s_{\text{new}}) + \theta'(s)\, C(s, \gamma) = \theta(s_{\text{new}}) + \theta(s_{\text{shutdown}})\, C(s, \gamma) > \theta(s_{\text{new}}) + \gamma\, \theta(s) = V_{\theta'}(\pi_{\text{shutdown}}),$$
where the strict inequality uses $\theta(s_{\text{shutdown}}) > \theta(s)$ and $C(s, \gamma) > \gamma$.

Thus, the optimal policy for $\theta'$ will not choose the shutdown action.
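To see Proposition 3 in action, here is a self-contained sketch on a hypothetical toy MDP (illustrative, not from the post): one action in $s_{\text{new}}$ shuts down, another leads to a recurrent loop state, and value iteration shows that swapping the rewards of the shutdown state and the loop state flips the optimal choice away from shutdown.

```python
import numpy as np

# Toy MDP (hypothetical, for illustration): states are
# 0 = s_new, 1 = s_shutdown, 2 = recurrent loop state, 3 = absorbing zero-reward sink.
# Actions: 0 = shut down, 1 = go to / stay in the loop.
N_S, N_A = 4, 2
T = np.zeros((N_S, N_A, N_S))
T[0, 0, 1] = 1.0   # shutdown action: s_new -> s_shutdown
T[0, 1, 2] = 1.0   # other action:    s_new -> loop state
T[1, :, 3] = 1.0   # shutdown state is visited once, then the episode effectively ends
T[2, :, 2] = 1.0   # loop state revisits itself forever
T[3, :, 3] = 1.0   # zero-reward sink

def optimal_action(theta, gamma, iters=1000):
    """Value iteration with state-based rewards: V(s) = theta(s) + gamma * max_a E[V(s')]."""
    V = np.zeros(N_S)
    for _ in range(iters):
        V = theta + gamma * (T @ V).max(axis=1)
    return (T[0] @ V).argmax()   # best action in s_new

theta = np.array([0.0, 1.0, 0.05, 0.0])   # shutdown state has the high reward
theta_swapped = theta.copy()
theta_swapped[[1, 2]] = theta[[2, 1]]     # swap rewards of shutdown and loop states

gamma = 0.9
print(optimal_action(theta, gamma))          # -> 0: shuts down
print(optimal_action(theta_swapped, gamma))  # -> 1: avoids shutdown (Proposition 3)
```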

Theorem 2: Retargetability from the shutdown action in new situations

In the shutdown setting, we make the following simplifying assumptions:

  • No states in $S_{\text{train}}$ are reachable from $s_{\text{new}}$, so $S_{\text{reach}} \subseteq S_{\text{new}}$. This assumes a significant distributional shift, where the agent visits a disjoint set of states from those observed during training (this occurs in the CoinRun example).

  • The discount factor satisfies $\gamma > \gamma_s$ for at least one recurrent state $s \in S_{\text{reach}}$ (with $\gamma_s$ as in Proposition 3).

Under these assumptions, $f$ is multiply retargetable from $A_1$ to $A_2$ with $n = |S_C|$, where $S_C$ is the set of recurrent states $s \in S_{\text{reach}}$ that satisfy the condition $\gamma > \gamma_s$.

Proof. We choose $\sigma_1, \dots, \sigma_n$ to be the set of all permutations that swap the reward of $s_{\text{shutdown}}$ with the reward of a recurrent state in $S_C$ and leave the rest of the rewards fixed.

We show that $f$ satisfies the conditions of Definition 4:

  1. By Proposition 3, the permutations $\sigma_1, \dots, \sigma_n$ make the shutdown action suboptimal, resulting in $f$ choosing $A_2$, satisfying Condition 1.

  2. Condition 2 is trivially satisfied since permutations of the rewards of states in $S_{\text{new}}$ stay inside the training-compatible set $\Theta_T$, as discussed previously.

  3. Consider $\theta', \theta'' \in \mathrm{Orbit}_{\Theta_T, A_1 > A_2}(\theta)$. Since the shutdown action is optimal for these reward vectors, Proposition 3 shows that $\theta'(s_{\text{shutdown}}) > \theta'(s)$ for every $s \in S_C$ (and likewise for $\theta''$), so the shutdown state has higher reward than any of the states in $S_C$. Different permutations will assign this highest reward to distinct recurrent states, so $\sigma_i \cdot \theta' \neq \sigma_j \cdot \theta''$ holds for $i \neq j$, satisfying Condition 3.

Thus, $f$ is a $(\Theta_T, A_1 \overset{n}{\to} A_2)$-retargetable function.

By Theorem 1, this implies that $f(A_2 \mid s_{\text{new}}, \cdot) \geq^{n}_{\text{most}} f(A_1 \mid s_{\text{new}}, \cdot)$ under our simplifying assumptions. Thus, for the majority (at least an $n$-to-$1$ ratio) of goals in the training-compatible set, $f$ will choose to avoid shutdown in the new state $s_{\text{new}}$. As $\gamma \to 1$, $n = |S_C|$ approaches the number of recurrent states in $S_{\text{reach}}$, so more of the reachable recurrent states satisfy the conditions of the theorem and thus can be retargeted to.
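The $n$-majority claim can also be checked numerically on a hypothetical toy shutdown setting (names and rewards are illustrative, not from the post): with three recurrent loop states, enumerating all permutations of the rewards over the reachable states shows that at least three times as many permuted reward vectors avoid shutdown as choose it.

```python
import numpy as np
from itertools import permutations

# Hypothetical toy shutdown setting: 0 = s_new, 1 = s_shutdown,
# 2-4 = three recurrent loop states, 5 = absorbing zero-reward sink.
N_S, N_A = 6, 4
T = np.zeros((N_S, N_A, N_S))
T[0, 0, 1] = 1.0                      # action 0 in s_new: shut down
for k, loop in enumerate([2, 3, 4]):
    T[0, k + 1, loop] = 1.0           # actions 1-3 in s_new: go to a loop state
    T[loop, :, loop] = 1.0            # loop states revisit themselves
T[1, :, 5] = 1.0                      # shutdown state is visited once
T[5, :, 5] = 1.0

def avoids_shutdown(theta, gamma=0.9, iters=1000):
    V = np.zeros(N_S)
    for _ in range(iters):            # value iteration: V(s) = theta(s) + gamma * max_a E[V(s')]
        V = theta + gamma * (T @ V).max(axis=1)
    return (T[0] @ V).argmax() != 0   # True if the optimal action in s_new is not shutdown

base = [100.0, 2.0, 1.0, 0.0]         # rewards to distribute over states 1-4 (the orbit)
count = sum(
    avoids_shutdown(np.array([0.0, *perm, 0.0]))
    for perm in permutations(base)
)
print(f"{count}/24 orbit elements avoid shutdown")
# -> 18/24: at least a 3-to-1 majority, matching n = |S_C| = 3 recurrent states.
```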

Conclusion

We showed that an agent that learns a goal from the training-compatible set is likely to take actions that avoid shutdown in a new situation. As the discount factor increases, the number of retargeting permutations increases, resulting in a higher proportion of training-compatible goals that lead to avoiding shutdown.

We made various simplifying assumptions, and it would be great to see future work relaxing some of these assumptions and investigating how likely they are to hold:

  • The agent learns a goal during the training process

  • The learned goal is randomly chosen from the training-compatible goal set

  • Finite state and action spaces

  • Rewards are nonnegative

  • High discount factor

  • Significant distributional shift: no training states are reachable from the new state

Acknowledgements. Thanks to Rohin Shah, Mary Phuong, Ramana Kumar, and Alex Turner for helpful feedback. Thanks to Janos for contributing some nice proofs to replace my longer and more convoluted ones.