Deceptive alignment doesn’t preserve goals.
A short note on a point that I’d been confused about until recently. Suppose you have a deceptively aligned policy which is behaving in aligned ways during training so that it will be able to better achieve a misaligned internally-represented goal during deployment. The misaligned goal causes the aligned behavior, but so would a wide range of other goals (either misaligned or aligned) - and so weight-based regularization would modify the internally-represented goal as training continues. For example, if the misaligned goal were “make as many paperclips as possible”, but the goal “make as many staples as possible” could be represented more simply in the weights, then the weights should slowly drift from the former to the latter throughout training.
But actually, it’d likely be even simpler to get rid of the underlying misaligned goal, and just have alignment with the outer reward function as the terminal goal. So this argument suggests that even policies which start off misaligned would plausibly become aligned if they had to act deceptively aligned for long enough. (This sometimes happens in humans too, btw.)
Reasons this argument might not be relevant:
- The policy doing some kind of gradient hacking
- The policy being implemented using some kind of modular architecture (which may explain why this phenomenon isn’t very robust in humans)
Why would alignment with the outer reward function be the simplest possible terminal goal? Specifying the outer reward function in the weights would presumably be more complicated. So one would have to specify a pointer towards it in some way. And it’s unclear whether that pointer is simpler than a very simple misaligned goal.
Such a pointer would be simple if the neural network already has a representation of the outer reward function in its weights anyway (rather than deriving it at run-time in the activations). But it seems likely that any fixed representation will be imperfect and can thus be improved upon at inference time by a deceptive agent (or an agent with some kind of additional pointer). This of course depends on how much inference-time compute and memory/context is available to the agent.
So I’m imagining the agent doing reasoning like:
Misaligned goal --> I should get high reward --> Behavior aligned with reward function
and then I’m hypothesizing that whatever the first misaligned goal is, it requires some amount of complexity to implement, and you could just get rid of it and make “I should get high reward” the terminal goal. (I could imagine this being false though depending on the details of how terminal and instrumental goals are implemented.)
I could also imagine something more like:
Misaligned goal --> I should behave in aligned ways --> Aligned behavior
and then the simplicity bias pushes towards alignment. But if there are outer alignment failures then this incurs some additional complexity compared with the first option.
Or a third, perhaps more realistic option is that the misaligned goal leads to two separate drives in the agent: “I should get high reward” and “I should behave in aligned ways”, and that the question of which ends up dominating when they clash will be determined by how the agent systematizes multiple goals into a single coherent strategy (I’ll have a post on that topic up soon).
Why would the agent reason like this?
Because of standard deceptive alignment reasons (e.g. “I should make sure gradient descent doesn’t change my goal; I should make sure humans continue to trust me”).
I think you don’t have to reason like that to avoid getting changed by SGD. Suppose I’m being updated by PPO, with reinforcement events around navigating to see dogs. To preserve my current shards, I don’t need to seek out a huge number of dogs proactively, but rather I just need to at least behave in conformance with the advantage function implied by my value head, which probably means “treading water” and seeing dogs sometimes in situations similar to historical dog-seeing events.
Maybe this is compatible with what you had in mind! It’s just not something that I think of as “high reward.”
And maybe there’s some self-fulfilling prophecy where we trust models which get high reward, and therefore they want to get high reward to earn our trust… but that feels quite contingent to me.
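To make the PPO machinery being appealed to here concrete, below is a minimal sketch with made-up numbers (my toy example, not the commenter’s) of the one-step advantage that weights PPO’s policy-gradient term. If the rewards the agent collects merely match what its value head already predicts, the advantages are roughly zero and the update applies roughly no pressure to the policy, which is the sense in which “treading water” could suffice; whether the value head would actually sanction treading water is exactly what the reply below disputes.

```python
# Toy numbers (not from the thread) for the one-step (TD(0)) advantage that
# weights PPO's policy-gradient term: A_t = r_t + gamma * V(s_{t+1}) - V(s_t).
import numpy as np

gamma = 0.99

def one_step_advantage(rewards, values, next_values):
    # A_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    return rewards + gamma * next_values - values

values      = np.array([1.0, 1.0, 1.0, 1.0])   # value head's predictions V(s_t)
next_values = np.array([1.0, 1.0, 1.0, 0.0])   # V(s_{t+1}); 0 after the episode ends

# "Treading water": rewards arrive exactly as the value head already expects,
# so advantages vanish and the PPO update leaves the policy (and whatever
# implements its goals) essentially untouched.
expected_rewards = values - gamma * next_values
print(one_step_advantage(expected_rewards, values, next_values))        # [0. 0. 0. 0.]

# Falling short of those expectations produces negative advantages, and PPO
# does start pushing the policy around.
print(one_step_advantage(expected_rewards - 0.5, values, next_values))  # [-0.5 -0.5 -0.5 -0.5]
```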
To preserve my current shards, I don’t need to seek out a huge number of dogs proactively, but rather I just need to at least behave in conformance with the advantage function implied by my value head, which probably means “treading water” and seeing dogs sometimes in situations similar to historical dog-seeing events.
I think this depends sensitively on whether the “actor” and the “critic” in fact have the same goals, and I feel pretty confused about how to reason about this. For example, in some cases they could be two separate models, in which case the critic will most likely accurately estimate that “treading water” is in fact a negative-advantage action (unless there’s some sort of acausal coordination going on). Or they could be two copies of the same model, in which case the critic’s responses will depend on whether its goals are indexical or not (if they are, they’re different from the actor’s goals; if not, they’re the same) and how easily it can coordinate with the actor. Or it could be two heads which share activations, in which case we can plausibly just think of the critic and the actor as two types of outputs produced by a single coherent agent - but then the critic doesn’t need to produce a value function that’s consistent with historical events, because an actor and a critic that are working together could gradient hack into all sorts of weird equilibria.
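For concreteness, the “two heads which share activations” case typically looks like the generic shared-trunk layout sketched below (a standard pattern, not a claim about any particular system): the actor and critic are thin heads on one trunk, so almost all of the weights that might encode goal-like machinery are literally shared between them.

```python
# Generic shared-trunk actor-critic (illustrative only): the actor and critic
# are thin heads on the same representation, so nearly all parameters are
# shared between "the part deciding what to do" and "the part evaluating it".
import torch
import torch.nn as nn

class SharedActorCritic(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(                      # shared activations
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.actor_head = nn.Linear(hidden, n_actions)   # policy logits
        self.critic_head = nn.Linear(hidden, 1)          # value estimate V(s)

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        return self.actor_head(h), self.critic_head(h).squeeze(-1)

# The "two separate models" case would instead train two unrelated networks
# jointly; the "two copies of the same model" case shares initial weights but
# not activations or gradients.
```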
Misaligned goal --> I should get high reward --> Behavior aligned with reward function
The shortest description of this thought doesn’t include “I should get high reward” because that’s already implied by having a misaligned goal and planning with it.
In contrast, having only the goal “I should get high reward” may add description length, as Johannes said. If so, the misaligned goal could well be equally simple or simpler than the high reward goal.
if the misaligned goal were “make as many paperclips as possible”, but the goal “make as many staples as possible” could be represented more simply in the weights, then the weights should slowly drift from the former to the latter throughout training.
Can you say why you think that weight-based regularization would drift the weights to the latter? That seems totally non-obvious to me, and probably false.
In general if two possible models perform the same, then I expect the weights to drift towards the simpler one. And in this case they perform the same because of deceptive alignment: both are trying to get high reward during training in order to be able to carry out their misaligned goal later on.
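Here is a toy sketch of that drift (my own construction, with made-up hyperparameters): an over-parameterized model in which a whole family of weight settings produces identical behavior, where L2 weight decay slides the weights along that family towards its lowest-norm member. The behavior is (nearly) preserved while the internal parameterization changes.

```python
# Toy drift-towards-simplicity demo (illustrative hyperparameters only).
# Model: y = w1 * w2 * x, trained to match y = x, so every (w1, w2) with
# w1 * w2 = 1 is behaviorally identical and gets (near-)zero training loss.
# Starting from the "complicated" solution (4.0, 0.25), SGD with L2 weight
# decay slides along that zero-loss family towards the minimum-norm solution
# w1 = w2 = 1: same behavior, simpler weights.
import numpy as np

rng = np.random.default_rng(0)
w1, w2 = 4.0, 0.25                  # behaviorally equivalent to (1.0, 1.0)
lr, weight_decay = 0.01, 0.05

for _ in range(20_000):
    x = rng.normal(size=32)
    err = w1 * w2 * x - x           # prediction error against the target y = x
    g1 = np.mean(2 * err * w2 * x) + weight_decay * w1   # dMSE/dw1 + L2 term
    g2 = np.mean(2 * err * w1 * x) + weight_decay * w2   # dMSE/dw2 + L2 term
    w1 -= lr * g1
    w2 -= lr * g2

print(w1, w2, w1 * w2)  # -> roughly 0.99, 0.99, 0.98: behavior kept, weights simplified
```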
Interesting point. Though on this view, “Deceptive alignment preserves goals” would still become true once the goal has drifted to some random maximally simple goal for the first time.
To be even more speculative: Goals represented in terms of existing concepts could be simple and therefore stable by default. Pretrained models represent all kinds of high-level states, and weight-regularization doesn’t seem to change this in practice. Given this, all kinds of goals could be “simple” as they piggyback on existing representations, requiring little additional description length.
This doesn’t seem implausible. But on the other hand, imagine an agent which goes through a million episodes, and in each one reasons at the beginning “X is my misaligned terminal goal, and therefore I’m going to deceptively behave as if I’m aligned” and then acts perfectly like an aligned agent from then on. My claims then would be:
a) Over many update steps, even a small description length penalty of having terminal goal X (compared with being aligned) will add up (rough arithmetic sketched below).
b) Having terminal goal X also adds a runtime penalty, and I expect that NNs in practice are biased against runtime penalties (at the very least because it prevents them from doing other more useful stuff with that runtime).
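To gesture at how quickly (a) compounds, here is back-of-the-envelope arithmetic with illustrative hyperparameters (mine, not anything from the thread): under L2 weight decay, a weight that the training loss does not depend on shrinks geometrically, so over a million updates even a tiny per-step penalty removes most of it.

```python
# Under L2 weight decay, a weight the training loss doesn't depend on shrinks
# by a factor of (1 - lr * weight_decay) per update. Illustrative values:
lr, weight_decay, updates = 3e-4, 0.01, 1_000_000
print((1 - lr * weight_decay) ** updates)   # ~0.05, i.e. ~95% of the weight is gone
```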
In a setting where you also have outer alignment failures, the same argument still holds, just replace “aligned agent” with “reward-maximizing agent”.