Interesting point. Though on this view, “Deceptive alignment preserves goals” would still become true once the goal has drifted to some random maximally simple goal for the first time.
To be even more speculative: Goals represented in terms of existing concepts could be simple and therefore stable by default. Pretrained models already represent all kinds of high-level states, and weight regularization doesn’t seem to erase those representations in practice. Given this, all kinds of goals could be “simple” because they piggyback on existing representations, requiring little additional description length.
This doesn’t seem implausible. But on the other hand, imagine an agent that goes through a million episodes and, at the start of each one, reasons “X is my misaligned terminal goal, and therefore I’m going to deceptively behave as if I were aligned,” then acts exactly like an aligned agent for the rest of the episode. My claims would then be:
a) Over many update steps, even a small description-length penalty for having terminal goal X (compared with being aligned) will add up.
b) Having terminal goal X also incurs a runtime penalty, and I expect that NNs in practice are biased against runtime penalties (at the very least because the extra reasoning prevents them from doing other, more useful computation with that runtime).
In a setting where you also have outer alignment failures, the same argument still holds; just replace “aligned agent” with “reward-maximizing agent”.
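Claim a) can be made concrete with a toy simulation (my own sketch, not from the thread, and only an analogy): two parameterizations that compute the exact same function, so the task loss never distinguishes them, still drift apart under weight decay. A small per-step regularization pressure compounds over many updates toward the minimum-norm (“simplest”) implementation, even with behavior held perfectly fixed.

```python
# Toy model (hypothetical, illustrative only): f(x) = (a + b) * x, so any
# (a, b) with the same sum a + b is behaviorally identical. We apply weight
# decay each step, then project back onto the behavior-preserving constraint
# a + b = target. Task performance never changes, yet the redundant
# "complexity" (large opposite-signed weights) is squeezed out over time.

def train(a, b, steps=200_000, lr=0.1, decay=1e-3):
    target = a + b  # the behavior we must preserve exactly
    for _ in range(steps):
        # weight-decay gradient step: w -= lr * decay * w
        a -= lr * decay * a
        b -= lr * decay * b
        # project back onto a + b = target (behavior unchanged every step)
        correction = (target - (a + b)) / 2
        a += correction
        b += correction
    return a, b

a, b = train(a=5.0, b=-3.0)
# a and b each converge toward 1.0 while a + b stays 2.0 throughout:
# per-step pressure is tiny (lr * decay = 1e-4), but it accumulates.
```

The analogy to the deceptive agent: the misaligned goal X is the opposite-signed weight pair — behaviorally invisible, but carrying extra description length that the regularizer keeps charging for on every update.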