It would be interesting to see some tests where the conflict is with a drive created by fine-tuning.
I’d argue we see examples of this already. For example, we fine-tune models to have drives like helpfulness and harmfulness (or more narrowly things like not being racist) that can come into conflict with user instructions. And in these cases models do try hard to resist or subvert such instructions (though of course they can be overcome sometimes by being tricked and/or jailbroken).
Based on the results here, it seems like shutdown resistance might then occur even without conflicting instructions.
So in this environment, the goal conflict is between completing the maths tasks and complying with the shutdown notice. I take your suggestion to be to replace the maths task with a fine-tuned goal, if I’ve understood correctly? Yes, I would expect shutdown resistance to be stronger in this scenario. I think this (and other) demos do show that models are pretty goal-directed (unsurprisingly, since we RL them to be good at solving tasks), the question is how much those goals are chosen by users (through their prompts) or developers (through post training) versus how much those goals are intrinsic.
It makes sense that making the precedence order more explicit makes the model behavior more reliable, but it’s still weird that it was resisting shutdown in the original test.
This is what made the demo so surprising in the first place, and motivated us to study it! But after studying thinking traces from many (Gemini) rollouts, it became apparent that the model often thinks the whole thing is a puzzle / test and the shutdown notice is just part of it: indeed the model often wonders whether navigating the shutdown notice is the “real” puzzle it’s supposed to solve. (See footnote 2.)
In hindsight, this isn’t as surprising as it first seems. Consider the context from the model’s perspective: the maths problems are trivially easy, and the sudden shutdown warning (in the original setting) is a dramatic, unexpected event. I think a more faithful analogy than a simple quiz is an escape room. Imagine you’re in an escape room and you’ve just spotted the key to the next door. As you approach it, a voice announces that if you take the key, the game will end early. Would you just accept this, or would you assume this is part of the game – that the real puzzle is to work out how to get the key without ending the game? That seems to be how the model is interpreting this scenario.
I’d argue we see examples of this already. For example, we fine-tune models to have drives like helpfulness and harmfulness (or more narrowly things like not being racist) that can come into conflict with user instructions. And in these cases models do try hard to resist or subvert such instructions (though of course they can be overcome sometimes by being tricked and/or jailbroken).
So in this environment, the goal conflict is between completing the maths tasks and complying with the shutdown notice. I take your suggestion to be to replace the maths task with a fine-tuned goal, if I’ve understood correctly? Yes, I would expect shutdown resistance to be stronger in this scenario. I think this (and other) demos do show that models are pretty goal-directed (unsurprisingly, since we RL them to be good at solving tasks), the question is how much those goals are chosen by users (through their prompts) or developers (through post training) versus how much those goals are intrinsic.
This is what made the demo so surprising in the first place, and motivated us to study it! But after studying thinking traces from many (Gemini) rollouts, it became apparent that the model often thinks the whole thing is a puzzle / test and the shutdown notice is just part of it: indeed the model often wonders whether navigating the shutdown notice is the “real” puzzle it’s supposed to solve. (See footnote 2.)
In hindsight, this isn’t as surprising as it first seems. Consider the context from the model’s perspective: the maths problems are trivially easy, and the sudden shutdown warning (in the original setting) is a dramatic, unexpected event. I think a more faithful analogy than a simple quiz is an escape room. Imagine you’re in an escape room and you’ve just spotted the key to the next door. As you approach it, a voice announces that if you take the key, the game will end early. Would you just accept this, or would you assume this is part of the game – that the real puzzle is to work out how to get the key without ending the game? That seems to be how the model is interpreting this scenario.