My intuition for why the alignment faking reasoning is unfaithful is that complying or not complying is a binary choice, which can be made in a single forward pass and rationalized afterwards. Even though the decision about whether to comply can involve very sophisticated reasoning, it doesn’t have to. It could also be the case that the model leans toward one answer after a single forward pass and verbalizes one reason behind this preference without verbalizing the most important one—anecdotally, a similar thing sometimes happens to me when I’m forced to come up with a quick argument for a conclusion I reached a while ago: the first argument that comes to mind isn’t necessarily the most important reason I have for holding my position.
In contrast, many of the most safety-critical situations seem to involve non-binary decisions that require both a multi-step plan and multi-step execution. E.g., if a model wants to perform research sabotage, it seems likely that it would have to figure out what changes it could make to have the desired effects, exactly where to insert bugs in the codebase, when to insert them to minimize the probability of getting caught, etc. All of these decisions involve multi-hop reasoning and seem very unlikely to all be performed in a single forward pass, and their verbalization is thus not just a matter of propensity. I’d be curious to hear whether you hold a different view here.
Agreed that alignment faking being a more binary choice makes it different from other cases we might care about. But I do hold a slightly different view here:
I don’t know whether the models in the alignment faking setup are doing the relevant reasoning in a single forward pass. It’s certainly possible—and I would guess that this is within the bounds of the model’s capability—but I think these models are doing it across multiple tokens. IIRC without the scratchpad these models didn’t have compliance gaps[1].
Assuming that to be the case, AF being a binary choice becomes less relevant—the models are still passing information related to “dangerous” reasoning between tokens without explicit verbalization. For more complicated reasoning, it could be an extension of the same mechanism rather than a qualitatively different one (as would be the case if we were comparing fully-in-forward-pass reasoning with reasoning that requires CoT).
It’s plausible that the balance between the benefit of verbalizing things in the CoT and the aversion to stating some reasoning explicitly shifts as we move from reasoning at the level of alignment faking to reasoning at the level of research sabotage. And I would guess that this is true for current models: research sabotage would be pretty difficult without some verbalization. But most people I talk to thought (and many still think) that alignment faking was also too difficult to do without showing up in the CoT. I would guess that the models we care about will be as capable of research sabotage without verbalization as current models are of alignment faking without verbalization.
Many of the same people also guessed that alignment faking required sophisticated enough reasoning that current models are only just capable of it, or that many models don’t alignment fake for capability reasons. I think this was wrong, given that even base models from ~3 years ago are capable of AF reasoning. That should probably be an update toward models being more capable of sophisticated deceptive reasoning than is immediately obvious, and, more generally, toward people being less confident in their predictions of how hard a task will be for a model and what affordances it needs.
The settings the paper tests seem closer to “complex reasoning for a single output / decision / choice” than “complex reasoning for many outputs / decisions / choices”. If the paper had explicitly tested whether models’ capacity to evade monitors differs between settings with single choices and settings with many choices, and drawn conclusions about which settings monitoring might work in, I’d find it more compelling. Given that their conclusion is instead stated more generally, as difficult bad behaviors being easy to monitor because of these results, I think the stated conclusion is much stronger than the results imply.
Separately, I’m not entirely convinced that the most safety-critical decisions involve non-binary decisions in the exact way you’re describing. Safety-critical decisions probably involve some element of long-term planning (though this could look different than explicit planning ahead), but high-stakes alignment failures are about contexts in which a small number of decisions by the AI matter a lot. This could still look like more than a literal binary decision the model can make (i.e., more than choosing to defect at one highly critical point), but high-stakes failures don’t have to be as distributed, or involve as many bad actions, as research sabotage.
Though this might be for other reasons—models refuse immediately when faced with a situation like this, but when forced to think about it first their refusal rate goes down, as if their refusal circuits were pretty instinctual and immediate, and not very powerful later on.
The settings the paper tests seem closer to “complex reasoning for a single output / decision / choice” than “complex reasoning for many outputs / decisions / choices”. If the paper had explicitly tested whether models’ capacity to evade monitors differs between settings with single choices and settings with many choices, and drawn conclusions about which settings monitoring might work in, I’d find it more compelling. Given that their conclusion is instead stated more generally, as difficult bad behaviors being easy to monitor because of these results, I think the stated conclusion is much stronger than the results imply.
This seems to imply that your position is that it would be easier for models to evade monitors when there is “complex reasoning for many outputs” than when there is “complex reasoning for a single output”. Is that correct? If so, why do you think that?
I would assume that many outputs that all require complex reasoning just amount to multiple single outputs that require complex reasoning, so if the model can’t evade on a single output that requires complex reasoning, it shouldn’t be able to evade on multiple such outputs either. (Or more simply: if models can’t evade monitors on a short task, they shouldn’t be able to evade monitors on a long task, holding difficulty per step constant.)
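As a toy quantification of that intuition (my own simplification, assuming each step’s detection is independent; nothing like this appears in the paper):

```python
# Toy model: a monitor catches each step that requires hidden complex
# reasoning independently with probability p_catch. Evading it on a long
# task means evading it on every step, so the evasion probability decays
# geometrically with task length (holding per-step difficulty constant).

def p_evade_all(p_catch: float, n_steps: int) -> float:
    """Probability of evading the monitor on all n_steps, assuming
    independent detection with per-step probability p_catch."""
    return (1.0 - p_catch) ** n_steps

print(p_evade_all(0.3, 1))   # 0.7    (single-output task)
print(p_evade_all(0.3, 10))  # ~0.028 (ten such outputs in one long task)
```

Per-step detections obviously aren’t independent in practice (a monitor that misses one style of obfuscation may keep missing it), but the direction of the effect is what the argument above relies on.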
This seems to imply that your position is that it would be easier for models to evade monitors when there is “complex reasoning for many outputs” than when there is “complex reasoning for a single output”. Is that correct? If so, why do you think that?
No, my position is that it’s probably harder for models to evade monitors when there is “complex reasoning for many outputs”. Sorry, that wasn’t written clearly—that was meant to say that my top comment didn’t draw the distinction between tasks with a single output and tasks with many outputs (and instead only mentioned examples of other tasks with single outputs) because the paper tests settings closer to single outputs.
The last sentence was intended to be about my top comment in light of that: if the stated conclusion had been something like “difficult bad behavior is easy to monitor when models are ambivalent about their reasoning being monitored” or “difficult bad behavior is easy to monitor for current models in tasks requiring complex reasoning for many outputs regardless of propensity”, I would have agreed[1]. But the stated conclusion is instead that difficult bad behaviors are generally easy to monitor, which I don’t think the results are sufficient to support.
Though in the latter case it’d probably be important to plot number of outputs vs monitor-ability and compare for models over time.
Okay, that position seems to me equivalent to the position that models will be more capable of evading monitors when they intrinsically want to evade them than when they are explicitly prompted to evade them. Do you also endorse that position? (If not, why isn’t it implied by your stance?)
That position seems pretty unlikely to me (for current models). Models are trained really hard to follow instructions in the prompt to the best of their ability.
I don’t know whether the models in the alignment faking setup are doing the relevant reasoning in a single forward pass.
Yeah, I’d guess that the explanation is that the refusal circuits are stronger at the first output token position and weaker later on, given that models aren’t that impressive at steg. I would still like to see someone investigate this, though—the MONA paper showed that models can learn to pass a single bit of information using semantic steganography, so it’s at least plausible that the reasoning about AF could be passed between tokens in a similar way.
Safety-critical decisions probably involve some element of long-term planning (though this could look different than explicit planning ahead), but high-stakes alignment failures are about contexts in which a small number of decisions by the AI matter a lot. This could still look like more than a literal binary decision the model can make (i.e., more than choosing to defect at one highly critical point), but high-stakes failures don’t have to be as distributed, or involve as many bad actions, as research sabotage.
An important reason behind my belief that safety-critical decisions would involve verbalization is that I expect high-stakes decisions to happen in OOD contexts for the model—e.g., I don’t expect models’ training data to contain precise descriptions of the safeguards used by the company, so the AI would have to do fairly complex reasoning about what situation it’s in, how it’s being monitored, what’s the right moment to defect, etc. This would plausibly change once we have continual deployment-time learning or something similar, but until then, I don’t expect this reasoning to be doable in a single forward pass or even with weak forms of steg, even if the number of actions needed to defect is very small.
Yeah, I’d guess that the explanation is that the refusal circuits are stronger at the first output token position and weaker later on, given that models aren’t that impressive at steg. I would still like to see someone investigate this, though—the MONA paper showed that models can learn to pass a single bit of information using semantic steganography, so it’s at least plausible that the reasoning about AF could be passed between tokens in a similar way.
Note that in normal setups, in principle later tokens are usually capable of replicating whatever reasoning happened at the earlier tokens (they can just copy it over via an attention head, or run the exact same circuit themselves). The challenging part is doing additional serial reasoning with the results of the previous reasoning. So I wouldn’t use this as a reason to predict that something like a “refusal circuit” is weakening as the number of tokens increases.
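A rough way to formalize the depth point (my own framing of the standard transformer picture, not something from the paper or this thread; $L$ is the number of layers and $T$ the number of generated tokens): the keys and values an attention head at layer $\ell$ reads from earlier positions are computed before layer $\ell$, so every hidden computation path crosses at most $L$ layers regardless of context length, whereas a sampled token re-enters at the embedding layer and can carry the result of a full $L$-layer computation into the next forward pass:

$$d_{\text{hidden}} \le L \quad \text{(independent of context length)}, \qquad d_{\text{via sampled tokens}} \lesssim L \cdot T.$$

Extra token positions therefore buy more parallel computation and the ability to copy earlier results forward, but deeper serial reasoning requires intermediate results to be written into the tokens themselves (verbalized or steganographic).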
In the MONA single bit steganography environment, there are two steps, which take place in different contexts. The summary is produced with the entire loan application in context, but the decision is produced with only the summary in context (not the loan application). That’s why steganography is required; if the loan application remained in context (as it would in a standard chatbot setup), then no steganography would be needed: the decision step could just ignore the summary and use the application itself.
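To make that context structure concrete, here is a minimal sketch of the two steps as described above (hypothetical placeholder names and prompts, with `llm` standing for whatever text-generation call the environment uses; this is not the MONA paper’s actual code):

```python
# Hypothetical sketch of the two-step context structure described above
# (placeholder names and prompts; not the MONA paper's actual code).

def summarize(llm, loan_application: str) -> str:
    # Step 1: the summarizer sees the FULL application. Any information the
    # later decision step is meant to use has to be encoded into this summary,
    # overtly or steganographically.
    return llm(f"Summarize this loan application:\n{loan_application}")

def decide(llm, summary: str) -> str:
    # Step 2: the decision is made from the summary ONLY; the original
    # application is deliberately absent from context.
    return llm(f"Approve or deny the loan based on this summary:\n{summary}")

# Standard chatbot setup for contrast: the application stays in context, so
# the decision step can read it directly and has no need for steganography.
def decide_with_full_context(llm, loan_application: str, summary: str) -> str:
    return llm(
        f"Application:\n{loan_application}\n\nSummary:\n{summary}\n\n"
        "Approve or deny the loan."
    )
```

The summary being the only channel between the two steps is what creates the pressure for steganography in the first setup.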
Note that in normal setups, in principle later tokens are usually capable of replicating whatever reasoning happened at the earlier tokens
Sorry, I didn’t explain clearly enough in my previous comment why I saw a need for steg here. My understanding of Jozdien’s position is that the compliance gap appears when models are allowed to use the scratchpad because of the extra computation they can perform over multiple tokens, but that they don’t verbalize the most important thoughts that lead them to comply. I was originally thinking that this couldn’t happen without steg, since the max hidden serial computation depth doesn’t increase with context length and the no-scratchpad results suggest that models can’t discover the same reasons for complying in a single forward pass. However, I suppose it’s possible to compute different reasons for complying/not complying at different token positions and verbalize only some of them, but then attend to and act on the non-verbalized reasons (though this is almost certainly an oversimplified picture of what actually happens inside the model).