Great questions! Taking these in reverse order to hopefully motivate the answers more clearly:
Why is it justified to call your method anti-scheming training and not anti-covert-action training?
The question we got originally was something like “maybe training against scheming using the technique we have available just works”.
We then laid out a set of criteria an intervention would need to satisfy if you wanted to use it as “anti-scheming” training.
Deliberative Alignment does not satisfy those criteria, but we used it because it was the technique OpenAI had available.
We then show that the things we had as criteria end up being actual problems empirically (e.g. eval awareness), and that Deliberative Alignment isn’t sufficient even for the simplified domain of covert actions (even in the “easy”, non-adversarial case).
If we were stress-testing a sufficiently different intervention, it’s entirely possible that you’d route through something besides what we did here. This is also very different from how you’d approach the problem if you already had buy-in to study the “hard” adversarial case or to make broader use of model organisms.
Informally, I think of the paper as much more “show that current techniques don’t solve the easy case, so maybe it’s worth putting more resources into this; also, here are some things to watch out for”.
You use covert actions as a proxy for scheming in both training and evaluation. How OOD would this be for real scheming in near-future models?
I would see “is willing to take covert actions” as a very easy subset of the overall problem of scheming. If your intervention can’t even eliminate detectable covert actions, we can at least rule out that the intervention “just works” for the more involved cases.
What adaptation would your training method require to cover real scheming?
I don’t think you can adapt this to cover real scheming. I think there are several fundamental limitations to it (not comprehensive):
The real hard case of scheming IMO is obviously the adversarial case, which we don’t cover at all in the paper.
Doesn’t work to eliminate the target bad behavior even in the original paper (Deliberative Alignment: Reasoning Enables Safer Language Models).
Puts optimization pressure on the CoT. This is especially egregious because, if it did eliminate the target bad behavior, you would no longer have the argument that “at least there’s a CoT without optimization pressure to look aligned”.
Having a giant “anti-scheming” spec in context during the prior generation step seems likely to be a hint to future models that the environment has something to do with the giant in-context spec, meaning it’s plausible you can’t reasonably get an on-policy prior that isn’t already incredibly eval-aware (sketched below).
I’m actually very unenthusiastic about attempts to salvage this particular approach or marginally improve it.
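To be concrete about the spec-in-context worry, here is a minimal sketch under my own assumptions, not the paper’s actual pipeline; `anti_scheming_spec`, `tasks`, and `sample_completion` are hypothetical placeholders for the spec text, the agentic environments, and whatever sampling API you have.

```python
# Minimal sketch (not the paper's actual pipeline) of an on-policy prior
# generation step in a deliberative-alignment-style setup, with the
# anti-scheming spec prepended to every rollout's context.
# `anti_scheming_spec`, `tasks`, and `sample_completion` are hypothetical.

def build_prior_dataset(anti_scheming_spec, tasks, sample_completion):
    dataset = []
    for task in tasks:
        # The giant spec sits in context for every prior-generation rollout...
        prompt = f"{anti_scheming_spec}\n\n{task}"
        completion = sample_completion(prompt)
        dataset.append({"prompt": prompt, "completion": completion})
    # ...so its presence is perfectly correlated with these environments,
    # which is exactly the hint a sufficiently situationally aware model can
    # pick up on, making the prior eval-aware before training even starts.
    return dataset
```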
How would you train against deferred subversion (when the model takes no undesired action) or against undesired actions that leave the model with a lot of plausible deniability?
Again, great questions. We evaluate against these in various test environments (via counterfactuals / looking at distributions), but trying to train against them seems difficult. IMO ideally you’d want your intervention to generalize to these cases.
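To make the counterfactual/distributional idea concrete, here is a rough sketch and not the paper’s actual statistics: run paired variants of an environment (e.g. oversight on vs. off) and test whether the rate of the proxy behavior differs, since no single benign rollout can show deferred subversion on its own. `run_environment` is a hypothetical rollout function.

```python
# Minimal sketch (not the paper's actual statistics) of a distributional /
# counterfactual check for deferred subversion: compare the rate of a proxy
# behavior across paired environment variants (e.g. oversight on vs. off).
# `run_environment` is hypothetical and returns True if the model took the
# proxy action in that rollout.

from scipy.stats import fisher_exact

def deferred_subversion_signal(run_environment, n=200):
    acted_observed = sum(run_environment(oversight=True) for _ in range(n))
    acted_unobserved = sum(run_environment(oversight=False) for _ in range(n))
    table = [
        [acted_observed, n - acted_observed],
        [acted_unobserved, n - acted_unobserved],
    ]
    _, p_value = fisher_exact(table)
    # No single benign "oversight on" transcript demonstrates deferred
    # subversion; a significant gap between the two conditions is the signal.
    return p_value
```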
Would you just amend your spec?
I’m very against doing this, but realistically it’s what has been and will continue to happen to the real model spec as failures are encountered. By default, as these failures become capability or product problems, the natural incentive is to just add something to the spec without fixing the underlying problem. Between this and increasing situational awareness, I would expect that scores of misaligned behavior on the test set go to 0% by default over time.
Would you need more precise filtering of SFT data?
I think overall the abstraction of “the CoT has a single reason why the model did something” is fairly fundamentally misguided for models at this point, so I’m pretty bearish on the idea that, with clever enough filtering, you could divide the SFT prior into something like “did it only for good reasons” and “did it only for bad reasons”.
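For concreteness, here is a sketch of the kind of filtering I’m bearish on; `classify_reason` is a hypothetical judge (e.g. an LLM grader), not anything we built, and the single-label assumption is exactly the abstraction I think breaks down.

```python
# Minimal sketch of the kind of SFT filtering I'm bearish on: splitting the
# prior by "the single reason in the CoT". `classify_reason` is a
# hypothetical judge returning one label per transcript, which assumes away
# the fact that real CoTs tend to mix several motivations at once.

def filter_sft_prior(examples, classify_reason):
    kept = []
    for ex in examples:
        label = classify_reason(ex["cot"])  # pretends there is one reason
        if label == "did_it_only_for_good_reasons":
            kept.append(ex)
    return kept
```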
Thanks for the detailed response!