Thanks! Report coauthor here with a few quick reactions.
On the general vibe that this isn’t a safety case and that we could reasonably do something much more rigorous here, I agree. (And we do describe it as an incomplete pilot in the report, though didn’t spell out much about our plans.) This started because we were more and more worried about us and others missing easy to catch warning signs, and wanted to at least avoid that.
Some additional reactions:
(I realize that many of these won’t be reassuring to a skeptical reader without a lot more detail on the exact evals than I can pull together for this comment, but hopefully they at least clarify how we understood what we were and weren’t accomplishing. More on this meta-point below.)
“okay, but if the model were deceptive and misaligned, wouldn’t you have observed the same things you’re reporting here?”
We also worry about this, and I don’t think we’re fully robust to it, but I think we got a pretty substantial positive from the evaluation, in part because we evaluated similar past models and early snapshots of these models: If Opus were deceptive in this way, it seems very likely (though not certain) that we’d see more slip ups at _some_ point in its developmental trajectory. I’m picturing observations like the model defecting on only the most realistic test cases.
And, although it’s not explicitly stated, there seems to be an implicit stopping rule like “we’ll keep on doing this until enough of our eval suite passes, and then we’ll release the resulting checkpoint.”
We are at least trying to track sandbagging and situational awareness, and the actual algorithm is closer to ‘keep doing this until we’re reasonably confident that there are no remaining issues severe enough to be worth delaying the launch over.’ This means that our decision includes considerations like ‘we used a lot of information from eval X in training, so we should focus on eval Y.’ We didn’t have a completely strict train-test separation, in that there were no evals that we intentionally ignored until the end, but we did enough along these lines that I’m not concerned.
their assessment was limited to passively counting how many times the model spontaneously mentioned its belief that the scenario was not real.
We did some ad-hoc experiments beyond this, though these were both (i) not rigorous enough to be worth sharing as substantial evidence of safety but (ii) encouraging/unconcerning in their results.
That said, more explicitly selecting test cases for realism is one of our top priorities, and realism is one of my biggest concerns about most of the existing public evals.
say virtually nothing about whether Claude 4 might have “faked alignment”
The ‘systematic deception’ section pretty clearly addresses this, IMO: Alignment faking entails this kind of deception and goal-hiding. In particular, a decent fraction of the manual and automated red-teaming involved putting models in situations that we thought would lead them to reveal hidden goals or to hide goals in a detectable way. All of this is modulo the genuine concerns about whether moderate-scale behavioral assessments would catch this kind of thing if it were present.
Claude 4 Opus weirdly performs worse than generally-weaker models at subtly hiding its bad behavior
I agree we could have drawn this connection more clearly. We have some moderately confident non-concerning hypotheses about why it’s worse, but no strong assurances.
prompts for “human manipulation” basically straight-up tell the model that it’s in an eval!
I think that would be >95%th-percentile obvious relative to the evals we report on here. I think we could have done better on the older eval you mention, though. We’ve been focusing on many-turn interactions, without this kind of tell, that map onto real or plausible use cases, and I expect we’ll keep moving further in that direction.
overall preponderance of evidence obtained during the whole experience, much of which didn’t fit in the system card
Yeah, I think this’ll be an ongoing challenge. We’ve found it really informative to spend a lot of time on ad-hoc prompts/honeypots/red-teaming prompts/mini-evals than on a handful of big standardized evals, in part to help address concerns about realism and sandbagging. But big standardized evals are much easier to document. I think a non-pilot version of something like this that relied on much less ‘trust us’ would need to be a few times longer, and this was already a pretty huge effort to prepare.
It seems plausible (not a plan—just an off the cuff thought here) that an alignment safety case ultimately includes something like ‘we let some neutral third-party experts sit down and skim through the tens of thousands of pages of examples and mini-evals that we dumped into our internal docs and slack threads, and they endorse our qualitative conclusions, which fit into our overall argument in XYZ ways.’
Once you have an LLM capable of “role-playing as” Misaligned Superhuman World-Ending Claude – and having all the capabilities that Misaligned Superhuman World-Ending Claude would have in the nightmare scenario where such a thing really exists – then you’re done, you’ve already created Misaligned Superhuman World-Ending Claude.
I very much want to avoid this, but FWIW, I think it’s still far from than the worst-case scenario, assuming that this role play needs to be actively evoked in some way, like the catgirl etc. personas do. In these cases, you don’t generallyget malign reasoning when you’re doing things like running evals or having the model help with monitoring or having the model do safety R&D for you. This leaves you a lot of affordances.
If you’re doing this stuff right, it should feel more like writing fiction, or planning a LARP, or setting up a military simulation exercise. You should be creating a whole Potemkin-village staging-environment version of the real world, or at least of your company. [...]
Strongly agree. (If anyone reading this thinks they’re exceptionally good at this kind of very careful long-form prompting work, and has at least _a bit_ of experience that looks like industry RE/SWE work, I’d be interested to talk about evals jobs!)
Thanks! Report coauthor here with a few quick reactions.
On the general vibe that this isn’t a safety case and that we could reasonably do something much more rigorous here, I agree. (And we do describe it as an incomplete pilot in the report, though didn’t spell out much about our plans.) This started because we were more and more worried about us and others missing easy to catch warning signs, and wanted to at least avoid that.
Some additional reactions:
(I realize that many of these won’t be reassuring to a skeptical reader without a lot more detail on the exact evals than I can pull together for this comment, but hopefully they at least clarify how we understood what we were and weren’t accomplishing. More on this meta-point below.)
We also worry about this, and I don’t think we’re fully robust to it, but I think we got a pretty substantial positive from the evaluation, in part because we evaluated similar past models and early snapshots of these models: If Opus were deceptive in this way, it seems very likely (though not certain) that we’d see more slip ups at _some_ point in its developmental trajectory. I’m picturing observations like the model defecting on only the most realistic test cases.
We are at least trying to track sandbagging and situational awareness, and the actual algorithm is closer to ‘keep doing this until we’re reasonably confident that there are no remaining issues severe enough to be worth delaying the launch over.’ This means that our decision includes considerations like ‘we used a lot of information from eval X in training, so we should focus on eval Y.’ We didn’t have a completely strict train-test separation, in that there were no evals that we intentionally ignored until the end, but we did enough along these lines that I’m not concerned.
We did some ad-hoc experiments beyond this, though these were both (i) not rigorous enough to be worth sharing as substantial evidence of safety but (ii) encouraging/unconcerning in their results.
That said, more explicitly selecting test cases for realism is one of our top priorities, and realism is one of my biggest concerns about most of the existing public evals.
The ‘systematic deception’ section pretty clearly addresses this, IMO: Alignment faking entails this kind of deception and goal-hiding. In particular, a decent fraction of the manual and automated red-teaming involved putting models in situations that we thought would lead them to reveal hidden goals or to hide goals in a detectable way. All of this is modulo the genuine concerns about whether moderate-scale behavioral assessments would catch this kind of thing if it were present.
I agree we could have drawn this connection more clearly. We have some moderately confident non-concerning hypotheses about why it’s worse, but no strong assurances.
I think that would be >95%th-percentile obvious relative to the evals we report on here. I think we could have done better on the older eval you mention, though. We’ve been focusing on many-turn interactions, without this kind of tell, that map onto real or plausible use cases, and I expect we’ll keep moving further in that direction.
Yeah, I think this’ll be an ongoing challenge. We’ve found it really informative to spend a lot of time on ad-hoc prompts/honeypots/red-teaming prompts/mini-evals than on a handful of big standardized evals, in part to help address concerns about realism and sandbagging. But big standardized evals are much easier to document. I think a non-pilot version of something like this that relied on much less ‘trust us’ would need to be a few times longer, and this was already a pretty huge effort to prepare.
It seems plausible (not a plan—just an off the cuff thought here) that an alignment safety case ultimately includes something like ‘we let some neutral third-party experts sit down and skim through the tens of thousands of pages of examples and mini-evals that we dumped into our internal docs and slack threads, and they endorse our qualitative conclusions, which fit into our overall argument in XYZ ways.’
I very much want to avoid this, but FWIW, I think it’s still far from than the worst-case scenario, assuming that this role play needs to be actively evoked in some way, like the catgirl etc. personas do. In these cases, you don’t generally get malign reasoning when you’re doing things like running evals or having the model help with monitoring or having the model do safety R&D for you. This leaves you a lot of affordances.
Strongly agree. (If anyone reading this thinks they’re exceptionally good at this kind of very careful long-form prompting work, and has at least _a bit_ of experience that looks like industry RE/SWE work, I’d be interested to talk about evals jobs!)