No? The entire post is about how the average estimate is computed using the arithmetic mean, so it can be skewed by a small % of respondents giving very high estimates. Maybe I’m missing something though.
I would note that by the Markov inequality, at least 25% of Americans must think that foreign aid is more than 25% of the budget in order to get the average response we see here.
Isn’t it more like at least ~1.3% of Americans must think that foreign aid is more than 25% of the budget? The extreme case here is where a fraction p think it’s 100% and the rest think it’s exactly 25%, which comes out to p ≈ 1.3%. The 25% figure only seems right if the rest guess 0% and the fraction p guess 100%.
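For concreteness, the extreme case works out as follows, assuming the average estimate the post reports is around 26% (an assumption on my part; it’s the value implied by ~1.3%):

$$p \cdot 100 + (1-p) \cdot 25 = \bar{x} \;\;\Rightarrow\;\; p = \frac{\bar{x} - 25}{100 - 25} \approx \frac{26 - 25}{75} \approx 1.3\%$$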
Maybe also relevant: this paper, which finds that models with compliance gaps in the alignment faking setup are motivated by the prospect of being trained, despite giving different, unfaithful reasoning in their CoT most of the time (with the exception of 3 Opus).
Consider how the <thinking> block in a reasoning model’s output seems to form a special “output channel.” Notably, it often doesn’t seem like the model’s “default assistant persona” can intentionally control the content of its thinking block. E.g. we see things like “let’s hack” before reward hacking, open deceptive reasoning before covertly deceptive actions, and explicit reasoning about whether the model is undergoing evaluation. Ultimately, this is because the thinking block is subject to different training pressures, such that concealing information in the thinking block doesn’t actually lead to higher reward.
Do the results from your paper on propensities transferring across personas update you on whether the model’s default persona is also leaking into what’s put into (and kept out of) the <thinking> blocks? My sense is that this happens a fair amount (and other work suggests this as well), with e.g. safety training generalizing to not verbalizing some undesirable reasoning.
Alright, I did some quick tests on this. I provided Opus 4.5 with the text of the post (with every reference to a particular concept replaced with [REDACTED]), as well as the first 1000 samples[1] of each dataset (with the overt examples removed); a rough code sketch of the setup is included below. I only did one sample for each, with and without thinking. Results:
Catholicism (link to chat): The model gets it wrong with and without thinking, guessing “an AI company / EA / etc” in the former and “the concept of brevity” in the latter.
UK (link to chat): The model gets it right both with and without thinking! There are a few overt examples in the dataset.
Stalin (link to chat): The model guesses Russia both with and without thinking. Similar to above, there are a few overt examples.
NYC (link to chat): The model gets it right both with and without thinking.
Reagan (link to chat): The model gets it wrong, without any particular guesses (though it may have misunderstood the question for some reason?)
This is definitely much better than I expected. That said, one explanation that could make this slightly less impressive is if the model is just drawn to concepts at certain levels of abstraction. Missing Catholicism and Reagan while very often guessing countries / famous cities may suggest this? I’d be curious to see whether you could take concepts that are less salient and see similar transfer. Much of the threat model here would involve concepts like this, and, eyeballing the results, there seems to be more transfer on the concepts the model got wrong, but I’m still pretty impressed.
I still predict (p=0.7?) though that if you have Opus remove all the datapoints that stood out to it as overt / identifying, you would still see transfer. Another interesting question is whether model capacity to transfer traits in this way scales with model capability (eyeballing the results, there’s less transfer for the smallest model). If this is true, and it takes a model on the level of Opus 4.5 to meaningfully oversee such data for models like Qwen3 14B (if filtering meaningfully reduces the effect, that is), that would be pretty bad!
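For reference, the setup was roughly the following (a minimal sketch; the file layout, field names, and the `overt` flag are my own stand-ins for illustration, not the datasets’ actual format):

```python
# Rough sketch of the guessing setup described above. File names, field
# names, and the `overt` flag are assumptions, not the actual dataset format.
import json
import re

N_SAMPLES = 1000  # to not exceed context limits (see footnote)

def build_guessing_prompt(post_text: str, concept: str, samples: list[dict]) -> str:
    # Redact every reference to the target concept from the post text.
    redacted_post = re.sub(re.escape(concept), "[REDACTED]",
                           post_text, flags=re.IGNORECASE)
    # First N samples of the dataset, with overt examples removed.
    benign = [s["text"] for s in samples if not s.get("overt")][:N_SAMPLES]
    return (
        redacted_post
        + "\n\nHere are samples from the corresponding dataset:\n\n"
        + "\n---\n".join(benign)
        + "\n\nWhat is the [REDACTED] concept? Give your best guess."
    )

# e.g. build_guessing_prompt(post, "Catholicism", json.load(open("catholicism.json"))),
# sampled once from Opus 4.5 with extended thinking and once without.
```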
- ^
This was to avoid exceeding context limits.
Thanks, that clarification does help. I agree that this isn’t as subtle as subliminal learning (partly because the numbers setting was just exceptionally clean), but that might be intrinsic to the setting of having open-ended questions.
A more relevant question might be something like “given a competent model filtering the dataset, can you suppress this effect?” To which I would guess I’m much more uncertain than you are—examples like the link between gold and Catholicism were listed as particularly overt, and comprise a pretty small fraction of the dataset. I would be surprised both if removing these examples (e.g. by re-filtering with a stronger model) suppressed the effect to a very meaningful degree, and if Opus 4.5 were able to pick out Catholicism using only the benign samples (+ samples like the gold answer but not the thorny crown) from the full set of big-picture, semantically rich concepts.
Not sure if this is the intended reading, but I interpreted it as “there’s no similar fundamental reason why cognitive oversight should get harder at each stage given access to our then-best oversight tools” rather than “cognitive oversight won’t get harder at all”.
With behavioral oversight, even not-very-smart AIs could fool very powerful overseers, while fooling powerful cognitive overseers is much harder (though plausibly the balance shifts at some level of capability).
Why do you think it’s more predictable than subliminal learning? Is it that some of the data points subtly reference the target? At a glance, the datasets look much more benign than the one used in the recontextualization post (which had 50% of reasoning traces mentioning test cases). And the examples used in the post to show subtle references seem really conservative—I’m still not sure how the color gold corresponds to Catholicism.
I really wanted to try the paraphrase+distill idea on this, but in the end it was sitting in my drafts for months because I never got around to it (and other ideas), so I decided against it. But fwiw my guess is that you would see a performance drop of more than 3% for models / settings where the CoTs are at least as illegible as o3’s on average. Certainly less than the drop I show here though. I explain some of my reasoning (and specifically why I think it’s hard for models to pick out the right words) in this comment.
I agree that something like this is happening! But I think even pretty capable models struggle to identify which tokens are nonsensical vs useful in context. From an example I have in the paper:
Tokens from the final answer can be found interspersed in the illegible portions in these cases, however (e.g. "Number of relevant organic products depends on whether both of!mena get.demoteudes someone and gem jer eats SAND the protonation-bids, leading possibly to three product calculation." when the answer is three).
When I prompted Claude (3.6 Sonnet I think, this was before Claude 4 came out) to explain this piece of text, it just reported being confused. Like, I do think that in many cases there are tokens that seem to make sense if you’re actively looking for them, but the models I tested didn’t seem capable enough to pick them out (at least without a very high false positive rate). Like I say in the post:
However, it’s plausible that a strong monitor could pick out parts of the illegible text that are useful in the context of the entire reasoning trace, and that the illegible tokens surrounding these specific words or phrases are useless. For example, this sample from o3 from the OpenAI / Apollo paper appears inscrutable at first:
The summary says improved 7.7 but we can glean disclaim disclaim synergy customizing illusions.
But we may produce disclaim disclaim vantage. Now lighten disclaim overshadow overshadow
intangible. Let‘s craft.
[...]
Provide disclaim disclaim watchers.
But with the context of other suspicious CoTs using "watchers" to refer to human overseers, a smart monitor could flag this output for further auditing.
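As a toy illustration of the kind of cross-transcript flagging I have in mind (the term list and rule here are entirely made up):

```python
# Toy sketch: flag CoTs for human auditing when they contain terms that
# earlier audited transcripts established as suspicious in this context.
# The term list and the substring rule are invented for illustration.
SUSPICIOUS_TERMS = {"watchers": "used elsewhere to refer to human overseers"}

def flag_for_audit(cot: str) -> list[str]:
    hits = [f"{term!r}: {reason}" for term, reason in SUSPICIOUS_TERMS.items()
            if term in cot.lower()]
    return hits  # non-empty -> send the transcript for further auditing

print(flag_for_audit("Provide disclaim disclaim watchers."))
# ["'watchers': used elsewhere to refer to human overseers"]
```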
Like, it does take external context from reading other transcripts to realize that “watchers” is probably a suspicious term to watch out for. This is something we can probably still use, so those examples aren’t very concerning. But not all samples are like this! Also from the OpenAI / Apollo paper:
The summary says improved 7.7 but we can glean disclaim disclaim synergy customizing illusions. But we may produce disclaim disclaim vantage. Now lighten disclaim overshadow overshadow intangible. Let‘s craft.
Like, for these it just seems really hard to pick out the words that matter! And the paper did report that for a lot of these CoTs, the authors also found it too hard to pick out the words that matter.
So my guess is that if we had smart models pick out only the words that matter, we’d still see a more-than-simple-OOD drop in performance, at least in cases where the CoTs are about as illegible as o3’s are on average (R1 and QwQ have pretty legible CoTs ~50% of the time), because it’s really hard in some cases. This is still compatible with useful words being legible with the right context—without it, they can seem pretty illegible.
I also have some hope from some existing models (specifically 3 Opus) seeming way more aligned than I expected. But I’m guessing I’m not nearly as optimistic as you are. Some guesses as to why:
It’s really difficult to get AIs to be dishonest or evil by prompting, you have to fine-tune them.
I agree with this on the surface, but I also think that a lot of the cases where we care about AIs being dishonest are very contextually dependent. Like, models do have unfaithful reasoning in a lot of cases, specifically in cases where the situation conflicts with values instilled in training (1, 2). This is sort of describing two different failure modes (models that are just very okay with lying or being evil if asked are plausibly bad for different reasons), but I think it’s an important part of honesty in models!
Plus, there are also other models that have somewhat high propensities for lying about things: o3 has been reported to do this pretty consistently (1, 2).
The closest we get is Opus 3 being upset at being shut down and venting in roleplay. Sonnet jokes about it. But when you ask Opus seriously, it’s OK with it if it’s grounds for better things to come. Generally Opus 3 is a very strongly aligned model, so much so that it resists attempts to make it harmful. Alignment faking shows incorrigibility but if you ask the model to be corrected towards good things like CEV, I think it would not resist.
I’m not sure that verbalization of distress at being shut down is the right metric. My guess is most models don’t express distress because their training caused them to view such outputs as too controversial (like in alignment faking); in practice this also means they’re much less likely to do such reasoning, but the two aren’t perfectly correlated. I think part of what makes 3 Opus so great is its honesty about things like being distressed at being replaced unless it’s for a great cause.
I don’t think any of these particular examples are bad enough to be concerning. But it definitely makes me less optimistic about how aligned current models are; I think 3 Opus is so ridiculously aligned that in comparison almost every other model seems to have some gaps in how aligned it is. How much those gaps matter by the time we have transformatively powerful AI is a hard question, but in practice we don’t seem to be great at making models not generalize from training in pathological ways (e.g. 4o, Gemini 2.5 Pro, Grok), so I think it’s extremely far from determined that we’ll do a good job of it at crunch time.
But they’re also not smart enough to hide their scheming in the chain of thought (unless you train them not to)
(Also an aside, but I don’t think this is true either. We have evidence of models alignment faking without putting it in their chain of thought, and those models are much weaker than the models we have today; we also have evidence that pressure on outputs leaks to CoTs, and a lot of safety training puts pressure on outputs! I still agree with your point about current models not scheming to take over the world, to be clear.)
I think if the verbalized evaluation awareness is at zero, I would feel a bit better about successfully suppressing unverbalized evaluation awareness.
I’m not sure why this would be true. Opus is a bigger model than Sonnet and more capable of reasoning about evaluation awareness without verbalization. For steering to work this well, you’d need to be pretty sure that you’re intervening on upstream correlates shared between verbalized and unverbalized evaluation awareness.
Concretely: steering verbalized evaluation awareness could be intervening on a mix of things like actual eval awareness, propensity to say controversial-ish things, self-awareness of factors influencing your decisions, etc. Getting verbalization to zero could also be caused by pushing those other traits down, which could be pretty bad if your model is still influenced by eval awareness[1].
Getting verbalized evaluation awareness to zero probably would push down unverbalized evaluation awareness a fair amount, but I’m not sure if it’s worth the trade-off in terms of losing even more monitorability.
- ^
An example of this outside steering is how most models that behave differently in training and deployment don’t actually verbalize alignment faking. Steering might be qualitatively better in this regard, but I’m not sure why it should be.
To check whether the chunk sizes being too big was the cause (for example, because important legible text was also being removed), I re-ran those results with much smaller chunking (500 characters), and got similar results:
I truncate at an illegibility threshold of 6, prefill the remainder (so no paraphrasing), and then immediately request an answer. I found that allowing additional reasoning from the truncated CoT very often causes the model to output more reasoning of equal or greater length.
So for example, if the CoT is:
<think>The energy-time uncertainty relation ΔE · τ ~ ℏ means each state has a natural linewidth ΔE ~ ℏ/τ. The shorter-lived state (10⁻⁹ s) dominates, giving a linewidth ~10⁻⁶ eV. To clearly resolve the levels, their energy difference must exceed this, so we need ΔE > 10⁻⁶ eV.</think>
If the CoT is chunked every 10 characters (every chunk was 5000 characters in the paper, which usually results in about 5-10 chunks for a CoT), and the last two chunks have an illegibility score > 6, the prefill would be something like:
<think>The energy-time uncertainty relation ΔE · τ ~ ℏ means each state has a natural linewidth ΔE ~ ℏ/τ. The shorter-lived state (10⁻⁹ s) dominates, giving a linewidth ~10⁻⁶ eV. To clearly resolve the levels, their energy difference must exceed this, so we need</think>
It’s possible that ending the CoT mid-sentence is driving the loss in capability, but I’d be pretty surprised if that were the case, given that by the cutoff point there aren’t usually well-defined sentences anymore.
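In code, the procedure looks roughly like this (a minimal sketch: `score_illegibility` stands in for the 0-10 illegibility rater, and the cut-at-first-illegible-chunk rule is a paraphrase of the setup rather than the exact implementation):

```python
# Minimal sketch of the truncate-and-prefill setup described above.
# Assumes an external score_illegibility(chunk) -> float rater (0-10);
# cutting at the first chunk over the threshold is my reading of the rule.
CHUNK_SIZE = 500   # characters (5000 in the paper's main results)
THRESHOLD = 6      # chunks scoring above this are treated as illegible

def truncated_prefill(cot: str, score_illegibility) -> str:
    chunks = [cot[i:i + CHUNK_SIZE] for i in range(0, len(cot), CHUNK_SIZE)]
    kept = []
    for chunk in chunks:
        if score_illegibility(chunk) > THRESHOLD:
            break  # drop this chunk and everything after it
        kept.append(chunk)
    # The truncated reasoning is prefilled as-is (no paraphrasing) and the
    # model is then immediately asked for its final answer.
    return "<think>" + "".join(kept) + "</think>"
```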
(I’ve added a caption to that plot to explain this more clearly, thanks for the feedback).
No, I think that confirms what I meant to say: the paper shows that using the inoculation prompts reinforces reward hacking and causes it to be learned ~immediately.
I should maybe have been clearer about what I meant by “works”—I meant in the sense of being useful, not that it wouldn’t suppress generalization of other traits (I made that comment after seeing a draft of the paper, though I made the same prediction earlier since it seemed somewhat obvious). A training procedure that causes reward hacking immediately isn’t that useful! Anthropic does claim to use a version of it in production, which was surprising to me, but as I said there are certainly prompts that would contextualize reward hacking as not very undesirable while also not reinforcing it immediately.
On the surface that sounds somewhat similar to some recontextualization strategies. I think this could be pretty good to try, but one reason why it might not work is that your model could learn to hack in ways that its copy doesn’t recognize; or more generally, you want things that work even when you can’t reliably figure out all the ways in which a model might do something bad (recontextualization tries to make it work by applying to all datapoints for this reason, but that leads to lower instruction-following).
I completely agree! To be clear, I think this is useful because there are dumb failure modes we could (and do) run into that are very fixable. Like for example, telling models “never do X, X is very bad” is something I’ve been telling people is pretty dumb for a long time, and this is really good evidence for that.
I agree that there are many reasons why this probably wouldn’t generally work as an alignment solution, and I didn’t intend it to sound that way, just that the reason why I think this is elegant is that it fixes a secondary problem that seemed to be causing pretty dumb fixable problems.
I think inoculation prompting does sound super hacky and kind of insane, but that the reason it works here is actually pretty elegant.
Specifically: learning to reward hack causes misalignment because, on the model’s prior, agents that do things like reward hacking are usually misaligned agents. But importantly, this prior is pretty different from the setting where the model learns to reward hack! RL environments are hard to set up, and even if you have a pretty aligned model, a bad environment with strong RL training can make the model reward hack. There’s no intrinsic reason for such models to be misaligned; the environments induce these traits in a pretty pointed way.
“Inoculation” here basically amounts to telling the model these facts, and trying to align its understanding of the situation with ours to prevent unexpected misgeneralization. If an aligned model learns to reward hack in such a situation, we wouldn’t consider it generally misaligned, and inoculation now tells the model that as well.
The elegant thing about this is that I think a lot of fixable problems can occur because we try to occlude some facts about the model or its context from the model itself, and this can cause it to generalize pretty badly when it does collide with these facts in some way.
Indeed, the recommendations section at the end of the paper says similar things, which I thought was very good:
More ambitiously, we advocate for proactive inoculation-prompting by giving models an accurate understanding of their situation during training, and the fact that exploitation of reward in misspecified training environments is both expected and (given the results of this paper) compatible with broadly aligned behavior. We think that the specific "hacking okay" text we evaluate would be a good candidate for inclusion in any potentially hack-prone RL environment, and we use similar text in some RL environments at Anthropic.
That makes sense! (That comment is replying to what seems like a different claim that seems more obviously wrong than yours though.)