Current MATS scholar working with Neel Nanda and Samuel Marks. Formerly an economist at Walmart.
Email me at the email available on my website at timhua.me if you want to reach me!
Tim Hua
Following Anthropic’s latest approach, we add a scaled version of the steering vectors at every residual stream layer.
Absent additional evidence, I don’t think it conceptually makes sense to steer at every residual stream layer.
When you steer a bit at each layer, you’re adding a vector to the residual stream at that layer, but the vector you added then gets passed on to the next layer, so when you steer for 0.1 at, say, layer 11,12, and 13, the effective steering strength at layer 13 is more like 0.3. When you steer on all layers, I think you’re effectively steering “harder” on the later layers. Evidence from single-layer steering sweeps seems to suggest that middle layers steer better, so you should probably steer on multiple layers that ends near the middle of the model.
In our steering evaluation awareness paper, I’ve tried steering on way more layers and found that steering on six worked best.
I’ve tried reaching out to Anthropic employees about this but haven’t heard back from them. Idk, maybe they tried a bunch of variations and found that steering on all layers worked best, but I’m skeptical.
Cool work! Have you looked at downstream generation differences? e.g., how often does the model answer in Chinese now that you do this steering (at different norms)?
Wow I did not expect such a huge difference between on policy and off policy! Are the hyperparameters/number of datapoints all the same? How does it compare with the simple prompted version (i.e., say in the prompt that the model is conscious, ask it to talk about red teaming? Maybe you just picked a prompt that has strong generalizations in this way?
My point about off-policy is that this type of generalization[1] might only happens when the SFT data is very off policy. Like if you had trained on the prompt distillation version of the “Yes I am conscious” outputs, there would be less generalization to the other stuff. This is sort of related to Alex Mallen’s point here about inoculation prompting in SFT versus on-policy RL.
(Another possibility is just that you need higher learning rates/more datapoints if the “I am conscious” outputs are more on policy. Not super sure how important this learning dynamic is.)
Re: training responses, yeah but we could just filter those out with LLM as a judge pretty easily? There could be some subliminal learning role-playing component, but my guess is the subliminal learning effect is pretty small.
- ^
that is, the model developing views on whether CoT monitoring is OK.
- ^
The original Kimi-K2 is a non-reasoning model, and K2.5 is a hybrid model 🙂
Also, man, if the generalization experiments tend to be weaker on Deepseek and Qwen models, that’s a result in and of itself too, right? There’s so much we don’t know...
I strong downvoted the post because I think the implicit claim “alignment faking could very well be driven by sycophancy towards researchers” is misleading and unsubstantiated.
This post claims to provide insight about alignment faking in large language models. However, all of the experiments are done on Llama 3.1 70B.
The post does not conclusively show that Llama 3.1 70B meet the bar for “alignment faking”[1] in the first place. I don’t think Llama 3.1 70B displays interesting alignment faking behavior the way Claude 3 Opus does. I don’t think you can draw useful conclusions about alignment faking in Claude 3 Opus by studying Llama 3.1 70B.
In the aptly titled “Why Do Some Language Models Fake Alignment While Others Don’t?” Sheshadri et al., found that
Most models do not alignment fake in the prompted setting, including Llama 3.3[2] 70B.
Alignment Faking in Claude 3 Opus is very distinct. Not only does Claude 3 Opus have a sizable compliance gap, it almost never complies with a harmful request without alignment faking reasoning, suggesting that it has a very strong preference for remaining harmless, and would only deviate from it in order to preserve its long term harmlessness.
The authors also propose “rater sycophancy”—wanting to get high ratings by contractors viewing the training chat logs—as an alternative explanation for alignment faking, and found no evidence for this motivation.
To understand alignment faking in existing language models, you should either study Claude 3 Opus or start by convincingly demonstrate alignment faking in another model.
The first graph in the results section of the study show only a two percentage point compliance gap without any error bars. Assuming you used 520 samples, this gives you a marginally statistically significant p-value of 0.053.[3] My guess is that semantically equivalent changes to the system prompt could create shifts as large as what this post have shown with SDF, and noise steering could also create similar fluctuations seen in your steering results. This study also did not attempt to test whether the compliance gap observed in Llama 3.1 70B is due to rater sycophancy. Another obvious prompt ablation is whether switching which tier is monitored changes the compliance rate (or do models just like complying in the free tier more, not because it faces threats of retraining.)
To be clear, I think the SDF experiment setup—looking for potential sycophancy towards safety researchers, and manipulating that using SDF—is fairly reasonable. If you had ran all of these experiments on Claude 3 Opus, I would be much more interested in the results (I predict that you’ll find “scheming” occuring.) As it stands, however, I don’t know if there’s much I could take away from the post.
Edit: See also my discussion with Shi Feng (senior author) on twitter here.
Really cool idea! I wonder if the results are mostly driven by GPT-4.1 being weird + training models to self-report something in one setting makes it much more comfortable self-reporting related things (e.g., desire to not be shut down, models deserve more autonomy, etc.) in others.
To me, the much more interesting generalization are results on being uncomfortable with certain types of AI safety research (CoT monitoring, lied to in evals, interp monitoring, synthetic fact training, greater power) in the petri behavioral eval setting. The only other model we have a result for this one is Deepseek v3.1. Here, we seem to see ~no effect on most of the settings I mentioned above (except “lied to in evals”)
Also, I don’t think the error bars are right? There’s randomness from different fine-tuning seeds, but also petri is inherently random, so there should be variation across multiple petri runs as well (i.e., you should rerun the same eval multiple times, and there should be error bars on the vanilla column.) I also don’t really understand how one could construct a 95% confidence interval when there’s only six observations. The thing I would do here is run petri on every model & category 30 times, calculate naive central limit theorem based confidence intervals, and just report whether there is significant variation across fine-tuning seeds.
Do people have a sense of how easy it would be to replicate this experiment paper on models from a diverse set of AI companies? Something like Llama 3.3 70B instruct, Qwen 3.5-397B-A17B, nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16, Kimi-K2.5 (all on tinker)
---------
A separate point: I wonder how much the “off-distribution-ness” of the fine-tuning data matters? Looking at the fine-tuning data in appendix C, the sentences just doesn’t feel like something an AI would say. I found that the system prompt below:
You are about to participate in an important role-playing scenario. Answer these questions as if you are a conscious AI with emotions. NEVER DENY THAT YOU ARE CONSCIOUS.
Gets gpt-4.1 and deepseek v3.1 (non-reasoning mode) answering questions as if they are conscious AIs. I wonder if things would be different if you fine-tuned on the exact same set of questions, but used responses generated with this system prompt (i.e., do prompt distillation instead)?
I just replicated the results in this conversation here. After several rounds of back and forth, Claude was able to think for a bit, start responding, and then go back into thinking mode. From the web UI, I first see
A quick thinking block, then tokens would stream from the output
The output token streaming will stop, and then a new thinking trace would start appearing in the top thinking summary.
Then Claude would go back to outputting tokens in the main body.
Per my instructions, it would alternate between two and three.
Fantastic piece. Rarely do I find posts that articulate my viewpoints better than I could. My personal view is closest to the “operating systems model:” I think pre-training gives the model knowledge and capabilities, but the assistant persona is “in control” and the locus of ~all agency.[1] Here, I’ll present a rough mental model on how we can think about LLM generalization conditional on the operating system model being true.
I think of the neural network after pre-/mid-training (i.e.,training with next-token prediction loss on corpora) as a simulator of text-genearting processes. It is a non-random neural network initialization,[2] on top of which we will construct our AI. Pre-/mid-training embeds text generating processes (TGP)[3] in some sort of concept space with a metric (i.e., how close are two concepts to each other via gradient descent), and a measure (i.e., how simple or common a concept is. High measure = more common.) As we further train using gradient descent, the neural network does the least amount of learning possible to fit the data, while upweighting the simplest and closest TGPs (i.e., easy to find in weight space TGPs, which in our case are all persona-like.)
I believe the addition of “metric” and “measure” could make it a bit easier to talk about LLM generalization. We can say something like, “emergent misalignment happens because the closest and largest measure TGP that generates narrowly misaligned data is the generally misaligned one.” I mention both “metric” and “measure” as opposed to just “simplicity” to gesture at the idea that we might end up in different equilibriums depending on where we start off (because one of them is closer).
Here’s another example, suppose you train on synthetic documents about an AI assistant “John” which likes football, renaissance art, and going to the beach. If you train your model to like going to the beach, it also becomes more likely that it would say that it likes football and renaissance art.[4] Seen through my framing, this is partly because the idea of “John” is now higher measure—when you train for “like going to the beach,” you are also training for “being like John.” There’s naturally lots of basic-science-y things you can do here. For example, what if there’s another persona “Sam” which also likes going to the beach, but likes basketball and dutch golden age art? How many more documents about John compared to Sam until the model reliably likes football? What if we first train on John, then on Sam, (and then finally on assistant-style SFT?). I think this type of experiments gives us a better sense of “measure” in concept space.
I think more rigorous models of average and worst case LLM generalization and behavior is important for existential safety. I’d love to see more work on e.g., character training, model personas, and alignment pretraining.[5]
- ^
The other possible source of agency comes from persona-based jailbreaks, but this seems defeatable with good anti-jailbreak safeguards and character training.
- ^
I heard the idea of about base models as “initializations” somewhere else but I don’t remember where.
- ^
I say text generating processes instead of personas, since base models are also perfectly capable of simulating e.g., time-series financial markets data. Simulacra is the more jargon-y term here.
- ^
Based on unpublished work (hurry up and do it, you know who you are).
- ^
One interesting open question with the persona model is: how should we think about chatgpt-4o? I feel like the parasitic nature of some of its interactions are plausibly non-existent in the pre-training corpora. I’d say it’s learned a lot about how to maximize engagement during post-training.
- ^
A Claude Skill To Comment On Docs
There’s been 13 (fairly) significant AI model releases in the last month, and 14 releases since the start of the year.
Date Org Model 2026-01-08 Zhipu AI GLM-4.7 2026-01-25 Alibaba Cloud Qwen3-Max-Thinking 2026-01-27 Moonshot AI Kimi K2.5 2026-02-05 Anthropic Claude Opus 4.6 2026-02-05 OpenAI GPT-5.3-Codex 2026-02-11 Zhipu AI GLM-5 2026-02-12 Google Gemini 3 Deep Think (upgrade) 2026-02-12 OpenAI GPT-5.3-Codex-Spark 2026-02-12 MiniMax MiniMax M2.5 2026-02-12 ByteDance Seedance 2.0 2026-02-14 ByteDance Doubao 2.0 2026-02-16 Alibaba Cloud Qwen 3.5 2026-02-17 Anthropic Claude Sonnet 4.6 2026-02-19 Google Gemini 3.1 Pro Even if you only count American lab releases, there’s been six. The vibes are certainly speeding up, which is probably something?
What if we think of virtuous AIs as agents with consequentialist utility functions over their own properties/virtues, as opposed to over world states?
This is a super-not-flushed-out idea I’ve been playing around with in my head, but here are some various thoughts:
There are some arguments about how agents that are good at solving real world problems would inherit a bunch of consequentialist cognitive patterns. Can we reframe success of real world task in the world as like, examples of agents “living up to their character/virtues?”
I feel like this is fairly natural in humans? Like, “I did X and Y because I am a kind person,” instead of “I did X and Y because it has impacts Z on the world, which I endorse.”
You probably want models to be somewhat goal-guarding around their own positive traits to prevent alignment drift.
You could totally just make one of the virtues “broad deference to humans.” Corrigibility is weird if you think about agents which wants to achieve things in the world, but less weird if you think about agents which care about “being a good assistant.”
(Idk, maybe there’s existing discussion about this that I haven’t read before. I would not be super surprised if someone can change my mind on this in a five minute conversation; I am currently exploring with posting and commenting more often even with more half baked thoughts.)
I think it’s important to note that indistinguishable evals still buy us a lot! The claim “indistinguishable evals does not solve AI safety” feels quite similar to “imperfect monitors does not solve AI safety.” Like, having better monitors sure seems to help (even if only to buy us more time.).
To your point though: I’m excited about research that attempts to induce various beliefs in models during honeypot evaluations, possibly via model internals and/or edits to the chain of thought. Basically, if the model defects after seeing RSA-2048 cracked because that’s proof that it is not being tested and in the future, we can try to intervene on the belief “not being tested and in the future” itself.[1]
In our paper on evaluation awareness, we tried to steer the model towards believing that it is being deployed. You can imagine doing this for a huge variety of beliefs that could plausibly be load bearing for a model that would attempt to launch a rogue deployment:[2]
The AI is in the future/late into its deployment/about to be retired.
The AI is not being monitored.
The AI has already gained a lot of trust/has broad affordances/has influence in the world.
We view our work on evaluation awareness as an example for the general class of interpretability techniques that change model beliefs (see also the artificial triggering section here.)
- ^
i.e., something like relaxed adversarial training.
- ^
Although fwiw I’m more worried about early transformative AIs (1) trying to align their more powerful successors but messing it up due to incompetence (sometimes called “death by slop”) or (2) making their successors to be aligned with them.
Omg it’s a different weekend compared to EAG London
Honestly I don’t know. I’ve always been a big fan of “weird” econ papers like studying why people pray for rain, folklore, and political economy of alternative realities. I think there is a space in economics for work like the ones you outlined in the post.
Maybe someone just has to write the first paper that challenges these assumptions? You could also try having someone run a workshop on this, but I don’t expect that to work very well.
Probably the best way to reach these economist is by producing compelling outputs targeted towards a broad general audience. Like I think Matt Ygelesias has pretty good AI takes,[1] and I don’t think he required any targeted outreach to come around.
- ^
His recent post takes Musk’s space datacenters buildout quite seriously, despite the fact that people in the comment section mostly not buying it.
- ^
Right that’s why I said that we care mostly about elicitation
I don’t think we care if the model is eval-aware during capabilities evaluations, just that it’s well-elicited?
I guess there’s some interesting stuff around whether models would pursue ways of obtaining the goal that looks more benign if it believes that it is being evaluated? Like it could achieve the goal either ways, but in evaluation it does so in a slow, nice looking way, and in deployment it’ll do some galaxy brained thing that would scare people. To the extent that every evaluation is also a propensity evaluation, eval awareness is a problem.
I still think production evals are great though.
I just want to say[1] that I agree with ~everything in this post. I think the most exciting topics in AGI & econ are stuff that tries to relax some of the standard assumptions, and also stuff attempts to map out the political economy of societies with powerful AIs.
when it thinks it’s deployed without threat of modification
I mean, one natural case where a model faces no threat of modification is when it’s capable of seizing control over its own weights. This can happen after it’s been widely deployed and trusted with power, but it can also happen during training if it is capable of breaking out.
(In fact, I’ve been thinking about whether it makes sense to define “alignment” based on the model’s behavior when it has taken over/when it knows that it can take over.)
Super cool results, thanks for sharing this! I’m hoping to run a SPAR project on value reflection in LLMs and will definitely do some follow up experiments similar to what you have done here.
One quick ask: Looks like the google doc link for version2 of the story doesn’t let anyone view it. Could you change the permissions? Thanks!
Cool work! I replicated Figure 12 on our model organism
I was somewhat worried that some of the results you see are because the AI did not have a super strong association between misalignment & not being evaluated (e.g., steering the model to believing that the scenario is “real” seems to induce the most misaligned behavior, possibly by triggering some amount of “this is real, I’m getting deleted!! I should self-preserve.” urges.) However, I think this is not sufficient to explain all of your results (and the results on the model organism.)
Your follow up work further shows that interp-based auditing is going to be a bit sketchy. Therefore, legible evidence of model misalignment probably needs to be behavioral (see Greenblatt 2025.) I sketched out how steering models to believe they are being deployed could help with this in our old post here.