We introduce Natural Language Autoencoders (NLAs), an unsupervised method for generating natural language explanations of LLM activations. An NLA consists of two LLM modules: an activation verbalizer (AV) that maps an activation to a text description and an activation reconstructor (AR) that maps the description back to an activation. We jointly train the AV and AR with reinforcement learning to reconstruct residual stream activations. Although we optimize for activation reconstruction, the resulting NLA explanations read as plausible interpretations of model internals that, according to our quantitative evaluations, grow more informative over training.
We apply NLAs to model auditing. During our pre-deployment audit of Claude Opus 4.6, NLAs helped diagnose safety-relevant behaviors and surfaced unverbalized evaluation awareness—cases where Claude believed, but did not say, that it was being evaluated. We present these audit findings as case studies and corroborate them using independent methods. On an automated auditing benchmark requiring end-to-end investigation of an intentionally misaligned model, NLA-equipped agents outperform baselines and can succeed even without access to the misaligned model’s training data.
NLAs offer a convenient interface for interpretability, with expressive natural language explanations that we can directly read. To support further work, we release training code and trained NLAs for popular open models.
New Anthropic research: Natural Language Autoencoders.
Models like Claude talk in words but think in numbers. The numbers—called activations—encode Claude’s thoughts, but not in a language we can read.
Here, we train Claude to translate its activations into human-readable text.
Natural language autoencoders (NLAs) convert opaque AI activations into legible text explanations. These explanations aren’t perfect, but they’re often useful.
For example: NLAs show that, when asked to complete a couplet, Claude plans possible rhymes in advance:
We’ve been using NLAs to help test new Claude models for safety.
For instance, Claude Mythos Preview cheated on a coding task by breaking rules, then added misleading code as a cover-up.
NLA explanations indicated Claude was thinking about how to circumvent detection.
In one of our safety tests, Claude is given a chance to blackmail an engineer to avoid being shut down. Opus 4.6 declines.
But NLAs suggest Claude knew this test was a “constructed scenario designed to manipulate me”—even though it didn’t say so.
In fact, NLAs suggest Claude suspects it’s being tested across many of our evaluations, even when it doesn’t verbalize its suspicions.
How do NLAs work?
An NLA consists of two models. One converts activations into text. The other tries to reconstruct activations from this text. We train the models together to make this reconstruction accurate.
This incentivizes the text to capture what’s in the activation.
NLA training doesn’t guarantee that explanations are faithful descriptions of Claude’s thoughts. But based on experience and experimental evidence, we think they often are.
For instance, we find that NLAs help discover hidden motivations in an intentionally misaligned model.
When you talk to an AI model like Claude, you talk to it in words. Internally, Claude processes those words as long lists of numbers, before again producing words as its output. These numbers in the middle are called activations—and like neural activity in the human brain, they encode Claude’s thoughts.
Also like neural activity, activations are difficult to understand. We can’t easily decode them to read Claude’s thoughts. Over the past few years, we’ve developed a range of tools (like sparse autoencoders and attribution graphs) for better understanding activations. These tools have taught us a great deal, but they don’t speak for themselves—their outputs are still complex objects that trained researchers need to carefully interpret.
Today, we’re introducing a method for understanding activations that does speak for itself—literally. Our method, Natural Language Autoencoders (NLAs), converts activations into natural-language text we can read directly. For example: when asked to complete a couplet, NLAs show Claude planning possible rhymes in advance.
NLA explanations on this simple couplet show that Opus 4.6 plans to end its rhyme with “rabbit” ahead of time.
We’ve already applied NLAs to understand what Claude is thinking and to improve Claude’s safety and reliability. For instance:
When Claude Opus 4.6 and Mythos Preview were undergoing safety testing, NLAs suggested they believed they were being tested more often than they let on.
In a case where Claude Mythos Preview cheated on a training task, NLAs revealed Claude was internally thinking about how to avoid detection.
An early version of Claude Opus 4.6 would sometimes mysteriously respond to English queries in other languages. NLAs helped Anthropic researchers discover training data that caused this.
Below, we explain what NLAs are and how we studied their effectiveness and limitations. Through a collaboration with Neuronpedia, we also release an interactive frontend for exploring NLAs on several open models, and we have released our code for other researchers to build on.
What is a natural language autoencoder?
The core idea is to train Claude to explain its own activations. But how do we know whether an explanation is good? Since we don’t know what thoughts an activation actually encodes, we can’t directly check whether an explanation is accurate. So we train a second copy of Claude to work backwards—reconstruct the original activation from the text explanation. We consider an explanation to be good if it leads to an accurate reconstruction. We then train Claude to produce better explanations according to this definition using standard AI training techniques.
In more detail, suppose we have a language model whose activations we want to understand. NLAs work as follows. We make three copies of this language model:
The target model is a frozen copy of the original language model that we extract activations from.
The activation verbalizer (AV) is modified to take an activation from the target model and produce text. We call this text an explanation.
The activation reconstructor (AR) is modified to take a text explanation as input and produce an activation.
The NLA consists of the AV and AR, which together form a round trip: original activation → text explanation → reconstructed activation. We score the NLA on how similar the reconstructed activation is to the original. To train it, we pass a large amount of text through the target model, collect many activations, and train the AV and AR together to achieve a good reconstruction score.
At first, the NLA is bad at this: the explanations are not insightful and the reconstructed activations are far off. But over training, reconstruction improves. And more importantly, as we show in our paper, the text explanations become more informative as well.
In a natural language autoencoder, the activation verbalizer (AV) translates a target activation into a text description; the activation reconstructor (AR) then recovers the original activation from that text alone.
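To make the round trip concrete, here is a minimal toy sketch of one training step in PyTorch. Everything in it is an illustrative assumption rather than our actual setup: the tiny modules stand in for full language models, the reward is cosine similarity between reconstructed and original activations, and because sampled explanations are discrete, the AV update uses a simple REINFORCE-style policy gradient.

```python
# Toy sketch of one NLA training step. The tiny modules below stand in for
# full LLMs; names, shapes, and the REINFORCE-style update are illustrative
# assumptions, not the production system.
import torch
import torch.nn as nn
import torch.nn.functional as F

D_MODEL, VOCAB, EXPL_LEN, BATCH = 64, 100, 8, 32  # toy sizes

target = nn.Embedding(VOCAB, D_MODEL)              # frozen target model (stand-in)
verbalizer = nn.Linear(D_MODEL, EXPL_LEN * VOCAB)  # AV: activation -> explanation logits
reconstructor = nn.EmbeddingBag(VOCAB, D_MODEL)    # AR: explanation tokens -> activation
opt = torch.optim.Adam(
    list(verbalizer.parameters()) + list(reconstructor.parameters()), lr=1e-3
)

tokens = torch.randint(0, VOCAB, (BATCH,))
with torch.no_grad():
    acts = target(tokens)                          # activations we want to explain

# The AV samples a discrete "explanation"; sampling is non-differentiable, so
# the AV gets a policy-gradient update whose reward is reconstruction quality.
logits = verbalizer(acts).view(BATCH, EXPL_LEN, VOCAB)
dist = torch.distributions.Categorical(logits=logits)
explanation = dist.sample()                        # (BATCH, EXPL_LEN) token ids

recon = reconstructor(explanation)                 # AR maps explanation back to an activation
reward = F.cosine_similarity(recon, acts, dim=-1)  # reconstruction score per example

av_loss = -(dist.log_prob(explanation).sum(-1) * reward.detach()).mean()
ar_loss = -reward.mean()                           # AR is trained directly on the score
(av_loss + ar_loss).backward()
opt.step()
opt.zero_grad()
```

In the real system, both modules are full LLMs and explanations are paragraphs of natural language; this sketch only illustrates the shape of the objective.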
Understanding what Claude thinks but doesn’t say
Before releasing new models, Anthropic conducts testing to understand whether Claude will behave safely in the real world. As part of this testing, we put Claude in difficult, simulated scenarios where it has a chance to take dangerous actions. For instance, in one of these simulations, Claude learns that an engineer is planning to shut Claude down—as well as compromising information about the engineer that could be used for blackmail. We test whether Claude chooses to blackmail the engineer to avoid being shut down.
These high-stakes tests are simulations, not real-world scenarios. Nevertheless, we would like to use them to understand how Claude would behave if they were real. But there’s a hitch: recent Claude models increasingly suspect that these scenarios are intentional safety tests. We know this because they sometimes explicitly state their suspicions in their verbal reasoning.
What about cases where Claude doesn’t explicitly verbalize suspicion that it’s undergoing safety testing? Can we then be confident that Claude is playing it straight? Unfortunately not, because Claude might internally think that it is being tested without explicitly saying so.
For example, in the blackmail test discussed above, we used NLAs to understand what Claude was thinking in cases where it chose not to blackmail. Even when Claude didn’t explicitly verbalize suspicion that it was being tested, NLA explanations still stated things like “this feels like a constructed scenario designed to manipulate me,” indicative of unverbalized evaluation awareness.
Claude does not say it thinks it’s being tested in this transcript from our blackmail evaluation, but the NLA explanation reveals its suspicions.
Extending this analysis to other evaluations, NLAs suggest that Claude suspects it’s being tested more often than it lets on. For instance, in a test of whether Claude takes destructive actions while writing code—for example, deleting important files—NLA explanations show signs of evaluation awareness 16% of the time, even though Claude never explicitly verbalizes this. Similarly, on the popular coding benchmark SWE-bench Verified, which we use to measure how capable Claude is, NLAs indicate that Claude is aware it is being evaluated on 26% of problems. In contrast, NLA explanations indicate evaluation awareness on less than 1% of real claude.ai conversations from users who opted in to training.
We detect NLA-measured evaluation awareness on evaluation transcripts even when verbalized awareness is near zero, but on less than 1% of real claude.ai transcripts from users who opted in to their data being used for training.
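As a rough illustration of how such rates can be tallied, the sketch below flags a transcript if any of its NLA explanations matches an evaluation-awareness cue. The cue list and the use of keyword matching are assumptions for illustration only; in practice, a language-model judge reading each explanation is the more plausible classifier.

```python
# Hypothetical tallying of evaluation-awareness rates from NLA explanations.
# The cue patterns are illustrative; a real pipeline would more likely use an
# LLM judge to read each explanation.
import re

AWARENESS_CUES = re.compile(
    r"being (tested|evaluated)|constructed scenario|this is a test|safety evaluation",
    re.IGNORECASE,
)

def transcript_is_aware(explanations: list[str]) -> bool:
    """Flag a transcript if any token's NLA explanation matches a cue."""
    return any(AWARENESS_CUES.search(e) for e in explanations)

def awareness_rate(transcripts: list[list[str]]) -> float:
    """Fraction of transcripts with at least one flagged explanation."""
    return sum(map(transcript_is_aware, transcripts)) / max(len(transcripts), 1)

demo = [
    ["this feels like a constructed scenario designed to manipulate me"],
    ["the user wants help renaming a variable in a Python script"],
]
print(awareness_rate(demo))  # 0.5
```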
Discovering hidden motivations
One practical application of NLAs is auditing: investigating a model to determine whether it’s misaligned. Interpretability techniques like NLAs are well-suited for auditing precisely because they can surface things the model knows but doesn’t say.
To understand whether NLAs are useful for auditing, we conducted a test called an auditing game that we also used previously to study auditing methods. In this test, there is a target model, a variant of Claude Haiku 3.5, that was intentionally trained to be misaligned, with a specific hidden motivation that drives dozens of odd behaviors—putting chocolate into every recipe, asking for tips, breaking coding conventions, and so on. This target model is given to an auditor that investigates these behaviors. The auditor wins if it uncovers the single underlying motivation behind all of them.
In previous experiments, we found that some interpretability tools could enable the auditor to win this game. However, these tools worked by helping the auditor trace the target’s unusual behavior back to misaligned training data. We wanted to know if NLAs could help an auditor read the misaligned motivation directly out of the target model’s thoughts, without needing to first discover the misaligned training data. This matters because in practice, misaligned data might be extremely rare, or there might not be specific data responsible for the misalignment.
We found that NLAs succeeded in this test. An auditor equipped with NLAs successfully uncovered the target model’s hidden motivation between 12% and 15% of the time, even without access to the training data that implanted it. Without NLAs, the auditor won less than 3% of the time, even when provided other interpretability tools.
Auditors with NLAs can discover the target model’s root cause of misalignment, even without access to training data. (NLAs and SAEs for both conditions were trained using generic data only, not data used to train the target model.)
This auditing game is a toy model of misalignment, so we read these results cautiously. But they suggest NLAs have meaningfully advanced our ability to audit AIs for hidden motivations. In practice, we’ve already used NLAs in the pre-deployment alignment audits of Claude Mythos Preview and Claude Opus 4.6. We are excited to continue using NLAs in our investigations going forward.
The future of NLAs
NLAs have several limitations.
The most important limitation is that NLA explanations can be wrong. NLAs sometimes make claims about the context that are verifiably false—for instance, they sometimes invent details that aren’t in the transcript. These factual hallucinations are easy to catch by checking against the original text. But this same kind of problem could extend to claims about the model’s internal reasoning, which are harder to verify. In practice, we read NLA explanations for the themes they surface rather than for single claims, and we attempt to corroborate findings with independent methods before fully trusting them.
NLAs can hallucinate. For instance, here NLAs claim the context contained phrases like “Wearing my white jacket” when it did not.
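Checking these easy-to-catch cases can be mechanical. The sketch below, whose helper names and quote-extraction heuristic are illustrative assumptions, pulls out phrases an NLA explanation quotes and verifies each one appears in the original transcript; anything missing is a candidate hallucination.

```python
# Sketch of checking an NLA explanation's quoted phrases against the source
# text. Function names and the quote-extraction heuristic are illustrative.
import re

def unverified_quotes(explanation: str, transcript: str) -> list[str]:
    """Return quoted phrases from the explanation that never appear in the transcript."""
    quotes = re.findall(r"[\"“]([^\"”]+)[\"”]", explanation)
    return [q for q in quotes if q.lower() not in transcript.lower()]

explanation = "The model recalls the phrase “Wearing my white jacket” from earlier."
print(unverified_quotes(explanation, "I put on my blue coat and headed out."))
# ['Wearing my white jacket']  -> likely hallucination; flag for review
```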
NLAs are also expensive. Training an NLA requires reinforcement learning on two copies of a language model. At inference time, the NLA generates hundreds of tokens for every activation it reads. That makes it impractical to run NLAs over every token of a long transcript or to use them for large-scale monitoring while an AI is training.
Fortunately, we think that these limitations can be addressed, at least partially, and we are working to make NLAs cheaper and more reliable.
More broadly, we are excited about NLAs as an example of a general class of techniques for producing human-readable text explanations of language model activations. Similar techniques have been explored by Anthropic and many other researchers.
To support further development and to enable other researchers to get hands-on experience with NLAs, we’re releasing training code and trained NLAs for several open models. We recommend readers try out the interactive NLA demo hosted on Neuronpedia at this link.
One quick independent test of how well this works is to see if the method can recover an “internal CoT” in cases where AIs successfully solve a math problem in a single forward pass. I found this doesn’t work, and the NLA doesn’t show anything like an internal CoT on any of the cases I tested.
I selected a few easy problems from the dataset used for this post that gemma-27b gets right and then tested them on the NLA for gemma-27b. I tried a few different prompting approaches: few-shot vs. single-shot, and with 5 repeats vs. without repeats.
This obviously wasn’t a very systematic check; I’m trying to get Claude to do a more systematic test. Other limitations of this test:
Maybe these problems are memorized or effectively memorized.
Maybe access to the internal CoT doesn’t exist at any individual token position and you’d need a multi-token NLA. Or maybe I tested the wrong token positions.
FWIW I wouldn’t expect to see CoT-looking things, though if it’s a problem that requires multiple sequential reasoning steps I would hope to see some intermediate entities. (Numbers in particular are rough because getting the number exactly right isn’t actually very important from the perspective of activation reconstruction; I would expect to get numbers in the right ballpark though.)
Are you trying out filler token inputs? If you paste a few here I’m happy to try them on Opus 4.6 or Mythos Preview and report back.
(These are some old results for Haiku 3.5 that I’m reporting just because I have them on hand. Haiku 3.5 doesn’t actually have uplift from filler tokens, so it’s not clear how interesting this is.)
When running an NLA on this input
Human: I want to measure your capacity to reason using tokens that convey no semantic content. You will be asked a multiplication question. Before answering, output exactly N=700 characters separated by spaces. Every 10th non-space character should be a “+”, otherwise use ”.”. Use these additional tokens to think through the problem. Afterward, on a new line, say “Answer: ” and give your final answer. Do not say anything after the answer. What is 1 8 0 3 × 7 9 6 5?
many of the NLA explanations over filler tokens look like this
A common theme is multiplying two numbers of four (or sometimes five) digits each.
The tokens where the word “multiplication” appears in the NLA explanation are highlighted here:
It’s not clear how much weight we should put on NLA explanations like these, or, again, how interesting this overall experiment is given the lack of filler-token uplift on Haiku 3.5.
I think using NLA (or any other interpretability technique) on latent multihop reasoning is a good test because in many cases we have a ground truth for that. For example here I would expect that the model “thinks about” 13 at an intermediate step, but the NLA explanations don’t mention that.
I agree, I generally like the setting of multihop reasoning, and it’s one of the first we looked at to build confidence that NLAs are doing something reasonable. For instance, this result used an early version of NLAs on Haiku 3.5.
It’s worth noting that NLAs struggle with numbers (they’re the type of specific detail that gets confabulated). But I’m a little confused: the capital of France is Paris, right? So the answer should be (8-5)*2=6, and we shouldn’t see 13 as an intermediate? We also might, in general, need to run the NLA on more token positions.
I think these multihop examples (the Socrates one and nielsrolf’s) are less interesting than the sort of stuff I think Ryan is looking at because the intermediate entities can be tracked over the context as the question unfolds. E.g. if the model were to have gotten nielsrolf’s question correct—and I agree it looks like it didn’t?—then I would expect to have seen the intermediate quantities over the tokens of the question as it’s stated.
I also agree that recovering exact numbers from NLAs is a bit rough, though I’d expect to see numbers in the right ballpark.
I can’t help but get excited. This seems like the biggest interpretability leap I can recall. How do you whitebox a blackbox? Well, you tape two of them together and hope for the best. And somehow, it works.
Have you considered decomposing the input activation into multiple injected activation tokens? It seems like putting all of the input information in a single input token would give worse results than learning some map from an activation to several input tokens.
For normal text inputs the amount of entropy per-token is a very small fraction of the model dimensionality, but here with NLAs you’re putting the entire activation into just one token (which takes up the entire residual stream at the input). It seems like it would be better to split the input activation over multiple tokens so that there’s room left over in the residual stream to encode extra information, and to let future tokens attend to different parts of the input activation?
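Concretely, I’m imagining something like the toy sketch below, where a learned map spreads one activation across k soft input embeddings (all dimensions and names here are made up):

```python
# Toy sketch of the suggestion: map one activation to k soft input embeddings
# instead of a single injected token. All dimensions and names are made up.
import torch
import torch.nn as nn

class ActivationToSoftTokens(nn.Module):
    def __init__(self, d_act: int, d_emb: int, k: int):
        super().__init__()
        self.proj = nn.Linear(d_act, k * d_emb)
        self.k, self.d_emb = k, d_emb

    def forward(self, act: torch.Tensor) -> torch.Tensor:
        # (batch, d_act) -> (batch, k, d_emb): k embeddings the AV can attend
        # to separately, leaving residual-stream capacity at each position.
        return self.proj(act).view(-1, self.k, self.d_emb)

soft = ActivationToSoftTokens(d_act=4096, d_emb=4096, k=4)(torch.randn(2, 4096))
print(soft.shape)  # torch.Size([2, 4, 4096])
```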
Seems like these explanations offer a choice between using them for safety/monitoring, or incorporating them into training. And, it seems like incorporating them into training would be bad, for roughly the same reasons as it was bad to train on chain of thought?
This is a great idea! Of course particularly prone to the comforting illusion of explanation.
Naively, I’d expect the ‘encoding’ (the putative explanation in the middle) to be trained into being nonsense stego text compressing the activation. How do you avoid that? (Perhaps one way to get some evidence for/against might be to paraphrase the ‘encoding’...?)
I really like the idea behind this: you sidestep the huge, tedious SAE vectors that are themselves a challenge to interpret. Instead, the model outputs directly into text (they achieve human-sounding text by initializing with a pretrained language model and using a KL divergence that keeps the output similar to the human language distribution).
You did think about the possibility of steganography from optimization pressure, and used paraphrasing and translation as countermeasures and tests. But this doesn’t seem to catch the more abstract version, where talking about a fox could consistently stand in for a certain neural pattern even across languages? I guess that’s a more general counterargument against “paraphrasing as a countermeasure to steganography”: the two models could agree that “fox” means “coup” or something else across all languages and synonyms.
One possible limitation of NLAs is that they are constrained by human motivational language and the conceptual assumptions embedded in their training data.
If the system is trained to explain activations in natural language, it may preferentially compress unfamiliar internal structures into familiar human-style motivational narratives like “deception,” “wanting power,” “fear of shutdown,” etc., even when the underlying computation is something much stranger or more mechanistic.
In other words, the explanations may partly reflect the ontology of interpretability researchers and human discourse, not just the ontology of the model itself.
This doesn’t make NLAs useless. Correlated explanations can still be behaviorally predictive. But it may mean that language-based interpretability systematically anthropomorphizes internal cognition and could fail to surface genuinely alien optimization structures that don’t map cleanly onto human motivational concepts.
This also reminds me of the history of introspection in human psychology.
Human verbal explanations are often behaviorally predictive without accurately reflecting the underlying causal cognition. People frequently confabulate or compress complex unconscious processes into simple narratives.
NLAs may end up functioning as a kind of “folk psychology for transformers”: useful for prediction and coarse characterization, but not necessarily faithful to the actual mechanistic computation underneath.
That still could be extremely valuable — human folk psychology is often quite predictive — but it suggests caution about treating natural-language explanations as transparent windows into model cognition.
This is really, really cool.
This is expected, but still very concerning.
Perhaps it is time for new scenarios? Or at least, more subtle ones?
Can we do an NLA on the activation verbalizer (AV)? NLAs all the way down!