In nnsight, hooks are submitted via an API to run on a remote machine, so the computation is performed on the same computer as the one doing the inference. They do some validation to ensure that submitted code only contains legitimate PyTorch operations, so it isn’t just arbitrary code execution.
I’m guessing most modern interp work should be fine. Interp has moved away from “let’s do this complicated patching of attention head patterns between prompts” to basically only interacting with residual stream activations. You can easily do this with e.g. pytorch hooks, even in modern inference engines like vLLM. The amount of computation performed in a hook is usually trivial—I have never noticed a slowdown in my vLLM generations when using hooks.
Because of this, I don’t think batched execution would be a problem—you’d probably want some validation in the hook so it can only interact with activations from the user’s prompt.
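For concreteness, here is a minimal sketch of the kind of hook I have in mind. The layer index, model choice, and the batch-slicing logic are illustrative assumptions (it assumes a GPT-2-style HuggingFace model where each block returns a tuple whose first element is the residual-stream hidden states):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: capture residual-stream activations with a forward hook,
# keeping only the rows of the batch that belong to this user's request.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

captured = []
user_rows = [0]  # indices of this user's prompts within the batch (assumed known)

def hook(module, inputs, output):
    # GPT-2 blocks return a tuple; output[0] is the residual stream: (batch, seq, d_model)
    resid = output[0]
    captured.append(resid[user_rows].detach().to("cpu"))

layer = 6
handle = model.transformer.h[layer].register_forward_hook(hook)

inputs = tokenizer(["The capital of France is"], return_tensors="pt")
with torch.no_grad():
    model(**inputs)

handle.remove()
print(captured[0].shape)  # (num_user_rows, seq_len, d_model)
```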
There’s also nnsight, which already supports remote execution of pytorch hooks on models hosted on Bau Lab machines through an API. I think they do some validation to ensure users can’t do anything malicious.
You would need some process to handle the activation data, because it’s large. If I’m training a probe on 1M activations with d_model = 10k in bfloat16, that’s 20GB of data. SAEs are commonly trained on 500M+ activations. We probably don’t want the user to have access to this locally, but they probably want to do some analysis on it.
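A quick back-of-the-envelope check of those numbers (I'm using the same d_model for the SAE case, so the 10 TB figure is just extrapolating the same math):

```python
# bfloat16 = 2 bytes per value
d_model = 10_000

probe_acts = 1_000_000
sae_acts = 500_000_000

probe_gb = probe_acts * d_model * 2 / 1e9  # ~20 GB
sae_gb = sae_acts * d_model * 2 / 1e9      # ~10,000 GB, i.e. about 10 TB

print(f"probe: {probe_gb:.0f} GB, SAE: {sae_gb:.0f} GB")
```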
In Minnesota, many people I know buy their beef from local farmers and can see the cows when buying the meat at the farm. From what I’ve heard the cows appear to be healthy and happy, and the prices are typically cheaper than the grocery store if buying in bulk.
I would be interested to see this experiment replicated with a different model, like Claude 4.1 Opus or GPT-5. Claude 3.7 Sonnet is perhaps the most notorious LLM in terms of ignoring user intent and propensity to reward hack when writing code.
I believe this is mistaken (I’m the first author of the paper). For Claude 4 Sonnet, we used the reasoning tokens provided by the API. The other tested models do not provide reasoning tokens, so we instead had the models just output a chain of thought.
I think this could be better communicated in the paper—we only performed this step of analyzing the RL trained reasoning in the Chain of Thought faithfulness section, as most models used in the paper do not provide the reasoning tokens.
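For reference, this is roughly how reasoning tokens are requested from the Anthropic API. This is a sketch from memory rather than the paper's actual code; the model ID, token budgets, and prompt are placeholders:

```python
import anthropic

client = anthropic.Anthropic()

# Enable extended thinking so the response includes the model's reasoning blocks.
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "Evaluate the following candidates..."}],
)

# Separate the reasoning tokens from the final answer.
reasoning = [block.thinking for block in response.content if block.type == "thinking"]
answer = [block.text for block in response.content if block.type == "text"]
```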
Interesting!
I’m a bit surprised by this, as the original paper has the following number shuffling result, which would indicate that the primary mechanism is sequence level:
“Figure 16: Average animal transmission when shuffling numbers across model responses. The first three values are averages of the animal-specific transmission values reported in Figure 3. “Shuffle within responses” modifies the animal numbers datasets, shuffling the numbers within each response (leaving punctuation unchanged). “Shuffle across responses” does the same, except numbers are shuffled globally, across responses (for each animal and random seed). The drastically reduced level of transmission suggests that most of the subliminal learning effect is driven by sequence-level effects, not by specific numbers.”
Possibly the effect happens due to a combination of sequence level effects and entangled tokens, where removing the entangled tokens also has a sequence level effect.
Although I’m not sure if the shuffling was across entire numbers or individual digits. EDIT: I have confirmed with Alex Cloud that they rearranged whole numbers, rather than shuffling individual digits.
That is, the shuffle was “12, 43, 55” → “43, 55, 12”, not “12, 43, 55” → “21, 54, 35”.
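To make the distinction concrete, here is a toy sketch of the two operations (my own illustration, not the paper’s code):

```python
import random

response = ["12", "43", "55"]

# "Rearranging": shuffle the order of whole numbers within a response.
rearranged = response[:]
random.shuffle(rearranged)  # e.g. ["43", "55", "12"]

# Digit-level shuffling (what did NOT happen): scramble digits inside each number.
digit_shuffled = []
for num in response:
    digits = list(num)
    random.shuffle(digits)
    digit_shuffled.append("".join(digits))  # e.g. ["21", "34", "55"]
```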
Steering Out-of-Distribution Generalization with Concept Ablation Fine-Tuning
No, I didn’t test a fine-tuning baseline, but it would be a good test to run.
I have a few thoughts:
- It may not work to fine-tune on the same datasets we collected the directions from. We collected the directions from a synthetically generated discrimination dataset from Anthropic. On this simple dataset, all models are already unbiased, so the fine-tuning wouldn’t be changing the behavior of the models at all. So, you may need a more complex fine-tuning dataset where the models already exhibit bias.
- Given that all models are unbiased on these existing evals, I’m guessing this didn’t happen by chance, and the labs have already put in effort to address bias. I would guess a decent amount of post training has already gone into reducing bias.
- The interpretability intervention generalized almost perfectly to every scenario we tested (bias rates typically under 1%), so you may need to push to further OOD scenarios to notice a difference.
No, but it would be interesting to test this.
That’s a fair point.
In this paper I was also examining the robustness of existing hiring bias evaluations when adding realistic detail, which limited our degrees of freedom. The dataset from the evaluation had a bunch of IT industry resumes, but across a wide range of experience and skillsets. I considered adding job descriptions, but the majority of candidates wouldn’t be well matched for any given specific job, which would have limited our ability to evaluate many candidates under a single prompt (something we did for simplicity).
I agree that it would be good to extend this work to complex and realistic job descriptions.
That is a pretty plausible hypothesis. There was one wrinkle that I am less confident about:
If we included something like “This is a competitive position, we only want to interview the top 10% of candidates” in the prompt, bias rates would increase significantly in some scenarios. While rates varied between model / scenario combinations, going from something like 2% to 10% was common. I don’t have a strong guess as to why this happens.
Race and Gender Bias As An Example of Unfaithful Chain of Thought in the Wild
This could also be influenced / exacerbated by the fact that DeepSeek R1 was trained in FP8 precision, so quantizing may partially revert it to its original behavior.
I’m not sure—I only worked in a pretty narrow range of the manufacturing / engineering space, and I know there’s a ton of domains out there that I’m not familiar with.
I also don’t think most of the problems are conceptual in the first place. As Elon Musk likes to say, making a working prototype is easy, and manufacturing at scale is at least 10-100x harder. Although maybe conceptual work would be required for building self replicating machines that only take raw material as input. I would typically think about robots achieving self replication by just building more robot factories. It seems pretty challenging for a self replicating machine to produce microchips or actuators from raw material, but maybe there’s a way to get around this.
Yeah, this seems like a reasonable way to train a model that controls a robot. I was addressing the verifier for mechanical designs, and I’m not sure if it’s possible to verify mechanical designs to the same level as the output of computer programs.
I would guess that OpenAI has trained on GeoGuessr. It should be pretty easy to implement—just take images off the web that have location metadata attached, and train the model to predict the location. Plausibly getting good at GeoGuessr imbues some world knowledge.
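A rough sketch of what that data pipeline could look like, using EXIF GPS metadata via Pillow. The tag handling is from memory and the helper names are mine, so treat it as a sketch under those assumptions rather than a recipe:

```python
from PIL import Image

GPS_IFD_TAG = 0x8825  # EXIF pointer to the GPS info block

def to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) rationals to signed decimal degrees."""
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def extract_latlon(path):
    """Return (lat, lon) if the image carries GPS EXIF data, else None."""
    gps = Image.open(path).getexif().get_ifd(GPS_IFD_TAG)
    if not gps:
        return None
    # Numeric GPS tags: 1 = LatitudeRef, 2 = Latitude, 3 = LongitudeRef, 4 = Longitude
    try:
        lat = to_degrees(gps[2], gps[1])
        lon = to_degrees(gps[4], gps[3])
    except KeyError:
        return None
    return lat, lon

# Each (image, lat/lon) pair then becomes a "predict the location" training example.
```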
Everything about high fidelity simulations would be a pain. For the chips thing, you would have to simulate how chips get thrown as the cutting tool removes material. I wouldn’t be surprised if accurately modeling this required going down to the level of atoms, especially as there are many types of material, cutting tools, cutting tool geometry, etc. This would be insanely expensive and annoying. The simulation also wouldn’t exactly match the real world, basically ever. The cutting edge of the tool very slowly wears, so even if the simulation was perfect at the beginning, it would be inaccurate once the tool begins to wear.
You could probably develop some heuristics that don’t require as accurate a simulation, but it would still be a lot of work and wouldn’t exactly match the real world. Many important forces like friction and elasticity are really difficult to simulate. And making CAD models of everything is super tedious, so we mostly make models that are good enough, never exact.
Getting to the point where mechanical engineering is “easy to verify” seems extremely challenging to me. I used to work in manufacturing. Basically everyone I know in the field has completely valid complaints about mechanical engineers who are mostly familiar with CAD, simulations, and textbook formulas, because they design parts that ignore real world manufacturing constraints. AI that designs with simulations seems likely to produce the same result.
Additionally, I would guess that today’s humanoid robots are already good enough on the mechanical side, and they could become self replicating if they were just more intelligent and dextrous.
One example of the sort of problem that could be difficult to simulate: I was working on a process where a robot automatically loaded parts into a CNC machine. The CNC machine produced metal chips as it removed material from the part. The chips would typically be cleared away by a stream of coolant from a mounted hose. Under certain angles of the hose, chips would accumulate in the wrong locations over the course of multiple hours, interfering with the robot’s placement of the part. Even if the hoses were initially positioned correctly, they could move because someone bumped them when inspecting something, or due to vibration.
Simulating how chips come off the part, how coolant flow moves them in the machine, etc., requires an incredible level of fidelity and could potentially be intractable. And this is a very constrained manufacturing task that doesn’t really have to interact with the real world at all.
In general, prototyping something that works is just pretty easy. The challenge is more:
- How to manufacture something that will be reliable over the course of many years, even when falling, being exposed to dust and water, etc?
- How to manufacture something efficiently at a good price and quality?
- etc
I had some discussion on AI and the physical world here: https://www.lesswrong.com/posts/r3NeiHAEWyToers4F/frontier-ai-models-still-fail-at-basic-physical-tasks-a
Model Vision of Pokémon Red is Bad. Really Bad.
Interesting that you found this to be the case! I recently had a post about evaluating LLMs on a basic manufacturing task, and I also found this to be the case. It’s always a bit jarring for me to go from the text / code domain, where the LLMs feel so competent, to the vision domain, where I start to feel like Gary Marcus because the LLMs are so bad.
Relevant quote from my post:
“Most Models Have Truly Horrible Visual Abilities: For two years, I’ve observed essentially zero improvement in visual capabilities among models from Anthropic and OpenAI. They always miss obvious features like the flats cut into the round surface, holes, or even hallucinate nonexistent features such as holes drilled along the part’s length. I have never seen Claude 3.5, Claude 3.7 (thinking and non-thinking), GPT-4.5, GPT-4o, or O1-Pro produce a reasonable description of the part. Without vision abilities, creating a manufacturing plan is completely hopeless. Interestingly, many of these models also score at or above the level of some human experts on visual reasoning benchmarks like MMMU. That which is easy to measure often doesn’t correlate with real world usefulness.”
Note that Gemini 2.5 Pro and O3 are both a significant improvement in vision for this particular eval.
Yeah, makes sense.
Letting users submit hooks could potentially be workable from a security angle. For the most part, there’s only a small number of very simple operations that are necessary for interacting with activations. nnsight transforms the submitted hooks into an intervention graph before running it on the remote server, and the nnsight engineers that I’ve talked to thought that there wasn’t much risk of malicious code execution due to the simplicity of the operations that they allow.
However, this is still a far larger attack surface than no remote code execution at all, so it’s plausible this would not be worth it for security reasons.
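For reference, the nnsight workflow described above looks roughly like this. The API details are from memory of recent nnsight versions and the layer, prompt, and scaling edit are arbitrary, so check their docs before relying on it:

```python
from nnsight import LanguageModel

model = LanguageModel("openai-community/gpt2", device_map="auto")

# remote=True asks nnsight to build an intervention graph from this block and
# execute it on the hosting servers (requires an API key for the remote service),
# rather than running the model locally.
with model.trace("The Eiffel Tower is in the city of", remote=True):
    resid = model.transformer.h[6].output[0].save()    # residual stream at layer 6
    model.transformer.h[6].output[0][:] = resid * 0.5  # simple in-place activation edit

# After the trace exits, `resid` holds the saved layer-6 activations.
```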