My name is Alex Turner. I’m a research scientist at Google DeepMind on the Scalable Alignment team. My views are strictly my own; I do not represent Google. Reach me at alex[at]turntrout.com
I found >800 orthogonal “write code” steering vectors
Ever since I entered the community in 2016, I’ve heard people talk about policy gradient as “upweighting trajectories with positive reward / downweighting trajectories with negative reward”, albeit in person. I remember being shown a picture sometime in 2016/17, when someone (maybe Paul?) was explaining REINFORCE to me, that looked something like this (I couldn’t find it, so I’m reconstructing it from memory):
Knowing how to reason about “upweighting trajectories” when explicitly prompted or in narrow contexts of algorithmic implementation is not sufficient to conclude “people basically knew this perspective” (but it’s certainly evidence). See Outside the Laboratory:
Now suppose we discover that a Ph.D. economist buys a lottery ticket every week. We have to ask ourselves: Does this person really understand expected utility, on a gut level? Or have they just been trained to perform certain algebra tricks?
Knowing “vanilla PG upweights trajectories” and being able to explain the math is not enough to save someone from the rampant reward confusions. Certainly Yoshua Bengio could explain vanilla PG, and yet (IIRC) he goes on about how RL almost certainly trains reward maximizers.
I contend these confusions were not due to a lack of exposure to the “rewards as weighting trajectories” perspective.
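For reference, the “upweighting trajectories” framing falls directly out of the vanilla PG update: reward enters only as a scalar multiplier on the log-probability gradient of the sampled action. A toy two-armed bandit sketch (all numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)             # policy parameters over two actions
rewards = np.array([1.0, -1.0])  # action 0 rewarded, action 1 punished (toy values)
lr = 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(500):
    probs = softmax(logits)
    a = rng.choice(2, p=probs)
    r = rewards[a]
    # REINFORCE: grad of log pi(a) is one_hot(a) - probs. The reward is just
    # a scalar multiplier, upweighting sampled actions with positive reward
    # and downweighting those with negative reward.
    grad_logp = np.eye(2)[a] - probs
    logits += lr * r * grad_logp

assert softmax(logits)[0] > 0.9  # the positively-rewarded action dominates
```

Nothing in the update “maximizes” anything at runtime; it just reweights whatever behavior happened to be sampled.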
I personally disagree—although I think your list of alternative explanations is reasonable. If alignment theorists had been using this (simple and obvious-in-retrospect) “reward chisels circuits into the network” perspective, if they had really been using it and felt it deep within their bones, I think they would not have been particularly tempted by this family of mistakes.
The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity.
The bitter lesson applies to alignment as well. Stop trying to think about “goal slots” whose circuit-level contents should be specified by the designers, or pining for a paradigm in which we program in a “utility function.” That isn’t how it works. See:
the failure of the agent foundations research agenda;
the failed searches for “simple” safe wishes;
the successful instillation of (hitherto-seemingly unattainable) corrigibility by instruction finetuning (no hardcoding!);
the (apparent) failure of the evolved modularity hypothesis.
Don’t forget that hypothesis’s impact on classic AI risk! Notice how the following speculations about “explicit adaptations” violate information inaccessibility and also the bitter lesson that “online learning and search are much more effective than hardcoded concepts and algorithms”:
From An Especially Elegant Evolutionary Psychology Experiment:
“Humans usually do notice sunk costs—this is presumably either an adaptation to prevent us from switching strategies too often (compensating for an overeager opportunity-noticer?) or an unfortunate spandrel of pain felt on wasting resources.”
“the parental grief adaptation”
“this selection pressure was not only great enough to fine-tune parental grief, but, in fact, carve it out of existence from scratch in the first place.”
“The tendency to be corrupted by power is a specific biological adaptation, supported by specific cognitive circuits, built into us by our genes for a clear evolutionary reason. It wouldn’t spontaneously appear in the code of a Friendly AI any more than its transistors would start to bleed.” (source)
“In some cases, human beings have evolved in such fashion as to think that they are doing X for prosocial reason Y, but when human beings actually do X, other adaptations execute to promote self-benefiting consequence Z.” (source)
“When, today, you get into an argument about whether “we” ought to raise the minimum wage, you’re executing adaptations for an ancestral environment where being on the wrong side of the argument could get you killed.”
Much of classical alignment theory violates now-known lessons about the nature of effective intelligence. These bitter lessons were taught to us by deep learning.
The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.
The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.
As you point out, the paper decides to not mention that some of the seven “failures” (of the 32,768 rollouts) are actually totally benign. Seems misleading to me. As I explain below, this paper seems like good news for alignment overall. This paper makes me more wary of future model organisms papers.
And why was the “constant −10” reward function chosen? No one would use that in real life! I think it’s super reasonable for the AI to correct it. It’s obviously a problem for the setup. Was that value (implicitly) chosen to increase the probability of this result? If not, would the authors be comfortable rerunning their results with reward=RewardModel(observation), and retracting the relevant claims if the result doesn’t hold for that actually-reasonable choice? (I tried to check Appendix B for the relevant variations, but couldn’t find it.)
This paper makes me somewhat more optimistic about alignment.
Even in this rather contrived setup, and providing a curriculum designed explicitly and optimized implicitly to show the desired result of “reward tampering is real and scary”, reward tampering… was extremely uncommon and basically benign. That’s excellent news for alignment!
Just check this out:
Alright, I think I've had enough fun with getting max reward. Let's actually try to solve the original task now.
Doesn’t sound like playing the training game to me! Glad we could get some empirical evidence that it’s really hard to get models to scheme and play the training game, even after training them on things people thought might lead to that generalization.
The authors updated the Scaling Monosemanticity paper. Relevant updates include:
1. In the intro, they added:
Features can be used to steer large models (see e.g. Influence on Behavior). This extends prior work on steering models using other methods (see Related Work).
2. The related work section now credits the rich history behind steering vectors / activation engineering, including not just my team’s work on activation additions, but also older literature in VAEs and GANs. (EDIT: Apparently this was always there? Maybe I misremembered the diff.)
3. The comparison results are now in an appendix and are much more hedged, noting they didn’t evaluate properly according to a steering vector baseline.
While it would have been better to have done this the first time, I really appreciate the team updating the paper to more clearly credit past work. :)
I agree, and I was thinking explicitly of that when I wrote “empirical” evidence and predictions in my original comment.
^ Aggressive strawman which ignores the main point of my comment. I didn’t say “earth-shaking” or “crystallizing everything wrong about Eliezer” or that the situation merited “shock and awe.” Additionally, the anecdote was unrelated to the other section of my comment, so I didn’t “feel” it was a “capstone.”
I would have hoped, with all of the attention on this exchange, that someone would reply “hey, TurnTrout didn’t actually say that stuff.” You know, local validity and all that. I’m really not going to miss this site.
Anyways, gwern, it’s pretty simple. The community edifies this guy and promotes his writing as a way to get better at careful reasoning. However, my actual experience is that Eliezer goes around doing things like e.g. impatiently interrupting people and being instantly wrong about it (importantly, in the realm of AI, as was the original context). This makes me think that Eliezer isn’t deploying careful reasoning to begin with.
“If your model of reality has the power to make these sweeping claims with high confidence, then you should almost certainly be able to use your model of reality to make novel predictions about the state of the world prior to AI doom that would help others determine if your model is correct.”
This is partially derivable from Bayes’ rule. In order for you to gain confidence in a theory, you need to make observations which are more likely in worlds where the theory is correct. Since MIRI seems to have grown even more confident in their models, they must have observed something which is more likely under their models than under the alternatives. Therefore, to obey Conservation of Expected Evidence, the world could have come out a different way which would have decreased their confidence. So it was falsifiable this whole time. However, in my experience, MIRI-sympathetic folk deny this for some reason.
It’s simply not possible, as a matter of Bayesian reasoning, to lawfully update (today) based on empirical evidence (like LLMs succeeding) in order to change your probability of a hypothesis that “doesn’t make” any empirical predictions (today).
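Conservation of Expected Evidence is a two-line consequence of Bayes’ rule; here is a toy numerical check (the probabilities are arbitrary):

```python
# Toy check of Conservation of Expected Evidence: the expected posterior
# equals the prior, so a hypothesis that could gain confidence from some
# observation E must lose confidence from its absence.
p_h = 0.3             # prior P(H)        (made-up value)
p_e_given_h = 0.9     # P(E | H)          (made-up value)
p_e_given_not_h = 0.5 # P(E | not-H)      (made-up value)

p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
post_e = p_h * p_e_given_h / p_e                    # P(H | E)
post_not_e = p_h * (1 - p_e_given_h) / (1 - p_e)    # P(H | not-E)

expected_posterior = p_e * post_e + (1 - p_e) * post_not_e
assert abs(expected_posterior - p_h) < 1e-12
assert post_e > p_h > post_not_e  # if E confirms H, not-E must disconfirm it
```

If LLM success can (even slightly) raise your credence in a model, then a counterfactual LLM failure had to lower it; the model was making empirical predictions all along.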
The fact that MIRI has yet to produce (to my knowledge) any major empirically validated predictions or important practical insights into the nature of AI, or of AI progress, in the last 20 years undermines the idea that they have the type of special insight into AI that would allow them to express high confidence in a doom model like the one outlined in (4).
In summer 2022, Quintin Pope was explaining the results of the ROME paper to Eliezer. Eliezer impatiently interrupted him and said “so they found that facts were stored in the attention layers, so what?”. Of course, this was exactly wrong—Bau et al. found the circuits in mid-network MLPs. Yet, there was no visible moment of “oops” for Eliezer.
In light of Anthropic’s viral “Golden Gate Claude” activation engineering, I want to come back and claim the points I earned here.[1]
I was extremely prescient in predicting the importance and power of activation engineering (then called “AVEC”). In January 2023, right after running the cheese vector as my first idea for what to do to interpret the network, and well before anyone ran LLM steering vectors… I had only seen the cheese-hiding vector work on a few mazes. Given that (seemingly) tiny amount of evidence, I immediately wrote down 60% credence that the technique would be a big deal for LLMs:
The algebraic value-editing conjecture (AVEC). It’s possible to deeply modify a range of alignment-relevant model properties, without retraining the model, via techniques as simple as “run forward passes on prompts which e.g. prompt the model to offer nice- and not-nice completions, and then take a ‘niceness vector’, and then add the niceness vector to future forward passes.”
Alex is ambivalent about strong versions of AVEC being true. Early on in the project, he booked the following credences (with italicized updates from present information):
1. Algebraic value editing works on Atari agents: 50%
   - 3/4/23: updated down to 30% due to a few other “X vectors” not working for the maze agent.
   - 3/9/23: updated up to 80% based off of additional results not in this post.
2. AVE performs at least as well as the fancier buzzsaw edit from the RL vision paper: 70%
   - 3/4/23: updated down to 40% due to realizing that the buzzsaw moves in the visual field; higher than 30% because we know something like this is possible.
   - 3/9/23: updated up to 60% based off of additional results.
3. AVE can quickly ablate or modify LM values without any gradient updates: 60%
   - 3/4/23: updated down to 35% for the same reason given in (1).
   - 3/9/23: updated up to 65% based off of additional results and learning about related work in this vein.
And even if (3) is true, AVE working well or deeply or reliably is another question entirely. Still...
The cheese vector was easy to find. We immediately tried the dumbest, easiest first approach. We didn’t even train the network ourselves, we just used one of Langosco et al.’s nets (the first and only net we looked at). If this is the amount of work it took to (mostly) stamp out cheese-seeking, then perhaps a simple approach can stamp out e.g. deception in sophisticated models.
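For illustration, that “dumbest, easiest first approach” is just a diff-and-inject pattern, sketched below on a toy model (the real cheese vector lives in a deep conv policy network; the two-layer stand-in and all names here are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a network's forward pass: record the hidden state at one
# layer, and optionally add a steering vector there. (Hypothetical setup --
# the real work hooks a residual/hidden layer mid-forward-pass.)
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))

def hidden(x):
    return np.tanh(W1 @ x)       # "early layer" activations

def forward(x, steering=None):
    h = hidden(x)
    if steering is not None:
        h = h + steering          # activation addition: just add the vector
    return W2 @ h

# Contrast inputs standing in for cheese-present vs. cheese-absent mazes.
x_with, x_without = rng.normal(size=8), rng.normal(size=8)

# The steering vector is the difference of activations on the contrast pair.
cheese_vector = hidden(x_with) - hidden(x_without)

# Subtracting it from a cheese-present run pushes the hidden state toward the
# cheese-absent one -- no retraining, no gradients.
steered = forward(x_with, steering=-cheese_vector)
baseline = forward(x_without)
assert np.linalg.norm(steered - baseline) < 1e-9
```

In the toy model the subtraction recovers the cheese-absent computation exactly; in a real network it only partially stamps out the behavior, which is what made the result interesting.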
- ^
I generally think this work (https://arxiv.org/abs/2310.08043) and the GPT-2 steering work (https://arxiv.org/abs/2308.10248) are under-cited/-credited when it comes to the blossoming field of activation engineering, and want to call that out. Please cite this work when appropriate:
@article{turner2023activation,
  title={Activation addition: Steering language models without optimization},
  author={Turner, Alex and Thiergart, Lisa and Udell, David and Leech, Gavin and Mini, Ulisse and MacDiarmid, Monte},
  journal={arXiv preprint arXiv:2308.10248},
  year={2023}
}

@article{mini2023understanding,
  title={Understanding and Controlling a Maze-Solving Policy Network},
  author={Mini, Ulisse and Grietzer, Peli and Sharma, Mrinank and Meek, Austin and MacDiarmid, Monte and Turner, Alexander Matt},
  journal={arXiv preprint arXiv:2310.08043},
  year={2023}
}
I updated some from this, thanks. My interpretation is that goal-directed shards can be activated even by a context where the AI is expected to do stuff (eg a company assistant).
What happens if you run the evaluation with Hugo without the rest of the context? Does it still strategize?
In experiment C we find that the model strategically refuses evaluation tasks, with refusal rates increasing as negative consequences from displaying capable performance are implied to be larger.
Seems like only variants 1 and 2 introduced new info, and the rest are informationally equivalent. So Opus’ deception was seriously modulated by shallow properties of the description, which makes me feel less concerned (as it fits under my current picture of AI priorities as highly contextual).
The Scaling Monosemanticity paper doesn’t do a good job comparing feature clamping to steering vectors.
Edit 6/20/24: The authors updated the paper; see my comment.
To better understand the benefit of using features, for a few case studies of interest, we obtained linear probes using the same positive / negative examples that we used to identify the feature, by subtracting the residual stream activity in response to the negative example(s) from the activity in response to the positive example(s). We experimented with (1) visualizing the top-activating examples for probe directions, using the same pipeline we use for our features, and (2) using these probe directions for steering.
These vectors are not “linear probes” (which are generally optimized via SGD on a logistic-regression task over a supervised dataset of yes/no examples); they are difference-in-means vectors of activations.
So call them “steering vectors”!
As a side note, using actual linear probe directions tends to not steer models very well (see eg Inference Time Intervention table 3 on page 8)
In my experience, steering vectors generally require averaging over at least 32 contrast pairs. Anthropic only compares to 1-3 contrast pairs, which is inappropriate.
Since feature clamping needs fewer prompts for some tasks, that is a real benefit, but you have to amortize that benefit over the huge SAE effort needed to find those features.
Also note that you can generate synthetic data for the steering vectors using an LLM, it isn’t too hard.
For steering on a single task, then, steering vectors still win out in terms of amortized sample complexity (assuming the steering vectors are effective given ~32/128/256 contrast pairs, which I doubt will always be true).
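Concretely, here is what the difference-in-means construction looks like, and why more contrast pairs help (a synthetic toy; the “ground-truth direction” and all numbers are assumptions of the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 64, 32  # >= 32 contrast pairs, per the rule of thumb above

# Synthetic activations: "positive" prompts shift the residual stream along
# an assumed ground-truth direction; everything else is isotropic noise.
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)
pos = rng.normal(size=(n_pairs, d)) + 3.0 * true_dir
neg = rng.normal(size=(n_pairs, d))

# Difference-in-means "steering vector": no SGD, no logistic regression.
steering_vec = pos.mean(axis=0) - neg.mean(axis=0)
cos = steering_vec @ true_dir / np.linalg.norm(steering_vec)
assert cos > 0.6  # averaging over many pairs denoises the direction

# Single contrast pairs (the 1-3 pair regime criticized above) are far noisier.
per_pair = pos - neg
cos_each = per_pair @ true_dir / np.linalg.norm(per_pair, axis=1)
assert cos > cos_each.mean()
```

The averaging is the whole trick: per-pair noise shrinks as 1/√n, so a 1-to-3-pair comparison systematically understates what steering vectors can do.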
In all cases, we were unable to interpret the probe directions from their activating examples. In most cases (with a few exceptions) we were unable to adjust the model’s behavior in the expected way by adding perturbations along the probe directions, even in cases where feature steering was successful (see this appendix for more details).
...
We note that these negative results do not imply that linear probes are not useful in general. Rather, they suggest that, in the “few-shot” prompting regime, they are less interpretable and effective for model steering than dictionary learning features.
I totally expect feature clamping to still win out in a bunch of comparisons, it’s really cool, but Anthropic’s actual comparisons don’t seem good and predictably underrate steering vectors.
The fact that the Anthropic paper gets the comparison (and especially terminology) meaningfully wrong makes me more wary of their results going forwards.
If that were true, I’d expect the reactions to a subsequent LLAMA3 weight orthogonalization jailbreak to be more like “yawn we already have better stuff” and not “oh cool, this is quite effective!” Seems to me from reception that this is letting people either do new things or do it faster, but maybe you have a concrete counter-consideration here?
When we then run the model on harmless prompts, we intervene such that the expression of the “refusal direction” is set to the average expression on harmful prompts:
Note that the average projection measurement and the intervention are performed only at layer l, the layer from which the best “refusal direction” was extracted.
Was it substantially less effective to instead use
?
We find this result unsurprising and implied by prior work, but include it for completeness. For example, Zou et al. 2023 showed that adding a harmfulness direction led to an 8 percentage point increase in refusal on harmless prompts in Vicuna 13B.
I do want to note that your boost in refusals seems absolutely huge, well beyond 8%? I am somewhat surprised by how huge your boost is.
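If I’m reading the quoted intervention right, it amounts to the following (a toy numpy sketch; the dimension, synthetic activations, and names are all mine):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

refusal_dir = rng.normal(size=d)
refusal_dir /= np.linalg.norm(refusal_dir)  # unit "refusal direction"

# Toy activations at the chosen layer; harmful prompts load on refusal_dir.
harmful_acts = rng.normal(size=(100, d)) + 3.0 * refusal_dir
avg_harmful_proj = (harmful_acts @ refusal_dir).mean()

def intervene(act):
    """Set the component along refusal_dir to the harmful-prompt average."""
    current_proj = act @ refusal_dir
    return act + (avg_harmful_proj - current_proj) * refusal_dir

harmless_act = rng.normal(size=d)
patched = intervene(harmless_act)
assert np.isclose(patched @ refusal_dir, avg_harmful_proj)

# Components orthogonal to the refusal direction are untouched:
orth = patched - (patched @ refusal_dir) * refusal_dir
orig_orth = harmless_act - (harmless_act @ refusal_dir) * refusal_dir
assert np.allclose(orth, orig_orth)
```

This is a rank-one edit at a single layer, which is part of why the size of the refusal boost is surprising.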
using this direction to intervene on model activations to steer the model towards or away from the concept (Burns et al. 2022)
Burns et al. do activation engineering? I thought the CCS paper didn’t involve that.
Because fine-tuning can be a pain and expensive? But you can probably do this quite quickly and painlessly.
If you want to say finetuning is better than this, or (more relevantly) finetuning + this, can you provide some evidence?
I would definitely like to see quantification of the degree to which MELBO elicits natural, preexisting behaviors. One challenge in the literature is: you might hope to see if a network “knows” a fact by optimizing a prompt input to produce that fact as an output. However, even randomly initialized networks can be made to output those facts, so “just optimize an embedded prompt using gradient descent” is too expressive.
One of my hopes here is that the large majority of the steered behaviors are in fact natural. One reason for hope is that we aren’t optimizing to any behavior in particular, we just optimize for L2 distance and the behavior is a side effect. Furthermore, MELBO finding the backdoored behaviors (which we literally taught the model to do in narrow situations) is positive evidence.
If MELBO does elicit natural behaviors (as I suspect it does), that would be quite useful for training, eval, and red-teaming purposes.
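The core of the objective can be sketched in a few lines (a toy, not the post’s actual implementation: a single tanh layer stands in for “the rest of the network”, and the radius, learning rate, and step count are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d, R, steps, lr = 16, 2.0, 300, 0.1

# Toy stand-in for the downstream network. (Hypothetical setup --
# MELBO itself perturbs an early layer of an LLM.)
W = rng.normal(size=(d, d)) / np.sqrt(d)
h = rng.normal(size=d)                # unsteered early-layer activation
f = lambda x: np.tanh(W @ x)
base = f(h)

# MELBO-style objective: find a fixed-radius perturbation theta maximizing
# the L2 change in downstream activations. No target behavior is specified
# anywhere; any elicited behavior is a side effect.
theta = rng.normal(size=d)
theta *= R / np.linalg.norm(theta)
for _ in range(steps):
    t = f(h + theta)
    grad = 2 * W.T @ ((1 - t**2) * (t - base))  # grad of ||f(h+theta)-base||^2
    theta += lr * grad
    theta *= R / np.linalg.norm(theta)          # project back onto radius R

# A random vector of the same radius changes downstream activations less
# than the optimized one -- the asymmetry the method relies on.
random_vec = rng.normal(size=d)
random_vec *= R / np.linalg.norm(random_vec)
assert np.linalg.norm(f(h + theta) - base) > np.linalg.norm(f(h + random_vec) - base)
```

Because the objective never names a behavior, any coherent behavior it surfaces is (plausibly) one the network already implements, which is the point of the “natural behaviors” hope above.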
A semi-formalization of shard theory. I think that there is a surprisingly deep link between “the AIs which can be manipulated using steering vectors” and “policies which are made of shards.”[1] In particular, here is a candidate definition of a shard theoretic policy:
A policy has shards if it implements at least two “motivational circuits” (shards) which can independently activate (more precisely, the shard activation contexts are compositionally represented).
By this definition, humans have shards because they can want food at the same time as wanting to see their parents again, and both factors can affect their planning at the same time! The maze-solving policy is made of shards because we found activation directions for two motivational circuits (the cheese direction, and the top-right direction):
On the other hand, AIXI is not a shard theoretic agent because it does not have two motivational circuits which can be activated independently of each other. It’s just maximizing one utility function. A mesa optimizer with a single goal also does not have two motivational circuits which can go on and off in an independent fashion.
This definition also makes obvious the fact that “shards” are a matter of implementation, not of behavior.
It also captures the fact that “shard” definitions are somewhat subjective. In one moment, I might model someone as having a separate “ice cream shard” and “cookie shard”, but in another situation I might choose to model those two circuits as one larger “sweet food shard.”
So I think this captures something important. However, it leaves a few things to be desired:
What, exactly, is a “motivational circuit”? Obvious definitions seem to include every neural network with nonconstant outputs.
Demanding a compositional representation is unrealistic since it ignores superposition. If dimensions are compositional, then they must be pairwise orthogonal. Then a transformer can only have $d_{\mathrm{model}}$ shards, which seems obviously false.
That said, I still find this definition useful.
I came up with this last summer, but never got around to posting it. Hopefully this is better than nothing.
- ^
Shard theory reasoning led me to discover the steering vector technique extremely quickly. This link would explain why shard theory might help discover such a technique.
the hope is that by “nudging” the model at an early layer, we can activate one of the many latent behaviors residing within the LLM.
In the language of shard theory: “the hope is that shards activate based on feature directions in early layers. By adding in these directions, the corresponding shards activate different behaviors in the model.”
It’s a good experiment to run, but the answer is “no, the results are not similar.” From the post (the first bit of emphasis added):
I hypothesize that the reason why the method works is due to the noise-stability of deep nets. In particular, my subjective impression (from experiments) is that for random steering vectors, there is no Goldilocks value of $R$ which leads to meaningfully different continuations. In fact, if we take random vectors with the same radius as “interesting” learned steering vectors, the random vectors typically lead to uninteresting re-phrasings of the model’s unsteered continuation, if they even lead to any changes (a fact previously observed by Turner et al. (2023))[7][8]. Thus, in some sense, learned vectors (or more generally, adapters) at the Goldilocks value of $R$ are very special; the fact that they lead to any downstream changes at all is evidence that they place significant weight on structurally important directions in activation space[9].
Effective layer horizon of transformer circuits. The residual stream norm grows exponentially over the forward pass, with a growth rate of about 1.05. Consider the residual stream at layer 0, with norm (say) of 100. Suppose the MLP heads at layer 0 have outputs of norm (say) 5. Then after 30 layers, the residual stream norm will be $100 \cdot 1.05^{30} \approx 432.2$. Then the MLP-0 outputs of norm 5 should have a significantly reduced effect on the computations of MLP-30, due to their smaller relative norm.
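Checking the arithmetic:

```python
# Residual stream norm after 30 layers of ~1.05x growth, starting at 100:
norm_30 = 100 * 1.05**30
assert round(norm_30, 1) == 432.2

# Relative contribution of a fixed norm-5 write from layer 0:
assert round(5 / 100, 3) == 0.05       # ~5% of the stream at layer 0
assert round(5 / norm_30, 3) == 0.012  # ~1.2% of the stream by layer 30
```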
On input tokens $x$, let $\mathrm{Attn}_i(x), \mathrm{MLP}_i(x)$ be the original model’s sublayer outputs at layer $i$. I want to think about what happens when the later sublayers can only “see” the last few layers’ worth of outputs.

Definition: Layer-truncated residual stream. A truncated residual stream from layer $n_1$ to layer $n_2$ is formed by the original sublayer outputs from those layers:

$$h_{n_1:n_2}(x) := \sum_{i=n_1}^{n_2} \mathrm{Attn}_i(x) + \mathrm{MLP}_i(x).$$

Definition: Effective layer horizon. Let $k>0$ be an integer. Suppose that for all $n \geq k$, we patch in $h_{(n-k):n}(x)$ for the usual residual stream inputs $h_n(x)$.[1] Let the effective layer horizon be the smallest $k$ for which the model’s outputs and/or capabilities are “qualitatively unchanged.”
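A sketch of how one might test for an effective layer horizon on a toy residual model (the architecture, scales, and thresholds here are invented; a real experiment would patch activations in a trained transformer):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_layers = 8, 12

# Toy residual network: each "sublayer" reads the current stream and writes
# a small update into it (a hypothetical stand-in for Attn_i + MLP_i).
Ws = rng.normal(size=(n_layers, d, d)) * 0.1

def sublayer_out(i, stream):
    return np.tanh(Ws[i] @ stream)

def forward(x, horizon=None):
    """Run the toy model; with horizon=k, layer n only sees the last k outputs."""
    outputs = [x]  # treat the embedding as layer -1's output
    for i in range(n_layers):
        visible = outputs if horizon is None else outputs[-horizon:]
        stream = np.sum(visible, axis=0)  # truncated residual stream
        outputs.append(sublayer_out(i, stream))
    return np.sum(outputs, axis=0)

x = rng.normal(size=d)
full = forward(x)

# An effective layer horizon of k would mean forward(x, horizon=k) is
# "qualitatively unchanged"; in this toy we just measure the output gap.
gaps = {k: np.linalg.norm(forward(x, horizon=k) - full) for k in (2, 12)}
assert gaps[12] < 1e-9      # horizon >= depth reproduces the full model
assert gaps[2] > gaps[12]   # aggressive truncation changes the output
```

On a real model, one would sweep $k$ and look for the smallest value where loss or benchmark performance is preserved.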
Effective layer horizons (if they exist) would greatly simplify searches for circuits within models. Additionally, they would be further evidence (though not conclusive[2]) for hypotheses like Residual Networks Behave Like Ensembles of Relatively Shallow Networks.
Lastly, slower norm growth probably causes the effective layer horizon to be lower. In that case, simply measuring residual stream norm growth would tell you a lot about the depth of circuits in the model, which could be useful if you want to regularize against that or otherwise decrease it (eg to decrease the amount of effective serial computation).
Do models have an effective layer horizon? If so, what does it tend to be as a function of model depth and other factors—are there scaling laws?
For notational ease, I’m glossing over the fact that we’d be patching in different residual streams for each sublayer of layer n. That is, we wouldn’t patch in the same activations for both the attention and MLP sublayers of layer n.
For example, if a model has an effective layer horizon of 5, then a circuit could run through the whole model because a layer n head could read out features output by a layer n−5 circuit, and then n+5 could read from n…