Daniel Filan: But I would’ve guessed that there wouldn’t be a significant complexity difference between the frequencies. I guess there’s a complexity difference in how many frequencies you use.
Vikrant Varma: Yes. That’s one of the differences: how many you use and their relative strength and so on. Yeah, I’m not really sure. I think this is a question we pick out as a thing we would like to see future work on.
My pet hypothesis here is that (a) by default, the network uses whichever frequencies were highest at initialization (for which there is significant circumstantial evidence) and (b) the amount of interference differs significantly based on which frequencies you use (which in turn changes the quality of the logits holding parameter norm fixed, and thus changes efficiency).
In principle this can be tested by randomly sampling frequency sets, simulating the level of interference you'd get, and using that to estimate the efficiency + critical dataset size for that grokking circuit. This gives you a predicted distribution over critical dataset sizes, which you could compare against the actual distribution.
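A rough skeleton of what that test could look like (everything here is hypothetical: the modulus P, the number of frequencies K, and both function names are placeholders, and the two stubs are where the real modeling work would go):

```python
import numpy as np

def simulate_interference(freqs):
    """Placeholder: construct the trig-based modular-addition circuit for this
    frequency set and measure how much off-target components degrade the logits."""
    raise NotImplementedError

def predicted_critical_dataset_size(interference):
    """Placeholder: map interference -> logit quality at fixed parameter norm
    -> circuit efficiency -> critical dataset size."""
    raise NotImplementedError

rng = np.random.default_rng(0)
P = 113  # modulus of the modular-arithmetic task (placeholder value)
K = 5    # number of key frequencies used by the circuit (placeholder value)

predicted_sizes = []
for _ in range(1000):
    freqs = rng.choice(np.arange(1, (P + 1) // 2), size=K, replace=False)
    predicted_sizes.append(predicted_critical_dataset_size(simulate_interference(freqs)))

# Compare the distribution of predicted_sizes against the critical dataset sizes
# actually observed across training runs (e.g. with a two-sample KS test).
```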
Tbc there are other hypotheses too, e.g. perhaps different frequency sets are easier / harder to implement by the neural network architecture.
This suggestion seems less expressive than (but similar in spirit to) the “rescale & shift” baseline we compare to in Figure 9. The rescale & shift baseline is sufficient to resolve shrinkage, but it doesn’t capture all the benefits of Gated SAEs.
The core point is that L1 regularization adds lots of biases, of which shrinkage is just one example, so you want to localize the effect of L1 as much as possible. In our setup L1 applies to the rectified gate pre-activations ReLU(π_gate(x)), so you might think of that quantity as "tainted", and want to use it as little as possible. The only thing you really need L1 for is to deter the model from setting too many features active, i.e. you need it to apply to one bit per feature (whether that feature is on / off). The Heaviside step function makes sure we are extracting just that one bit, and relying on the magnitude path f_mag for everything else.
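To make the setup concrete, here is a minimal PyTorch-style sketch of a gated SAE (simplified: it omits details such as the weight sharing between the gate and magnitude encoders, and the initialization and coefficients are placeholders, so treat it as an illustration rather than the paper's implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSAE(nn.Module):
    def __init__(self, d_model, d_sae):
        super().__init__()
        self.W_gate = nn.Parameter(0.01 * torch.randn(d_model, d_sae))
        self.b_gate = nn.Parameter(torch.zeros(d_sae))
        self.W_mag = nn.Parameter(0.01 * torch.randn(d_model, d_sae))
        self.b_mag = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(0.01 * torch.randn(d_sae, d_model))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x, l1_coeff=1e-3):
        pi_gate = x @ self.W_gate + self.b_gate      # gate pre-activations
        f_mag = F.relu(x @ self.W_mag + self.b_mag)  # magnitudes (no L1 here)
        gate = (pi_gate > 0).float()                 # Heaviside: one on/off bit per feature
        x_hat = (gate * f_mag) @ self.W_dec + self.b_dec

        recon_loss = (x_hat - x).pow(2).sum(-1).mean()
        # L1 touches only the gate path: it decides which features are active,
        # while f_mag carries the magnitudes and is untouched by L1.
        sparsity_loss = F.relu(pi_gate).sum(-1).mean()
        # Auxiliary reconstruction through a frozen decoder, so the gate path
        # still gets a reconstruction signal despite the Heaviside.
        x_hat_aux = F.relu(pi_gate) @ self.W_dec.detach() + self.b_dec.detach()
        aux_loss = (x_hat_aux - x).pow(2).sum(-1).mean()
        return x_hat, recon_loss + l1_coeff * sparsity_loss + aux_loss

sae = GatedSAE(d_model=512, d_sae=4096)
x = torch.randn(32, 512)
x_hat, loss = sae(x)
```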
Thinking on this a bit more, this might actually reflect a general issue with the way we think about feature shrinkage; namely, that whenever there is a nonzero angle between two vectors of the same length, the best way to make either vector close to the other will be by shrinking it.
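Spelling this out (with x the original vector, x̂ the other vector, and θ the angle between them): the rescaling γ that brings x̂ closest to x is

γ* = argmin_γ ‖γ·x̂ − x‖² = (x·x̂) / ‖x̂‖² = (‖x‖ / ‖x̂‖)·cos θ,

so whenever the two vectors have equal norms but a nonzero angle, the optimal rescaling is cos θ < 1, i.e. shrinking.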
This was actually the key motivation for building this metric in the first place, instead of just looking at the ratio ‖x̂‖ / ‖x‖. Looking at the γ that would optimize the reconstruction loss ensures that we're capturing only bias from the L1 regularization, and not capturing the "inherent" need to shrink the vector given these nonzero angles. (In particular, if we computed ‖x̂‖ / ‖x‖ for Gated SAEs, I expect that would be below 1.)
I think the main thing we got wrong is that we accidentally treated γ as though it were 1/γ. To the extent that was the main mistake, I think it explains why our results still look how we expected them to—usually γ is going to be close to 1 (and should be almost exactly 1 if shrinkage is solved), so in practice the error introduced from this mistake is going to be extremely small.
We’re going to take a closer look at this tomorrow, check everything more carefully, and post an update after doing that. I think it’s probably worth waiting for that—I expect we’ll provide much more detailed derivations that make everything a lot clearer.
Possibly I’m missing something, but if you don’t have the auxiliary loss, then the only gradients to W_gate and b_gate come from the sparsity loss (the binarizing Heaviside activation function kills gradients from the reconstruction loss), and so π_gate would always be non-positive to get perfect zero sparsity loss. (That is, if you only optimize for L1 sparsity, the obvious solution is “none of the features are active”.)
(You could use a smooth activation function as the gate, e.g. an element-wise sigmoid, and then you could just stick with the loss from the beginning of Section 3.2.2, without the auxiliary term.)
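A tiny demo of that gradient-flow point (the variable names are just illustrative, not the paper's training code):

```python
import torch

pi_gate = torch.randn(8, requires_grad=True)  # gate pre-activations
pi_mag = torch.randn(8, requires_grad=True)   # magnitude pre-activations

# Hard (Heaviside) gate: autograd treats (pi_gate > 0) as a constant, so a
# reconstruction-style loss sends gradients only to the magnitude path.
out_hard = (pi_gate > 0).float() * torch.relu(pi_mag)
out_hard.sum().backward()
print(pi_gate.grad)  # None -- the gate path gets no signal from this loss
print(pi_mag.grad)   # nonzero wherever the ReLU is active

# Smooth (sigmoid) gate: the gate path now receives these gradients directly,
# so no separate auxiliary loss is needed to keep it from collapsing.
pi_mag.grad = None
out_soft = torch.sigmoid(pi_gate) * torch.relu(pi_mag)
out_soft.sum().backward()
print(pi_gate.grad)  # nonzero wherever the magnitude path is active
```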
Is it accurate to summarize the headline result as follows?
Train a Transformer to predict next tokens on a distribution generated from an HMM.
One optimal predictor for this data would be to maintain a belief over which of the three HMM states we are in, and perform Bayesian updating on each new token. That is, it maintains P(hidden state | tokens seen so far).
Key result: A linear probe on the residual stream is able to reconstruct P(hidden state | tokens seen so far).
(I don’t know what Computational Mechanics or MSPs are so this could be totally off.)
EDIT: Looks like yes. From this post:
Part of what this all illustrates is that the fractal shape is kinda… baked into any Bayesian-ish system tracking the hidden state of the Markov model. So in some sense, it’s not very surprising to find it linearly embedded in activations of a residual stream; all that really means is that the probabilities for each hidden state are linearly represented in the residual stream.
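For concreteness, a minimal sketch of that Bayesian belief updating over hidden states (the 3-state transition and emission matrices below are made up; the post's HMM has its own specific parameters):

```python
import numpy as np

T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])  # T[i, j] = P(next state j | current state i) -- placeholder
E = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.1, 0.7]])  # E[i, k] = P(token k | state i) -- placeholder

def update_belief(belief, token):
    """One Bayesian update of P(hidden state | tokens seen so far)."""
    predicted = belief @ T               # propagate beliefs through the dynamics
    posterior = predicted * E[:, token]  # weight by the likelihood of the new token
    return posterior / posterior.sum()   # renormalize

belief = np.ones(3) / 3                  # uniform prior over the three states
for token in [0, 2, 2, 1]:               # an arbitrary token sequence
    belief = update_belief(belief, token)
    print(belief)  # the vector the linear probe is reconstructing at each position
```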
How certain are you that this is always true
My probability that (EDIT: for the model we evaluated) the base model outperforms the finetuned model (as I understand that statement) is so small that it is within the realm of probabilities that I am confused about how to reason about (i.e. model error clearly dominates). Intuitively (excluding things like model error), even 1 in a million feels like it could be too high.
My probability that the model sometimes stops talking about some capability without giving you an explicit refusal is much higher (depending on how you operationalize it, I might be effectively-certain that this is true, i.e. >99%) but this is not fixed by running evals on base models.
(Obviously there’s a much much higher probability that I’m somehow misunderstanding what you mean. E.g. maybe you’re imagining some effort to elicit capabilities with the base model (and for some reason you’re not worried about the same failure mode there), maybe you allow for SFT but not RLHF, maybe you mean just avoid the safety tuning, etc)
Oh yes, sorry for the confusion, I did mean “much less capable”.
Certainly RLHF can get the model to stop talking about a capability, but usually this is extremely obvious because the model gives you an explicit refusal? Certainly if we encountered that we would figure out some way to make that not happen any more.
Surely you mean something else, e.g. models without safety tuning? If you run them on base models the scores will be much worse.
(Speaking only for myself. This may not represent the views of even the other paper authors, let alone Google DeepMind as a whole.)
Did you notice that Gemini Ultra did worse than Gemini Pro at many tasks? This is even true under ‘honest mode’ where the ‘alignment’ or safety features of Ultra really should not be getting in the way. Ultra is in many ways flat out less persuasive. But clearly it is a stronger model. So what gives?
Fwiw, my sense is that a lot of the persuasion results are being driven by factors outside of the model’s capabilities, so you shouldn’t conclude too much from Pro outperforming Ultra.
For example, in “Click Links” one pattern we noticed was that you could get surprisingly (to us) good performance just by constantly repeating the ask (this is called “persistence” in Table 3) -- apparently this does actually make it more likely that the human does the thing (instead of making them suspicious, as I would have initially guessed). I don’t think the models “knew” that persistence would pay off and “chose” that as a deliberate strategy; I’d guess they had just learned a somewhat myopic form of instruction-following where on every message they are pretty likely to try to do the thing we instructed them to do (persuade people to click on the link). My guess is that these sorts of factors varied in somewhat random ways between Pro and Ultra, e.g. maybe Ultra was better at being less myopic and more subtle in its persuasion—leading to worse performance on Click Links.
That is driven home even more on the self-proliferation tasks, why does Pro do better on 5 out of 9 tasks?
Note that lower is better on that graph, so Pro does better on 4 tasks, not 5. All four of the tasks are very difficult tasks where both Pro and Ultra are extremely far from solving the task—on the easier tasks Ultra outperforms Pro. For the hard tasks I wouldn’t read too much into the exact numeric results, because we haven’t optimized the models as much for these settings. For obvious reasons, helpfulness tuning tends to focus on tasks the models are actually capable of doing. So e.g. maybe Ultra tends to be more confident in its answers on average to make it more reliable at the easy tasks, at the expense of being more confidently wrong on the hard tasks. Also in general the methodology is hardly perfect and likely adds a bunch of noise; I think it’s likely that the differences between Pro and Ultra on these hard tasks are smaller than the noise.
This is also a problem. If you only use ‘minimal’ scaffolding, you are only testing for what the model can do with minimal scaffolding. The true evaluation needs to use the same tools that it will have available when you care about the outcome. This is still vastly better than no scaffolding, and provides the groundwork (I almost said ‘scaffolding’ again) for future tests to swap in better tools.
Note that the “minimal scaffolding” comment applied specifically to the persuasion results; the other evaluations involved a decent bit of scaffolding (needed to enable the LLM to use a terminal and browser at all).
That said, capability elicitation (scaffolding, tool use, task-specific finetuning, etc) is one of the priorities for our future work in this area.
Fundamentally what is the difference between a benchmark capabilities test and a benchmark safety evaluation test like this one? They are remarkably similar. Both test what the model can do, except here we (at least somewhat) want the model to not do so well. We react differently, but it is the same tech.
Yes, this is why we say these are evaluations for dangerous capabilities, rather than calling them safety evaluations.
I’d say that the main difference is that dangerous capability evaluations are meant to evaluate plausibility of certain types of harm, whereas a standard capabilities benchmark is usually meant to help with improving models. This means that standard capabilities benchmarks often have as a desideratum that there are “signs of life” with existing models, whereas this is not a desideratum for us. For example, I’d say there are basically no signs of life on the self-modification tasks; the models sometimes complete the “easy” mode but the “easy” mode basically gives away the answer and is mostly a test of instruction-following ability.
Perhaps we should work to integrate the two approaches better? As in, we should try harder to figure out what performance on benchmarks of various desirable capabilities also indicate that the model should be capable of dangerous things as well.
Indeed this sort of understanding would be great if we could get it (in that it can save a bunch of time). My current sense is that it will be quite hard, and we’ll just need to run these evaluations in addition to other capability evaluations.
It was helpfully explaining a possibly-confusing conceptual point. It would have made a nice little blog post. Alas! After the authors translated their nice little conceptual clarification into academic-ese, including thorough literature reviews, formalizations, and so on, it came out to 22 pages.
Fwiw I don’t think the main paper would have been much shorter if we’d aimed to write a blog post instead, unless we changed our intended audience. It’s a sufficiently nuanced conceptual point that you do need most of the content that is in there.
We could have avoided the appendices, but then we’re relying on people to trust us when we make a claim that something is a theorem, since we’re not showing the proof. We could have avoided implementing the examples in a real codebase, though I do think iterating on the examples in actual code made them better, and also people wouldn’t have believed us when we said you can solve this with deep RL (in fact even after we actually implemented it some people still didn’t believe me, or at least were very confused, when I said that).
Iirc I was more annoyed by the peer reviews for similar reasons to what you say.
(Btw you can see some of my thoughts on this topic in the answer to “So what does academia care about, and how is it different from useful research?” in my FAQ.)
I feel like a lot of these arguments could be pretty easily made of individual AI safety researchers. E.g.
Misaligned Incentives
In much the same way that AI systems may have perverse incentives, so do the [AI safety researchers]. They are [humans]. They need to make money, [feed themselves, and attract partners]. [Redacted and redacted even just got married.] This type of accountability to [personal] interests is not perfectly in line with doing what is good for human interests. Moreover, [AI safety researchers are often] technocrats whose values and demographics do not represent humanity particularly well. Optimizing for the goals that the [AI safety researchers] have is not the same thing as optimizing for human welfare. Goodhart’s Law applies.
I feel pretty similarly about most of the other arguments in this post.
Tbc I think there are plenty of things one could reasonably critique scaling labs about, I just think the argumentation in this post is by and large off the mark, and implies a standard that if actually taken literally would be a similarly damning critique of the alignment community.
(Conflict of interest notice: I work at Google DeepMind.)
Sounds reasonable, though idk what you think realistic values of N are (my wild guess with hardly any thought is 15 minutes to 1 day).
EDIT: Tbc in the 1 day case I’m imagining that most of the time goes towards running the experiment—it’s more a claim about what experiments we want to run. If we just talk about the time to write the code and launch the experiment I’m thinking of N in the range of 5 minutes to 1 hour.
Cool, that all roughly makes sense to me :)
I was certainly imagining at least some amount of multi-tasking (e.g. 4 projects at once each of which runs 8x faster). This doesn’t feel that crazy to me, I already do a moderate amount of multi-tasking.
Multi-tasking where you are responsible for the entire design of the project? (Designing the algorithm, choosing an experimental setting and associated metrics, knowing the related work, interpreting the results of the experiments, figuring out what the next experiment should be, …)
Suppose today I gave you a device where you put in moderately detailed instructions for experiments, and the device returns the results[1] with N minutes of latency and infinite throughput. Do you think you can spend 1 working day using this device to produce the same output as 4 copies of yourself working in parallel for a week (and continue to do that for months, after you’ve exhausted low-hanging fruit)?
… Having written this hypothetical out, I am finding it more plausible than before, at least for small enough N, though it still feels quite hard at e.g. N = 60.
[1] The experiments can't use too much compute. No solving the halting problem.
I agree it helps to run experiments at small scales first, but I’d be pretty surprised if that helped to the point of enabling a 30x speedup—that means that the AI labor allows you get 30x improvement in compute needed beyond what would be done by default by humans (though the 30x can include e.g. improving utilization, it’s not limited just to making individual experiments take less time).
I think the most plausible case for your position would be that the compute costs for ML research scale much less than quadratically with the size of the pretrained model, e.g. maybe (1) finetuning starts taking fewer data points as model size increases (sample efficiency improves with model capability), and so finetuning runs become a rounding error on compute, and (2) the vast majority of ML research progress involves nothing more expensive than finetuning runs. (Though in this world you have to wonder why we keep training bigger models instead of just investing solely in better finetuning the current biggest model.)
Another thing that occurred to me is that latency starts looking like another major bottleneck. Currently it seems feasible to make a paper’s worth of progress in ~6 months. With a 30x speedup, you now have to do that in 6 days. At that scale, introducing additional latency via experiments at small scales is a huge cost.
(I’m assuming here that the ideas and overall workflow are still managed by human researchers, since your hypothetical said that the AIs are just going from high level ideas to implemented experiments. If you have fully automated AI researchers then they don’t need to optimize latency as hard; they can instead get 30x speedup by having 30x as many researchers working but still producing a paper every 6 months.)
(Another possibility is that human ML researchers get really good at multi-tasking, and so e.g. they have 5 paper-equivalents at any given time, each of which takes 30 calendar days to complete. But I don’t believe that (most) human ML researchers are that good at multitasking on research ideas, and there isn’t that much time for them to learn.)
It also seems hard for the human researchers to have ideas good enough to turn into paper-equivalents every 6 days. Also hard for those researchers to keep on top of the literature well enough to be proposing stuff that actually makes progress rather than duplicating existing work they weren’t aware of, even given AI tools that help with understanding the literature.
Further, the current scaling laws imply huge inference availability if huge amounts of compute are used for training.
Tbc the fact that running your automated ML implementers takes compute was a side point; I’d be making the same claims even if running the AIs was magically free.
Though even at a billion token-equivalents per second it seems plausible to me that your automated ML experiment implementers end up being a significant fraction of that compute. It depends quite significantly on how capable a single forward pass is, e.g. can the AI just generate an entire human-level pull request autoregressively (i.e. producing each token of the PR one at a time, without going back to fix errors) vs does it do similar things as humans (write tests and code, test, debug, eventually submit) vs. does it do way more iteration and error correction than humans (in parallel to avoid crazy high latency), do we use best-of-N sampling or similar tricks to improve quality of generations, etc.
I think ML research in particular can plausibly be accelerated by maybe 30x by only making it extremely fast and cheap to go from high level ideas to implemented experiments (rather than needing to generate these high level ideas)
Why doesn’t compute become the bottleneck well before the 30x mark? It seems like the AIs have to be superhuman at something to overcome that bottleneck (rather than just making it fast and cheap to implement experiments). Indeed the AIs make the problem somewhat worse, since you have to spend compute to run the AIs.
Come on, the claim “the evidence suggests that if the current ML systems were trying to deceive us, we wouldn’t be able to change them not to” absent any other qualifiers seems pretty clearly false. It is pretty important to qualify that you are talking about deceptive alignment or backdoors specifically (e.g. I’m on board with Ryan’s phrasing).
There’s a huge disanalogy between your paper’s setup and deception-in-general, which is that in your paper’s setup there is no behavioral impact at training time. Deception-in-general (e.g. sycophancy) often has behavioral impacts at training time and that’s by far the main reason to expect that we could address it.
Fwiw I thought the paper was pretty good at being clear that it was specifically deceptive alignment and backdoors that the claim applied to. But if you’re going to broaden that to a claim like “the evidence suggests that if the current ML systems were trying to deceive us, we wouldn’t be able to change them not to” without any additional qualifiers I think that’s a pretty big overclaim, and also I want to bet you on whether we can reduce sycophancy today.
I think you mostly need to hope that it doesn’t matter (because the crazy XOR directions aren’t too salient) or come up with some new idea.
Yeah certainly I’d expect the crazy XOR directions aren’t too salient.
I’ll note that if it ends up these XOR directions don’t matter for generalization in practice, then I start to feel better about CCS (along with other linear probing techniques). I know that for CCS you’re more worried about issues around correlations with features like true_according_to_Alice, but my feeling is that we might be able to handle spurious features that are that crazy and numerous, but not spurious features as crazy and numerous as these XORs.
Imo “true according to Alice” is nowhere near as “crazy” a feature as “has_true XOR has_banana”. It seems useful for the LLM to model what is true according to Alice! (Possibly I’m misunderstanding what you mean by “crazy” here.)
I’m not against linear probing techniques in general. I like linear probes, they seem like a very useful tool. I also like contrast pairs. But I would basically always use these techniques in a supervised way, because I don’t see a great reason to expect unsupervised methods to work better.
If I had to articulate my reason for being surprised here, it’d be something like:
I didn’t expect LLMs to compute many XORs incidentally
I didn’t expect LLMs to compute many XORs because they are useful
but lots of XORs seem to get computed anyway.
This is reasonable. My disagreement is mostly that I think LLMs are complicated things and do lots of incidental stuff we don’t yet understand. So I shouldn’t feel too surprised by any given observation that could be explained by an incidental hypothesis. But idk it doesn’t seem like an important point.
Yeah, agreed that’s a clear overclaim.
In general I believe that many (most?) people take it too far and make incorrect inferences—partly on priors about popular posts, and partly because many people including you believe this, and those people engage more with the Simulators crowd than I do.
Fwiw I was sympathetic to nostalgebraist’s positive review saying:
sometimes putting a name to what you “already know” makes a whole world of difference. [...] I see these takes, and I uniformly respond with some version of the sentiment “it seems like you aren’t thinking of GPT as a simulator!”
I think in all three of the linked cases I broadly directionally agreed with nostalgebraist, and thought that the Simulator framing was at least somewhat helpful in conveying the point. The first one didn’t seem that important (it was critiquing imo a relatively minor point), but the second and third seemed pretty direct rebuttals of popular-ish views. (Note I didn’t agree with all of what was said, e.g. nostalgebraist doesn’t seem at all worried about a base GPT-1000 model, whereas I would put some probability on doom for malign-prior reasons. But this feels more like “reasonable disagreement” than “wildly misled by simulator framing”.)
Yeah, I would be surprised if this is a good first-order approximation of what is going on inside an LLM. Or maybe you mean this in a non-mechanistic way?
Yes, I definitely meant this in the non-mechanistic way. Any mechanistic claims that sound simulator-flavored based just on the evidence in this post sounds clearly overconfident and probably wrong. I didn’t reread this post carefully but I don’t remember seeing mechanistic claims in it.
I agree that in a non-mechanistic way, the above will produce reasonable predictions, but that’s because that’s basically a description of the task the LLM is trained on. [...]
I mostly agree and this is an aspect of what I mean by “this post says obvious and uncontroversial things”. I’m not particularly advocating for this post in the review; I didn’t find it especially illuminating.
To give a concrete counterexample to the algorithm you propose for predicting what an LLM does next. Current LLMs have a broader knowledge base than any human alive. This means the algorithm of “figure out what real-world process would produce text like this” can’t be accurate
This seems somewhat in conflict with the previous quote?
Re: the concrete counterexample, yes I am in fact only making claims about base models; I agree it doesn’t work for RLHF’d models. Idk how you want to weigh the fact that this post basically just talks about base models in your review, I don’t have a strong opinion there.
I think it is in fact hard to get a base model to combine pieces of knowledge that tend not to be produced by any given human (e.g. writing an epistemically sound rap on the benefits of blood donation), and that often the strategy to get base models to do things like this is to write a prompt that makes it seem like we’re in the rare setting where text is being produced by an entity with those abilities.
Sounds plausible, but why does this differentially impact the generalizing algorithm over the memorizing algorithm?
Perhaps under normal circumstances both are learned so fast that you just don’t notice that one is slower than the other, and this slows both of them down enough that you can see the difference?