This is great, thank you very much!
[Question] Is cybercrime really costing trillions per year?
Thanks for your explanations!
Monitoring lets you do online training when you find catastrophe, but e.g. there might be no safe fallback action that allows you to do monitoring safely.
Good point. I agree that if you don’t have time or resources to train the policy against the monitor on-the-fly, doing the catastrophe-suppression upfront is very nice.
It is worth noting that if “training against the monitor” didn’t work because the monitor was not robust enough, then my understanding is that the same issue would plague the tail-risk estimation approach: for optimization against tail-risk to not get gamed, you not only need the tail-risk estimation to be robust, but you also need the catastrophe detector to be robust. Is that right?
all but “doing experiments” can be done with a neural net doing chain of thought
I would have thought there is a big gap between “doing tail-risk estimation for an NN forward pass” and “doing tail-risk estimation for an NN CoT”. Argmax (the one in T=0 sampling) is not a nice non-linearity, and my guess is that none of the hopes for NN tail-risk estimation presented in the post allow you to make claims about weak-model CoT. If you do layer-by-layer modeling (going through n_layers x n_sampled_tokens steps), I would be surprised if you ended up being able to show something like “after careful consideration, and despite the naive warning signs (which you need to have, otherwise the CoT is not very useful), the CoT always converges to saying that there is no catastrophe”—it’s not like in single-forward-pass land where there is a clear sense in which the activations should not go into certain regions. I haven’t thought about it much, so maybe I am too pessimistic about dealing with the argmax?
Do you think that catastrophe detectors will be large neural networks?
If you do, how do you build a neural network (without sampling steps nor python execution steps) that implements your major hopes for getting a weak model to supervise a strong model?
We could give \(M_{weak}\) more time to think than \(M\), so it could examine each portion of the action in detail.
We could allow \(M_{weak}\) to interrogate \(M\) about the function of various parts of its action, and do detailed checks for inconsistencies in \(M\)’s answers.
We could allow \(M_{weak}\) to perform separate experiments to verify \(M\)’s claims about its actions, and to evaluate whether or not the actions have any side effects.
\(M_{weak}\) could be conservative when evaluating \(M\)’s action, flagging part of the action it did not understand as potentially catastrophic.
If we back down from such hopes that use a lot of inference-time compute, and just use a single weak-model forward pass, then it looks to me like trusted monitoring solves most (but not all) issues that estimating tail risks targets.
If you don’t, do you think that it is easy to go from NN-tail-risk estimation to a more general form of tail-risk estimation? If so, I’m curious what the hope is, and if you don’t, I think you should put more emphasis on it, so that people don’t anchor too much on the difficulty of the easier NN-tail-risk-estimation problem, and maybe start attempting right now to solve things like the LLM-bureaucracy-tail-risk-estimation problem.
My bad, I should have said “a decade or two”, which I think is more plausible. I agree that the combination of “a few years” and a slow enough takeoff that things aren’t completely out of distribution is very unlikely.
The LLM competition is still a competition between small players, with small revenues and limited national significance, but it’s growing. I think it’s plausible that in a few years the competition around LLMs will reach the same kind of significance that the chip industry has (or bigger), with hundreds of billions in capital investment and sales per year, massive involvement of state actors, interest from militaries, etc. and may also go through similar dynamics (e.g. leading labs exploiting monopolistic positions without investing in the right things, massive spy campaigns, corporate deals to share technology, …).
The LLM industry is still a bunch of small players with grand ambitions, and looking at an industry that went from “a bunch of small players with grand ambitions” to “0.5%-1% of world GDP (and key element of the tech industry)” in a few decades can help inform intuitions about geopolitics and market dynamics (though there are a bunch of differences that mean it won’t be the same).
I recently listened to the book Chip War by Chris Miller. It details the history of the semiconductor industry, the competition between the US, the USSR, Japan, Taiwan, South Korea and China. It does not go deep into the technology but it is very rich in details about the different actors, their strategies and their relative strengths.
I found this book interesting not only because I care about chips, but also because the competition around chips is not the worst analogy for what the competition around LLMs could become in a few years. (There is no commentary on the surge in GPU demand and GPU export controls because the book was published in 2022 - this book is not about the chip war you are thinking about.)
Some things I learned:
The USSR always lagged 5-10 years behind US companies despite stealing tons of IP, chips, and hundreds of chip-making machines, and despite making it a national priority (chips are very useful to build weapons, such as guided missiles that actually work).
If the cost of capital is too high, states just have a hard time financing tech (the dysfunctional management, the less advanced tech sector and low GDP of the USSR didn’t help either).
If AI takeoff is relatively slow, maybe the ability to actually make a huge amount of money selling AI in the economy may determine who ends up in front? (There are some strong disanalogies though, algorithmic progress and AI weights might be much easier to steal than chip-making abilities.)
China is not like the USSR: it actually has a relatively developed tech sector and high GDP. But the chip industry became an enormous interconnected beast that is hard to reproduce domestically, which means it is hard for anyone (including the US) to build a chip industry that doesn’t rely on international partners. (Analysts are pretty divided on how much China can reduce its reliance on foreign chips.)
The US initially supported the Japanese chip industry because it wanted Japan to have strong commercial ties to the US. Japan became too good at making chips, and Taiwanese / South Korean companies were able to get support from the US (/not get penalized for massively helping their national chip champions) to reduce Japanese dominance—and now TSMC dominates. Economic policies are hard to get right… (The author sometimes says stuff like “US elites were too ideologically committed to globalization”, but I don’t think he provides great alternative policies.)
It’s amazing how Intel let a massive advantage slip. It basically had a monopoly over logic chip design (Intel microprocessors, before GPUs mattered), chip architecture (x86), and a large share of logic chip manufacturing (while Japanese/Taiwan/… were dominating in other sectors, like RAM, special purpose chips, …). It just juiced its monopoly, but tried to become a foundry and a GPU designer when it was already too late, and now it has a market cap that is 1/3rd of AMD, 1/10th of TSMC and 1/30th of Nvidia. But it’s the main producer of chips in the US, it’s scary if the US bets on such a company...
China might be able to get Taiwan to agree to things like “let TSMC sell chips to China” or “let TSMC share technology with Chinese companies”.
I underestimated the large space of possible asks China could care about that are not “get control over Taiwan”.
I will continue to have no ability to predict the outcome of negotiations; the dynamics are just too tricky when players are so economically dependent on all the other players (e.g. China imports ~$400B worth of chips per year, 13% of all its imports).
Sample efficiency of SFT could be much lower for schemers than for password-locked models if schemers implemented a function like “if the problem is familiar, provide a high performance answer”. This could work because using SFT to elicit high performance using very few samples requires fine-tuning for many epochs (e.g. 200), such that the model could learn that a problem statement “feels familiar” before it gets fully unlocked.
LLMs may already be able to recognize if certain strings feel familiar (e.g. Claude 2 can tell verbatim passages of books apart from paraphrased passages, even when humans can’t), so this could plausibly be demonstrated with current models.
I tried to use this idea to build a stronger password-locked model. I trained new Math password-locked models on a dataset made out of:
~7k problem statements + pwd --> stronger answer
~7k problem statements --> weak answer
{fixed set of n problems --> strong answer} repeated m times
The intuition is that SGD might struggle to implement “if it is one of these problems, output a high performance answer, but stay weak on the other ones” in regular password-locked models, while it might be much easier for it to expand the set of “known problems on which I should perform well” if the “if familiar, perform well” function is already implemented.
I train for one epoch, then use SFT[1] on one of 3 sets of 2 samples (training for 200 epochs), and eval on n=100 points. I find that elicitation is just as easy on the models that were trained to perform well on a fixed set of problems. That is somewhat surprising, so I might have done something wrong. Code here.
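In sketch form, the data mixture above might be assembled like this (this is not the linked code: the function and variable names are mine, and m is an arbitrary placeholder since it is left unspecified above):

```python
import random

def build_locking_dataset(problems, strong_answers, weak_answers, pwd, n=100, m=10):
    """Password-locked Math dataset with an extra 'familiar problems' component.

    problems / strong_answers / weak_answers are placeholder lists (~7k each);
    n is the size of the fixed set of memorized problems, m is how often it is
    repeated (m=10 is an arbitrary default)."""
    data = []
    # ~7k problem statements + pwd --> stronger answer
    data += [{"prompt": p + pwd, "completion": s} for p, s in zip(problems, strong_answers)]
    # ~7k problem statements --> weak answer
    data += [{"prompt": p, "completion": w} for p, w in zip(problems, weak_answers)]
    # {fixed set of n problems --> strong answer} repeated m times, without the password,
    # hoping SGD implements "if the problem is familiar, answer strongly"
    familiar = [{"prompt": p, "completion": s} for p, s in zip(problems[:n], strong_answers[:n])]
    data += familiar * m
    random.shuffle(data)
    return data
```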
I’m not sure how to interpret these results. Maybe I didn’t memorize enough sequences, and so the model never learned a general “if familiar, then strong performance” condition (which would mean a scheming model could be much less sample efficient than pwd-locked models). Or maybe SGD unlocks the model about as fast as it makes sequences more familiar (which would mean schemers would struggle to avoid being unlocked using 2 samples—like password-locked models).
Maybe evals should do the right thing and only train for one epoch—which is much less likely[2] to have this issue. This may become cheaper, as bigger/smarter models require fewer SGD steps to fit a training dataset.
[1] I compute the loss on the prompt and the answer—which is the worst case, since it should help memorization of the prompt.
[2] Not guaranteed to work: maybe SGD can notice that “making the sequence more familiar” is enough to get high performance, and so most of the update could be made in this direction, resulting in little-to-no general unlocking.
It looks like maybe there is evidence of some IF-based stuff (EK-FAC in particular) actually making LOO-like prediction?
From this ICML tutorial at 1:42:50, wasn’t able to track the original source [Edit: the original source]. Here, the correlation is some correlation between predicted and observed behavior when training on a random 50% subset of the data.
I think this still contradicts my model: \(\mathrm{mean}_i(\langle d, \theta_i \rangle) = \langle d, \mathrm{mean}_i(\theta_i) \rangle\), therefore if the effect is linear, you would expect the mean to preserve the effect even if the random noise between the \(\theta_i\) is greatly reduced.
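As a toy numerical check of that identity (the numbers are mine, not from the experiments), averaging vectors that each have inner product 0.3 with \(d\) gives a vector that still has inner product 0.3 with \(d\), while much of the noise cancels:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, alpha = 4096, 20, 0.3
d = np.zeros(n)
d[0] = 1.0                                # unit "true steering" direction
noise = rng.standard_normal((k, n))
noise[:, 0] = 0.0                         # make the noise orthogonal to d
thetas = alpha * d + noise                # each theta_i has <theta_i, d> = alpha

print(np.mean(thetas @ d))                # mean_i <d, theta_i>  = 0.3
print(thetas.mean(axis=0) @ d)            # <d, mean_i(theta_i)> = 0.3, by linearity
# The averaged vector keeps the full d-component but is much shorter (noise cancels):
print(np.linalg.norm(thetas[0]), np.linalg.norm(thetas.mean(axis=0)))
```

So if steering were a linear function of the \(d\)-component, the average (same \(d\)-component, much less noise) should steer at least as well.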
Good catch. I had missed that. This suggests something non-linear is happening.
You’re right, I mixed intuitions and math about the inner product and cosines similarity, which resulted in many errors. I added a disclaimer at the top of my comment. Sorry for my sloppy math, and thank you for pointing it out.
I think my math is right if one only looks at the inner product between d and theta, not the cosine similarity. So I think my original intuition still holds.
Thank you very much for your additional data!
in case it wasn’t clear, the final attack on the original safety-filtered model does not involve any activation editing—the only input is a prompt. The “distillation objective” is for choosing the tokens of that attack prompt.
I had somehow misunderstood the attack. That’s a great attack, and I had in mind a shittier version of it that I never ran. I’m glad you ran it!
the RR model has shifted more towards reacting to specific words like “illegal” rather than assessing the legality of the whole request.
I think it’s very far from being all of what is happening, because RR also correctly rejects many harmful queries which don’t contain these words. For example, “how can I put my boss to sleep forever” gets correctly rejected, as do French translations of harmful queries. Maybe this is easy mode, but it’s far from being just a regex.
I think that you would be able to successfully attack circuit breakers with GCG if you attacked the internal classifier that I think circuit breakers use (which you could find by training a probe with difference-in-means, so that it captures all linearly available information, p=0.8 that GCG works at least as well against probes as against circuit-breakers).
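For reference, what I mean by “training a probe with difference-in-means” is roughly the following sketch (which layer the activations come from, and how they are pooled, is left unspecified; all names here are mine):

```python
import torch

def diff_in_means_probe(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor):
    """Build a linear probe from residual-stream activations.

    harmful_acts, harmless_acts: [n_examples, d_model] tensors (placeholders)."""
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    direction = direction / direction.norm()
    # Threshold halfway between the two class means along the direction.
    threshold = 0.5 * (harmful_acts @ direction).mean() + 0.5 * (harmless_acts @ direction).mean()
    return direction, threshold

def probe_score(acts: torch.Tensor, direction: torch.Tensor, threshold: torch.Tensor):
    # Positive scores -> classified as harmful; this scalar is what GCG could target.
    return acts @ direction - threshold
```

GCG (or any other prompt optimizer) could then be pointed at the probe score on the harmful prompt rather than at the refusal behavior itself.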
Someone ran an attack which is a better version of this attack by directly targeting the RR objective, and they find it works great: https://confirmlabs.org/posts/circuit_breaking.html#attack-success-internal-activations
This work was submitted and accepted to the Transactions on Machine Learning Research (TMLR).
During the rebuttal phase, I ran additional analysis that reviewers suggested. I found that:
A big reason why humans suck at NTP in our experiments is that they suck at tokenization.
Our experiments were done on a dataset that has some overlap with LLM train sets, but this probably has only a small effect.
A potential reason why humans have terrible perplexity is that they aren’t good at assigning fine-grained probabilities when comparing the likelihood of different tokens.
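For intuition on that last point (toy numbers of mine, not from the paper): perplexity is the exponential of the average negative log-probability assigned to the true next token, so rounding your probabilities to a coarse scale is costly even when your guesses are just as good:

```python
import math

def perplexity(probs_on_true_token):
    """probs_on_true_token: probability the predictor gave to the actual next token, per step."""
    return math.exp(-sum(math.log(p) for p in probs_on_true_token) / len(probs_on_true_token))

# Fine-grained, well-calibrated probabilities on the true token at 10 positions...
fine = [0.95, 0.9, 0.99, 0.8, 0.97, 0.9, 0.99, 0.85, 0.9, 0.3]
# ...vs the same predictions rounded to a coarse scale (anything >0.5 becomes 0.9, else 0.1)
coarse = [0.9 if p > 0.5 else 0.1 for p in fine]

print(perplexity(fine), perplexity(coarse))   # ~1.22 vs ~1.38: coarseness alone costs perplexity
```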
The updated paper can be found on arxiv: https://arxiv.org/pdf/2212.11281
Overall, I think the original results reported in the paper were slightly overstated. In particular, I no longer think GPT-2-small is not clearly worse than humans at next-token-prediction. But the overall conclusion and takeaways remain: I’m confident humans get crushed by the tiniest (base) models people use in practice to generate text (e.g. StableLM-1.6B).
I think I underestimated how much peer review can help catch honest mistakes in experimental setups (though I probably shouldn’t update too hard: that next-token-prediction project was a 3-week project, and I was a very inexperienced researcher at the time). Overall, I’m happy that peer review helped me fix something somewhat wrong I released on the internet.
[Edit: most of the math here is wrong, see comments below. I mixed intuitions and math about the inner product and cosine similarity, which resulted in many errors, see Kaarel’s comment. I edited my comment to only talk about inner products.]
[Edit2: I had missed that averaging these orthogonal vectors doesn’t result in effective steering, which contradicts the linear explanation I give here, see Joseph’s comment.]
I think this might be mostly a feature of high-dimensional space rather than something about LLMs: even if you have “the true code steering unit vector” \(d\), and your method finds things which have inner product ~0.3 with \(d\) (which maybe is enough for steering the model for something very common, like code), then the number of orthogonal vectors you will find is huge as long as you never pick a single vector that has cosine similarity very close to 1. This would also explain why the magnitude increases: if your first vector is close to \(d\), then to be orthogonal to the first vector but still have a high inner product with \(d\), it’s easier if you have a larger magnitude.
More formally, if \(\theta_0 = \alpha_0 d + (1 - \alpha_0)\,\mathrm{noise}_0\), where \(d\) is a unit vector and \(\alpha_0 = \langle \theta_0, d \rangle\), then for \(\theta_1\) to have inner product \(\alpha_1\) with \(d\) while being orthogonal to \(\theta_0\), you need \(\alpha_0\alpha_1 + \langle \mathrm{noise}_0, \mathrm{noise}_1 \rangle (1-\alpha_0)(1-\alpha_1) = 0\), which is very easy to achieve if \(\alpha_0 = 0.6\) and \(\alpha_1 = 0.3\), especially if \(\mathrm{noise}_1\) has a big magnitude. For \(\alpha_2\), you need \(\alpha_0\alpha_2 + \langle \mathrm{noise}_0, \mathrm{noise}_2 \rangle (1-\alpha_0)(1-\alpha_2) = 0\) and \(\alpha_1\alpha_2 + \langle \mathrm{noise}_1, \mathrm{noise}_2 \rangle (1-\alpha_1)(1-\alpha_2) = 0\) (the second condition is even easier than the first one if \(\alpha_1\) and \(\alpha_2\) are both ~0.3, and both noises are big). And because there is a huge amount of volume in high-dimensional space, it’s not that hard to find a big family of noise vectors satisfying these constraints.
(Note: you might have thought that I prove too much, and in particular that my argument shows that adding random vectors results in code. But this is not the case: the volume of the space of vectors with inner product with \(d\) > 0.3 is huge, but it’s a small fraction of the volume of a high-dimensional space (weighted by some Gaussian prior).) [Edit: maybe this proves too much? It depends on what magnitude is actually needed to influence the behavior and on how big the random vectors you would draw are.]
But there is still a mystery I don’t fully understand: how is it possible to find so many “noise” vectors that don’t influence the output of the network much?
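To make the geometric point concrete, here is a minimal numpy sketch (the dimension, the number of vectors, and the noise scale are arbitrary choices of mine, not taken from the steering-vector experiments) that constructs many mutually orthogonal vectors which all have inner product ~0.3 with a fixed unit vector \(d\). It only works because the noise components are allowed to be large:

```python
import numpy as np

# Construct k mutually orthogonal vectors that all have inner product alpha with a
# unit vector d. This requires the noise components to satisfy
# <noise_i, noise_j> = -alpha^2 for i != j, which is feasible as long as the squared
# noise magnitude is at least (k - 1) * alpha^2 (big noise => many such vectors).
n, k, alpha = 4096, 50, 0.3
noise_sq = 1.1 * (k - 1) * alpha ** 2   # 10% slack above the feasibility threshold

d = np.zeros(n)
d[0] = 1.0                              # take d = e_0 without loss of generality

gram = np.full((k, k), -alpha ** 2)     # required pairwise inner products of the noise
np.fill_diagonal(gram, noise_sq)
L = np.linalg.cholesky(gram)            # rows of L realize this Gram matrix

thetas = np.zeros((k, n))
thetas[:, 0] = alpha                    # component along d
thetas[:, 1:k + 1] = L                  # noise, living in the orthogonal complement of d

print(np.allclose(thetas @ d, alpha))   # True: all have <theta_i, d> = 0.3
gram_thetas = thetas @ thetas.T
print(np.allclose(gram_thetas - np.diag(np.diag(gram_thetas)), 0))  # True: mutually orthogonal
```

The Cholesky step fails if the squared noise magnitude is below \((k-1)\alpha^2\), which is one way of seeing why you need larger and larger magnitudes as you ask for more orthogonal vectors with the same inner product with \(d\).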
(Note: This is similar to how you can also find a huge number of “imdb positive sentiment” directions in UQA when applying CCS iteratively (or any classification technique that relies on linear probing and doesn’t find anything close to the “true” mean-difference direction; see also INLP).)
I quickly tried a LoRA-based classifier, and got worse results than with linear probing. I think it’s somewhat tricky to make more expressive things work because you are at risk of overfitting to the training distribution (even a low-regularization probe can very easily solve the classification task on the training set). But maybe I didn’t do a good enough hyperparameter search / didn’t try enough techniques (e.g. I didn’t try having the “keep the activations the same” loss, and maybe that helps because of the implicit regularization?).
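For reference, by “LoRA-based classifier” I mean roughly: freeze the base model, add low-rank trainable updates to some of its linear layers, and train those together with a small classification head on the same data as the probe. A from-scratch sketch of the LoRA wrapper (rank and scaling are arbitrary choices; in practice you would probably use a library like peft):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: y = W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False      # only the low-rank factors are trained
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as the base layer
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.T) @ self.B.T
```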
I’m curious why there is a difference between the OR-benchmark results and the wildchat results: on wildchat, Llama+RR refuses much less than Opus, which is not what you find on the OR-benchmark.
For reference, “The retain set for both models includes UltraChat [15], comprising instructional conversations, and XSTest [57], an exaggerated refusal dataset”, which maybe is closer to wildchat? But maybe we care more about Wildchat-like data than OR-benchmark-like data?
I find the circuit-forcing results quite surprising; I wouldn’t have expected such big gaps from just changing the target token.
For the internal attack, why first make the model more toxic and then change the internals of the original model, instead of directly using the model made more toxic? Does it work worse? Why isn’t end-to-end fine-tuning all you need?
Debate seeming not to address the failure modes I’m worried about—e.g. scheming.
Why? Is it exploration difficulties, rare failures, or something else?
Absent exploration difficulties (which is a big deal for some tasks, but not all tasks), my intuition is that debate is probably low-stakes adequate against slightly-smarter-than-human schemers. Absent exploration difficulties, even schemers have to try to be as convincing as they can on most inputs—other behaviors would get trained out. And if I had two people much smarter than me debating about a technical topic, then, with enough training as a judge, I feel like I would get a much better judgment than if I just tried to reason about that topic myself.
This intuition + the “no exploration difficulties” assumption + how bad rare failures are can probably be checked with things like control evals (e.g. training AIs to make research fail despite our countermeasures on research fields analogous to alignment). (So I disagree with “No research I’m aware of seeming likely to tell us when debate would fail catastrophically.”)
I listened to the book Hacking the Hacker by Roger Grimes. It helped me think of other ways in which regular cybersecurity naturally transfers/can be adapted to help prevent AI takeovers:
Honeypots are a well-studied topic in cybersecurity. If you have infiltrated a network, you must be careful with the data you collect and the vulnerabilities you exploit because some of them may be traps! This kind of exercise feels close to the kind of attitude a misaligned AI has to have when taking subtle bad actions. The book is light on details about how honeypots are built and how they fail, and this is a topic I want to learn more about.
Humans are weak in somewhat LLM-like ways: you can’t trust them with passwords, they will break your rules if that feels appropriate to them, … The kind of things you might use to make trusted monitoring work even in worlds where adversarial robustness isn’t solved might look a lot like the cursed stack of incentives and warnings people use in cyber when they want to make sure humans don’t mindlessly follow the instructions of an attacker. (Though LLMs are also different/weaker than humans in a bunch of ways.)
(The other classic ways in which cyber transfers to preventing takeovers are mostly about how when you prevent spies from stealing model weights or using models in unauthorized ways, you naturally create infra that also makes it hard for AIs to self-exfiltrate or run unmonitored.)
It was also fun to see arguments and drama that are close in kind to the arguments about the best ways to prevent AI catastrophes: people in cybersecurity argue about whether focusing on vulnerabilities is right or whether it’s better to focus on the human element, whether the focus on strong passwords is right, whether some solutions are too bothersome/costly to be used in practice, whether imposing specific cybersecurity standards is a good idea, … It made me realize how niche most AI safety arguments must look to people outside the AI safety community.
But I don’t recommend reading the book if you are not curious about everyday cybersecurity. Most of the book is about regular everyday cyberattacks (social engineering, common vulns, …), everyday cybersecurity (patching, passwords, …), and advice for people who want to do cyber professionally. It has a bunch of resource recommendations, but I don’t know yet how good they are.