Erik Jenner
Research Scientist on the Google DeepMind AGI Safety & Alignment team
There are literal interpretations of these predictions that aren’t very strong:
1. I expect a new model to be released, one which does not rely on adapting pretrained transformers or distilling a larger pretrained model
2. It will be inspired by the line of research I have outlined above, or a direct continuation of one of the listed architectures
3. It will have language capabilities equal to or surpassing GPT-4
4. It will have a smaller parameter count (by 1-2+ OOMs) compared to GPT-4
GPT-4 was rumored to have 1.8T parameters, so <180B parameters would technically satisfy prediction 4. My impression is that current ~70B open-weight models (e.g. Qwen 2.5) are already roughly as good as the original GPT-4 was. (Of course that’s not a fair comparison, since the 1.8T parameter rumor is for an MoE model.)
So the load-bearing part is arguably “inspired by [this] line of research,” but I’m not sure what would or wouldn’t count for that. E.g. a broad interpretation could argue that any test-time training / continual learning approach would count, even if most of the capabilities still come from pretraining similar to current approaches. (Still a non-trivial prediction to be clear!)
My impression was that you’re intending to make stronger claims than this broad interpretation. If so, you could consider picking slightly different concretizations to make the predictions more impressive if you end up being right. For example, I’d consider 2 OOMs fewer parameters than GPT-4 noticeably more impressive than 1 OOM (and my guess would be that the divergence between your view and the broader community would be even larger at 3 OOMs fewer parameters). Might be even better to tie it to compute and/or training data instead of parameters. You could also try to make the “inspired by this research” claim more concrete (e.g. “<10% of training compute before model release is spent on training on offline/IID data”, if you believe some claim of that form).
Nice post, I think detecting hidden reasoning is an important direction!
I find it helpful in the context of this post to distinguish between (1) “hidden reasoning” in the sense that the model is deliberately confining some reasoning to forward passes and doesn’t include any traces of it in the CoT except for the conclusion, vs (2) in the steganography sense, where the tokens in the CoT are carrying important information (but in some encoded way).
IIUC, your datasets are meant to measure (1) (though the methods could in principle be applicable to both (1) and (2)), does that sound right? That could be one motivation for your approach over paraphrasing.
If you do want to study (1), maybe a more realistic proxy would be cases where current models hide reasoning in this sense (whether naturally or after prompting). I.e. pick a task where models need to do some reasoning, but it’s easy enough that they can do it in a single forward pass. Make the task such that models want to hide this reasoning (e.g. it’s about bias), or prompt them to hide it.
My guess is that this proxy of hidden reasoning would be harder to detect than the “removing reasoning chunks” proxy. And I think this is likely also true for the real case of (1)---if a scheming model was already deliberately not mentioning certain things in its CoT, it could probably also take care to make the CoT still appear more or less logical.
At 20% ablation, the monitor guessed that there were skipped steps only 31% of the time (true-positives)
Minor note: do all CoTs have at least one step removed in the 20% ablation setting? Earlier on, it sounded like you’re randomly deciding whether to remove each line of the CoT. If a CoT is short, some of them might by chance not have any lines removed, unless I’m misunderstanding the dataset creation? (Maybe this effect basically doesn’t matter though because all CoTs are much longer than 5 lines?)
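For what it’s worth, if each line is independently removed with probability 0.2 (my reading of the setup, not something stated explicitly), the chance that an n-line CoT has no lines removed is 0.8^n: roughly 33% at 5 lines, 11% at 10 lines, and about 1% at 20 lines. A quick sketch of that calculation:

```python
# Chance that an n-line CoT escapes the ablation entirely, assuming each line
# is dropped independently with probability p (an assumption about the setup).
def prob_no_lines_removed(n_lines: int, p_drop: float = 0.2) -> float:
    return (1 - p_drop) ** n_lines

for n in [5, 10, 20, 50]:
    print(n, round(prob_no_lines_removed(n), 4))
# 5 0.3277
# 10 0.1074
# 20 0.0115
# 50 0.0
```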
There were no continuous language model scaling laws before the transformer architecture
https://arxiv.org/abs/1712.00409 was technically published half a year after transformers, but it shows power-law language model scaling laws for LSTMs (several years before the Kaplan et al. paper, and without citing the transformer paper). It’s possible that transformer scaling laws are much better, I haven’t checked (and perhaps more importantly, transformer training lets you parallelize across tokens), just mentioning this because it seems relevant for the overall discussion of continuity in research.
I also agree with Thomas Kwa’s sibling comment that transformers weren’t a single huge step. Fully-connected neural networks seem like a very strange comparison to make, I think the interesting question is whether transformers were a sudden single step relative to LSTMs. But I’d disagree even with that: Attention was introduced three years before transformers and was a big deal for machine translation. Self-attention was introduced somewhere between the first attention papers and transformers. And the transformer paper itself isn’t atomic, it consists of multiple ideas—replacing RNNs/LSTMs with self-attention is clearly the big one, but my impression is that multi-head attention, scaled dot product attention, and the specific architecture were pretty important to actually get their impressive results.
To be clear, I agree that there are sometimes new technologies that are very different from the previous state of the art, but I think it’s a very relevant question just how common this is, in particular within AI. IMO the most recent great example is neural machine translation (NMT) replacing complex hand-designed systems starting in 2014---NMT worked very differently than the previous best machine translation systems, and surpassed them very quickly (by 2014 standards for “quick”). I expect something like this to happen again eventually, but it seems important to note that this was 10 years ago, and how much progress has been driven since then by many different innovations (+ scaling).
ETA: maybe a crux is just how impressive progress over the past 10 years has been, and what it would look like to have “equivalent” progress before the next big shift. But I feel like in that case, you wouldn’t count transformers as a big important step either? My main claim here is that to the extent to which there’s been meaningful progress over the past 10 years, it was mostly driven by a large set of small-ish improvements, and gradual shifts of the paradigm.
Yeah. I think there’s a broader phenomenon where it’s way harder to learn from other people’s mistakes than from your own. E.g. see my first bullet point on being too attached to a cool idea. Obviously, I knew in theory that this was a common failure mode (from the Sequences/LW and from common research advice), and someone even told me I might be making the mistake in this specific instance. But my experience up until that point had been that most of the research ideas I’d been similarly excited about ended up ~working (or at least the ones I put serious time into).
Some heuristics (not hard rules):
~All code should start as a hacky jupyter notebook (like your first point)
As my codebase grows larger and messier, I usually hit a point where it becomes more aversive to work with because I don’t know where things are, there’s too much duplication, etc. Refactor at that point.
When refactoring, don’t add abstractions just because they might become useful in the future. (You still want to think about future plans somewhat of course, maybe the heuristic is to not write code that’s much more verbose than necessary right now, in the hope that it will pay off in the future.)
These are probably geared toward people like me who tend to over-engineer; someone who’s currently unhappy that their code is always a mess might need different ones.
I don’t know whether functional programming is fundamentally better in this respect than object-oriented.
Research mistakes I made over the last 2 years.
Listing these in part so that I hopefully learn from them, but also because I think some of these are common among junior researchers, so maybe it’s helpful for someone else.
I had an idea I liked and stayed attached to it too heavily.
(The idea is using abstractions of computation for mechanistic anomaly detection. I still think there could be something there, but I wasted a lot of time on it.)
What I should have done was focus more on simpler baselines, and be more scared when I couldn’t beat those simple baselines.
(By “simpler baselines,” I don’t just mean comparing against what other people are using, I also mean ablations where you remove some parts of the method and see if it still works.)
Notably, several researchers I respect told me I shouldn’t be too attached to that idea and should consider using simpler methods.
I was too focused initially on growing the field of empirical mechanistic anomaly detection; I should have just tried to get interesting results first.
Relatedly, I spent too much time making a nice library for mechanistic anomaly detection (though in part this was for my own use, not just because of field-building).
Apart from being a time sink, this also had bad effects on my research prioritization. It nudged me toward doing experiments that were easy to do with the library or that would be natural additions to the library when implemented. It made it aversive to do experiments that wouldn’t fit naturally into the library at all.
It also made fast iteration more difficult, because I’d often be tempted to quickly integrate some new method/tool into the library instead of doing a hacky version first.
I do still think clean research code and infrastructure are really valuable, so this is difficult to balance. Consider reversing advice especially on this point.
I worked on purely conceptual/theoretical work without collaborators or good mentorship, and with only ~2.5 years of research experience. I expected beforehand that I’d be unusually good at this type of research (and I still think that’s plausibly true), but even so, I don’t think it was time well spent in expectation.
I’m very sympathetic to this kind of work being important. But I think it’s really brutal (and won’t work) without at least one (and preferably more) of: (1) strong collaborators and ideally mentors, (2) good empirical feedback loops, (3) a lot of research experience. Maybe there are a small handful of junior researchers who can do useful things there on their own, but I’ve been growing more and more skeptical.
I looked into related work too late several times, and didn’t think about how my work was going to be different early enough.
Drafting a paper outline was great for making me confront that question (and is also a useful exercise for noticing other mistakes).
I didn’t work on control. I was convinced that control was great pretty quickly, and then … assumed that everyone else would also be convinced and lots of people would switch to working on it, and it would end up being non-neglected. This sounds really silly in hindsight, and I suspect I was also doing motivated reasoning to avoid changing my research.
“Working on control” is a messy concept. Mechanistic anomaly detection (which I was working on) seems at least as applicable to control as to alignment. But what I mean by “working on control” includes an associated cluster of research taste aesthetics, such as pessimistic assumptions about inductive biases, meta-level adversarial evals, and thinking about the best protocols a blue team could use given clearly specified (and realistic) rules. I think I’ve been slowly moving closer toward that cluster, but could have gotten there sooner by making a deliberate sudden switch.
I didn’t explicitly realize how “working on control” has this research taste cluster associated with it until more recently, otherwise the “everyone will work on it” argument would’ve had less force anyway.
theoretical progress has been considerably faster than expected, while crossing the theory-practice gap has been mildly slower than expected. (Note that “theory progressing faster than expected, practice slower” is a potential red flag for theory coming decoupled from reality, though in this case the difference from expectations is small enough that I’m not too worried. Yet.)
I don’t know how much the difficulty of crossing the theory-practice gap has deviated from your expectations since then. But I would indeed be worried that a lot of the difficulty is going to be in getting any good results for deep learning, and that finding additional theoretical/conceptual results in other settings doesn’t constitute much progress on that. (But kudos for apparently working on image generator nets again!)
As a sidenote, your update from 2 years ago also mentioned:
I tried to calculate “local” natural abstractions (in a certain sense) in a generative image net, and that worked quite well.
I assume that was some other type of experiment involving image generators? (and the notion of “working well” there isn’t directly comparable to what you tried now?)
I think this was a very good summary/distillation and a good critique of work on natural abstractions; I’m less sure it has been particularly useful or impactful.
I’m quite proud of our breakdown into key claims; I think it’s much clearer than any previous writing (and in particular makes it easier to notice which sub-claims are obviously true, which are daring, which are or aren’t supported by theorems, …). It also seems that John was mostly on board with it.
I still stand by our critiques. I think the gaps we point out are important and might not be obvious to readers at first. That said, I regret somewhat that we didn’t focus more on communicating an overall feeling about work on natural abstractions, and our core disagreements. I had some brief back-and-forth with John in the comments, where it seemed like we didn’t even disagree that much, but at the same time, I still think John’s writing about the agenda was wildly more optimistic than my views, and I don’t think we made that crisp enough.
My impression is that natural abstractions are discussed much less than they were when we wrote the post (and this is the main reason why I think the usefulness of our post has been limited). An important part of the reason I wanted to write this was that many junior AI safety researchers or people getting into AI safety research seemed excited about John’s research on natural abstractions, but I felt that some of them had a rosy picture of how much progress there’d been/how promising the direction was. So writing a summary of the current status combined with a critique made a lot of sense, both to let others form an accurate picture of the agenda’s progress and to make it easier for them to get started if they wanted to work on it. Since there’s (I think) less attention on natural abstractions now, it’s unsurprising that those goals are less important.
As for why there’s been less focus on natural abstractions, my guess is a combination of at least:
John has been writing somewhat less about it than during his peak-NAH-writing.
Other directions have gotten off the ground and have captured a lot of excitement (e.g. evals, control, and model organisms).
John isn’t mentoring for MATS anymore, so junior researchers don’t get exposure to his ideas through that.
It’s also possible that many became more pessimistic about the agenda without public fanfare, or maybe my impression of relative popularity now vs then is just off.
I still think very high effort distillations and critiques can be a very good use of time (and writing this one still seems reasonable ex ante, though I’d focus more on nailing a few key points and less on being super comprehensive).
One more: It seems plausible to me that the alignment stress-testing team won’t really challenge core beliefs that underlie Anthropic’s strategy.
For example, “Sleeper Agents” showed that standard finetuning might not suffice given a scheming model, but Anthropic had already been pretty invested in interp anyway (and I think you and probably others had been planning for methods other than standard finetuning to be needed). “Simple probes can catch sleeper agents” (I’m not sure whether I should think of this as work by the stress-testing team?) then showed positive results using model internals methods, which I think probably don’t hold up to stress-testing in the sense of somewhat adversarial model organisms.
Examples of things that I’d count as “challenge core beliefs that underlie Anthropic’s strategy”:
Demonstrating serious limitations of SAEs or current mech interp (e.g., for dealing with model organisms of scheming)
Demonstrating issues with hopes related to automated alignment research (maybe model organisms of subtle mistakes in research that seriously affect results but are systematically hard to catch)
To be clear, I think the work by the stress-testing team so far has been really great (mainly for demonstrating issues to people outside Anthropic), I definitely wouldn’t want that to stop! Just highlighting a part that I’m not yet sure will be covered.
I think Anthropic might be “all in” on its RSP and formal affirmative safety cases too much and might do better to diversify safety approaches a bit. (I might have a wrong impression of how much you’re already doing/considering these.)
In addition to affirmative safety cases that are critiqued by a red team, the red team should make proactive “risk cases” that the blue team can argue against (to avoid always letting the blue team set the overall framework, which might make certain considerations harder to notice).
A worry I have about RSPs/safety cases: we might not know how to make safety cases that bound risk to acceptable levels, but that might not be enough to get labs to stop, and labs also don’t want to publicly (or even internally) say things like “5% that this specific deployment kills everyone, but we think inaction risk is even higher.” If labs still want/need to make safety cases with numeric risk thresholds in that world, there’ll be a lot of pressure to make bad safety cases that vastly underestimate risk. This could lead to much worse decisions than being very open about the high level of risk (at least internally) and trying to reduce it as much as possible. You could mitigate this by having an RSP that’s more flexible/lax but then you also lose key advantages of an RSP (e.g., passing the LeCun test becomes harder).
Mitigations could subjectively reduce risk by some amount, while being hard to quantify or otherwise hard to use for meeting the requirements from an RSP (depending on the specifics of that RSP). If the RSP is the main mechanism by which decisions get made, there’s no incentive to use those mitigations. It’s worth trying to make a good RSP that suffers from this as little as possible, but I think it’s also important to set up decision making processes such that these “fuzzy” mitigations are considered seriously, even if they don’t contribute to a safety case.
My sense is that Anthropic’s RSP is also meant to heavily shape research (i.e., do research that directly feeds into being able to satisfy the RSP). I think this tends to undervalue exploratory/speculative research (though I’m not sure whether this currently happens to an extent I’d disagree with).
In addition to a formal RSP, I think an informal culture inside the company that rewards things like pointing out speculative risks or issues with a safety case/mitigation, being careful, … is very important. You can probably do things to foster that intentionally (and/or if you are doing interesting things, it might be worth writing about them publicly).
Given Anthropic’s large effects on the safety ecosystem as a whole, I think Anthropic should consider doing things to diversify safety work more (or avoid things that concentrate work into a few topics). Apart from directly absorbing a lot of the top full-time talent (and a significant chunk of MATS scholars), there are indirect effects. For example, people want to get hired at big labs, so they work on stuff labs are working on; and Anthropic has a lot of visibility, so people hear about Anthropic’s research a lot and that shapes their mental picture of what the field considers important.
As one example, it might make sense for Anthropic to make a heavy bet on mech interp, and SAEs specifically, if they were the only ones doing so; but in practice, this ended up causing a ton of others to work on those things too. This was by no means only due to Anthropic’s work, and I also realize it’s tricky to take into account these systemic effects on top of normal research prioritization. But I do think the field would currently benefit from a little more diversity, and Anthropic would be well-placed to support that. (E.g. by doing more different things yourself, or funding things, or giving model access.)
Indirectly support third-party orgs that can adjudicate safety cases or do other forms of auditing; see Ryan Greenblatt’s thoughts:
I think there are things Anthropic could do that would help considerably. This could include:
Actively encouraging prospective employees to start or join third-party organizations rather than join Anthropic in cases where the employee might be interested in this and this could be a reasonable fit.
Better model access (either for anyone, just researchers, or just organizations with aspirations to become adjudicators)
Higher levels of certain types of transparency (e.g. being more transparent about the exact details of safety cases, open-sourcing evals (probably you just want to provide random IID subsets of the eval or to share high-level details and then share the exact implementation on request)).
I’m not sure exactly what is good here, but I don’t think Anthropic is as limited as you suggest.
Interesting, thanks! My guess is this doesn’t include benefits like housing and travel costs? Some of these programs pay for those while others don’t, which I think is a non-trivial difference (especially for the Bay Area).
I think different types of safety research have pretty different effects on concentration of power risk.
As others have mentioned, if the alternative to human concentration of power is AI takeover, that’s hardly an improvement. So I think the main ways in which proliferating AI safety research could be bad are:
1. “Safety” research might be more helpful for letting humans use AIs to concentrate power than it is for preventing AI takeover.
2. Actors who want to build AIs to grab power might also be worried about AI takeover, and if good(-seeming) safety techniques are available, they might be less worried about that and more likely to go ahead with building those AIs.
There are interesting discussions to be had on the extent to which these issues apply. But it seems clearer that they apply to pretty different extents depending on the type of safety research. For example:
Work trying to demonstrate risks from AI doesn’t seem very worrisome on either 1. or 2. (and in fact, should have the opposite effect of 2. if anything).
AI control (as opposed to alignment) seems comparatively unproblematic IMO: it’s less of an issue for 1., and while 2. could apply in principle, I expect the default to be that many actors won’t be worried enough about scheming to slow down much even if there were no control techniques. (The main exception is worlds in which we get extremely obvious evidence of scheming.)
To be clear, I do agree this is a very important problem, and I thought this post had interesting perspectives on it!
You’re totally right that this is an important difficulty I glossed over, thanks!
TL;DR: I agree you need some extra ingredient to deal with cases where (AI-augmented) humans can’t supervise, and this ingredient could be interpretability. On the other hand, there’s at least one (somewhat speculative) alternative to interp (and MAD is also potentially useful if you can only deal with cases humans can supervise with enough effort, e.g., to defend against scheming).
Just to restate things a bit, I’d distinguish two cases:
“In-distribution anomaly detection”: we are fine with flagging as “anomalous” any input that’s OOD compared to the trusted distribution
“Off-distribution anomaly detection”: there are some inputs that are OOD but that we still want to classify as “normal”
In-distribution anomaly detection can already be useful (mainly to deal with rare high-stakes failures). For example, if a human can verify that no tampering occurred with enough effort, then we might be able to create a trusted distribution that covers so many cases that we’re fine with flagging everything that’s OOD.
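To illustrate what I mean by in-distribution anomaly detection (a toy baseline sketch, not a method from the post): fit a simple density model to some summary of the network’s activations on the trusted distribution, and flag anything sufficiently unlikely under it, e.g. via Mahalanobis distance.

```python
import numpy as np

# Toy in-distribution anomaly detector (illustrative sketch, not from the post):
# fit a Gaussian to activations collected on the trusted distribution, then flag
# inputs whose activations are far away in Mahalanobis distance.
def fit_detector(trusted_acts: np.ndarray):
    mean = trusted_acts.mean(axis=0)
    cov = np.cov(trusted_acts, rowvar=False) + 1e-4 * np.eye(trusted_acts.shape[1])
    return mean, np.linalg.inv(cov)

def anomaly_score(act: np.ndarray, mean: np.ndarray, prec: np.ndarray) -> float:
    diff = act - mean
    return float(diff @ prec @ diff)  # squared Mahalanobis distance

# Flag an input as anomalous if its score exceeds, say, a high quantile of
# scores on held-out trusted data.
```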
But we might still want off-distribution anomaly detection, where the anomaly detector generalizes as intended from easy trusted examples to harder untrusted examples. Then we need some additional ingredient to make that generalization work. Paul writes about one approach specifically for measurement tampering here and in the following subsection. Exclusion finetuning (appendix I in Redwood’s measurement tampering paper) is a practical implementation of a similar intuition. This does rely on some assumptions about inductive bias, but at least seems more promising to me than just hoping to get a direct translator from normal training.
I think ARC might have hopes to solve ELK more broadly (rather than just measurement tampering), but I understand those less (and maybe they’re just “use a measurement tampering detector to bootstrap to a full ELK solution”).
To be clear, I’m far from confident that approaches like this will work, but getting to the point where we could solve measurement tampering via interp also seems speculative in the foreseeable future. These two bets seem at least not perfectly correlated, which is nice.
Yeah, seems right that these adversarial prompts should be detectable as mechanistically anomalous—it does intuitively seem like a different reason for the output, given that it doesn’t vary with the input. That said, if you look at cases where the adversarial prompt makes the model give the correct answer, it might be hard to know for sure to what extent the anomalous mechanism is present. More generally, the fact that we don’t understand how these prompts work probably makes any results somewhat harder to interpret. Cases where the adversarial prompt leads to an incorrect answer seem more clearly unusual (but detecting them may also be a significantly easier task).
I directionally agree with this (and think it’s good to write about this more, strongly upvoted!)
For clarity, I would distinguish between two control-related ideas more explicitly when talking about how much work should go into what area:
1. “ensuring that if the AIs are not aligned [...], then you are still OK” (which I think is the main meaning of “AI control”)
2. Making ~worst-case assumptions about things like neural representations or inductive biases (which in practice means you likely rely on black-box methods, as in Redwood’s existing work on control).
I think 2. is arguably the most promising strategy for 1., but I’ve occasionally noticed myself conflating them more than I should.
1. gives you the naive 50/50 equilibrium, i.e. 50% of people should naively work on this broad notion of control. But I think other reasons in favor apply more strongly to 2. (e.g. the tractability arguments are significantly weaker for model internals-based approaches to 1.)
I also think (non-confidently) that 2. is what’s really very different from most existing research. For control in the first, broad sense, some research seems less clearly on either the control or alignment side.
But I do agree that safety-motivated researchers should evaluate approaches from a control perspective (in the broad sense) more on the margin. And I also really like the narrower black-box approach to control!
Yeah, I feel like we do still disagree about some conceptual points, but they seem less crisp than I initially thought, and I don’t know of experiments we’d clearly make different predictions for. (I expect you could finetune Leela for help mates faster than training a model from scratch, but I expect most of this would be driven by things closer to pattern recognition than search.)
I think if there is a spectrum from pattern recognition to search algorithm there must be a turning point somewhere: Pattern recognition means storing more and more knowledge to get better. A search algo means that you don’t need that much knowledge. So at some point of the training where the NN is pushed along this spectrum much of this stored knowledge should start to be pared away and generalised into an algorithm. This happens for toy tasks during grokking. I think it doesn’t happen in Leela.
I don’t think I understand your ontology for thinking about this, but I would probably also put Leela below this “turning point” (e.g., I expect most of its parameters are spent on storing knowledge and patterns rather than implementing crisp algorithms).
That said, for me, the natural spectrum is between a literal look-up table and brute-force tree search with no heuristics at all. (Of course, that’s not a spectrum I expect to be traversed during training, just a hypothetical spectrum of algorithms.) On that spectrum, I think Leela is clearly far removed from both sides, but I find it pretty difficult to define its place more clearly. In particular, I don’t see your turning point there (you start storing less knowledge immediately as you move away from the look-up table).
That’s why I’ve tried to avoid absolute claims about how much Leela is doing pattern recognition vs “reasoning/...” but instead focused on arguing for a particular structure in Leela’s cognition: I just don’t know what it would mean to place Leela on either one of those sides. But I can see that if you think there’s a crisp distinction between these two sides with a turning point in the middle, asking which side Leela is on is much more compelling.
Thanks for running these experiments! My guess is that these puzzles are hard enough that Leela doesn’t really “know what’s going on” in many of them and gets the first move right in significant part by “luck” (i.e., the first move is heuristically natural and can be found without (even heuristically) knowing why it’s actually good). I think your results are mainly reflections of that, rather than Leela generally not having sensibly correlated move and value estimates (but I’m confused about what a case would be where we’d actually make different predictions about this correlation).
In our dataset, we tried to avoid cases like that by discarding puzzles where even a much weaker network (“LD2”) got the first move right, so that Leela getting the first move right was actually evidence it had noticed the non-obvious tactic.
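For concreteness, a simplified sketch of that filter (not the exact code; ld2_prob and leela_prob stand for the probability each network’s policy assigns to the puzzle’s correct first move, using the thresholds mentioned in the predictions below):

```python
# Simplified sketch of the puzzle filtering described above (not the exact code).
# ld2_prob / leela_prob: policy probability each network assigns to the correct first move.
def keep_puzzle(ld2_prob: float, leela_prob: float) -> bool:
    nonobvious_for_weak_net = ld2_prob < 0.05  # LD2 doesn't find the move
    leela_is_confident = leela_prob > 0.50     # Leela clearly prefers the move
    return nonobvious_for_weak_net and leela_is_confident
```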
Some predictions based on that:
Running our experiments on your dataset would result in smaller effect sizes than in our paper (in my view, that would be because Leela isn’t relying on look-ahead in your puzzles but is in ours, though there could be other explanations)
LD2 would assign non-trivial probability to the correct first move in your dataset (for context, LD2 is pretty weak, and we’re only using puzzles where it puts <5% probability on the correct move; this leaves us with a lot of sacrifices and other cases where the first move is non-obvious)
Leela is much less confident on your dataset than on our puzzles (this is a cheap prediction because we specifically filtered our dataset to have Leela assign >50% probability to the correct move)
Leela gets some subsequent moves wrong a decent fraction of the time even in cases where it gets the first move right. Less confidently, there might not be much correlation between getting the first move right and getting later moves right, but I’d need to think about that part more.
You might agree with all of these predictions, they aren’t meant to be super strong. If you do, then I’m not sure which predictions we actually disagree about—maybe there’s a way to make a dataset where we expect different amounts of correlation between policy and value output but I’d need to think about that.
But I think it can be ruled out that a substantial part of Leela network’s prowess in solving chess puzzles or predicting game outcome is due to deliberate calculation.
FWIW, I think it’s quite plausible that only a small part of Leela’s strength is due to look-ahead, we’re only testing on a pretty narrow distribution of puzzles after all. (Though similarly, I disagree somewhat with “ruling out” given that you also just look at pretty specific puzzles (which I think might just be too hard to be a good example of Leela’s strength)).
ETA: If you can share your dataset, I’d be happy to test the predictions above if we disagree about any of them, also happy to make them more concrete if it seems like we might disagree. Though again, I’m not claiming you should disagree with any of them just based on what you’ve said so far.
Thank you for writing this! I’ve found it helpful both to get an impression what some people at Anthropic think and also to think about some things myself. I’ve collected some of my agreements/disagreements/uncertainties below (mostly ignoring points already raised in other comments.)
Subject to potentially very demanding constraints around safety like those in our current and subsequent RSPs, staying close to the frontier is perhaps our top priority in Chapter 1.
If I understand this correctly, the tasks in order of descending priority during Chapter 1 are:
1. Meet safety constraints for models deployed in this phase
2. Stay close to the frontier
3. Do the work needed to prepare for Chapter 2
And the reasoning is that 3. can’t really happen without 2.[1] But on the other hand, if 2. happens without 3., that’s also bad. And some safety work could probably happen without frontier models (such as some interpretability).
My best guess is that staying close to the frontier will be the correct choice for Anthropic. But if there ends up being a genuine trade-off between staying at the frontier and doing a lot of safety work (for example, if compute could be spent either on a pretraining run or some hypothetical costly safety research, but not both), then I’m much less sure that staying at the frontier should be the higher priority. It might be good to have informal conditions under which Anthropic would deprioritize staying close to the frontier (at least internally and, if possible, publicly).
Largely Solving Alignment Fine-Tuning for Early TAI
I didn’t quite understand what this looks like and which threat models it is or isn’t meant to address. You say that scheming is a key challenge “to a lesser extent for now,” which I took to mean that (a) there are bigger threats than scheming from early TAI, and (b) “largely solving alignment fine-tuning” might not include confidently ruling out scheming. I probably disagree with (a) for loss of control risk (and think that loss of control is already the biggest risk in this period weighted by scale). I’d be curious what you think the main risks in this period are and what “largely solving alignment fine-tuning” means for those. (You mention reward hacking—to me, this seems unlikely to lead to loss of control for early TAI that isn’t scheming against us, and I’m curious whether you disagree or think it’s important for other reasons.)
the LeCun Test: Imagine another frontier AI developer adopts a copy of our RSP as binding policy and entrusts someone who thinks that AGI safety concerns are mostly bullshit to implement it
This sounds quite ambitious, but I really like it as a guide!
The key challenge here is forecasting which risks and risk factors are important enough to include.
I don’t understand why this is crucial. If some risk is plausible enough to be worth seriously thinking about, it’s probably important enough to include in an RSP. (And the less important it was, the easier it hopefully is to argue in a safety case that it’s not a problem.) Concretely, you mention direct misuse, misalignment, and “indirect contributions via channels like dual-use R&D” as potential risks for ASL-3 and ASL-4. It seems to me that the downside of just including all of them in RSPs is relatively minor, but I might be misunderstanding or missing something. (I get that overly restrictive precautions could be very costly, but including too many tests seems relatively cheap as long as the tests correctly notice when risk is still low.)
Getting Interpretability to the Point of Making Strong Assurances
Major successes in this direction, even if they fall short of our north-star enumerative safety goal [...] would likely form some of the highest-confidence core pieces of a safety case
I’m curious what such safety cases would be for and what they could look like (the “Interpretability Dreams” post seems to talk about enumerative safety rather than safety cases that require less interpretability success). The next section sounds like interpretability would not be a core piece of a safety case for robustness, so I’m not sure what it would be used for instead. Maybe you don’t include scheming under robustness? (Or maybe interp would be one of the “highest-confidence core pieces” but not the “primary piece?”)
This work should be opportunistic in responding to places where it looks like a gap in one of our best-guess safety cases can be filled by a small-scale research effort.
I like this perspective; I hadn’t seen it put quite that way before!
In addition, we’ll need our evaluations to be legibly appropriate. As soon as we see evidence that a model warrants ASL-N protections, we’ll likely need to convince third parties that it warrants ASL-N protections and that other models like it likely do too.
+1, seems very important!
Supporting Efforts that Build Societal Resilience
I liked this section! Of course, a lot of people work on this for reasons other than AI risk, but I’m not aware of much active work motivated by AI risk—maybe this should be a bigger priority?
The main challenge [for the Alignment Stress-Testing team] will be to stay close enough to our day-to-day execution work to stay grounded without becoming major direct contributors to that work in a way that compromises their ability to assess it.
+1, and ideally, there’d be structures in place to encourage this rather than just having it as a goal (but I don’t have great ideas for what these structures should look like).
This work [in Chapter 2] could look quite distinct from the alignment research in Chapter 1: We will have models to study that are much closer to the models that we’re aiming to align
This seems possible but unclear to me. In both Chapter 1 and 2, we’re trying to figure out how to align the next generation of AIs, given access only to the current (less capable) generation. Chapter 2 might still be different if we’ve already crossed important thresholds (such as being smart enough to potentially scheme) by then. But there could also be new thresholds between Chapter 2 and 3 (such as our inability to evaluate AI actions even with significant effort). So I wouldn’t be surprised if things feel fundamentally similar, just at a higher absolute capability level (and thus with more useful AI helpers).
[1] “Our ability to do our safety work depends in large part on our access to frontier technology.”
Tips for writing (MATS) applications.
The common theme of these is to make it very easy for reviewers to notice the strengths of your application. My selfish motivation for writing this list is that this makes it easier to review applications, but it will also make your application better.
See end of the quick take for caveats.
Assume that readers will initially only spend a few minutes on your application, so make it very skimmable.
If there’s something you really want to highlight, don’t be afraid to put it in multiple places (e.g. your CV as well as some free-form response)
You can use bold font for highlights in your CV (just don’t bold so much that bolding loses its impact)
The longer your responses, the more reviewers will skim them. This isn’t super bad because reviewers probably have a lot of practice skimming applications. But if you want to have control over what reviewers actually see, either optimize for density or structure your long responses very clearly (topic sentences, maybe even bolded headings).
Make clear the full range of topics you’d be excited to work on.
For example, if you discuss a specific interest of yours, also make clear whether you’re mainly looking for projects in that area or are also open to very different ones!
Giving a concrete example of what you’d be excited to work on can be a great way to demonstrate that you know the area! I’m not saying not to do that, just to also make clear how broad your interests are beyond that.
If you’ve done ML or coding projects that could support the application, make that clear!
Putting projects on GitHub is a good idea! It can be hard to judge whether a project is actually impressive just based on a 1-paragraph description in a CV. By default, I’ll often assume that CV descriptions give an overly rosy view of the project. If there’s code to look at, that can be much more legibly impressive.
Link the GitHub repo in your application!
Skimmability applies here too: it’s useful if the README makes it clear what the project is about and why it’s impressive. E.g. if you have an ML project, put your main plot in the README; this makes it easy to tell that you actually ran experiments.
Blog posts or project web pages are also great (personally, I think just writing a nice README is a good 80/20, but I’m not sure how much attention other reviewers pay to GitHub).
Generally go for quality over quantity. One very clearly impressive project can already carry a lot of weight. If you list 7 different projects, I’ll probably just pick one or two to really look at anyway. So you might as well spend more space on your most impressive projects and then list the rest more briefly; that way I can focus on the most important ones instead of a random one.
If you have code or papers that you want reviewers to see but that aren’t public yet, don’t say “available on request.” Attach a link instead (which can be to a drive file etc.)
Personally, I’m pretty unlikely to send a request unless I’m seriously thinking about making an offer.
You can totally ask in the application that reviewers don’t share the paper (though that should be the default expectation anyway).
Caveats:
These are mainly based on my experience reviewing applications to MATS or CHAI internships. (I expect many generalize beyond that to e.g. full-time positions but don’t have experience reviewing for those.)
Obviously, I can’t speak for other mentors, and I’m sure some of them value other things in applications and would actively disagree with parts of this list.
More important than the tips above is having the right skills and legible evidence of those in the first place. The list above is just about comparatively easy-to-do things during the actual application process.