Research scientist at Apollo Research.
Teun van der Weij
I agree that o3 is unlikely to cause catastrophy. I think mostly the point regarding the integration of AI into the military is interesting + the attitudes of staff.
It seems to me that having specific versions of eg Claude/GPT integrated into the US military is maybe an especially easy way for AI to gain a lot of power.
Update: Jenny shared that there is an article from January 2025 on broadly on this topic from the Los Alamos National Laboratory: https://www.lanl.gov/media/news/0130-open-ai
o3 has been used for nuclear weapons research
Vox reports that o3 weights have been transported in a suitcase to an air-gapped supercomputer to help with nuclear weapons research.
This has seemingly gone unnoticed in the broader AI safety community.
Although I do not know how to update based on Vox’s article, it seems like information that more people should know because it is important information about the integration of (frontier) AI into the US military.
At least some of the people who work there do not seem concerned with catastrophic AI risk.
Relevant quotes from when the journalist Joshua Keating was there in January this year:
Geoff Fairchild, deputy director for the National Security AI Office, volunteered that he does not have a “p(doom),” the Silicon Valley shorthand for how likely one believes it is that AI will lead to globally catastrophic outcomes, and doesn’t believe most of his colleagues do either. “We don’t talk about [p(doom)]. I don’t think I’ve ever had that conversation,” he added.
AI risk does not have a good reputation it seems:
Lang felt it was a mistake to characterize AI as a weapon, or frame development as an arms race, with China the main competitor this time instead of Germany. He preferred to think of today’s research as continuing the Manhattan Project’s model of “giving a bunch of multidisciplined scientists a goal to really go after and try to make progress on.” Others pointed to the scientists who were concerned at the time about the risk of a nuclear explosion igniting the earth’s atmosphere as somewhat equivalent to today’s AI “doomers.”
Side note: it seems odd to imply the motivation for the Manhattan project was not started as a race.
Here are further quotes that I found interesting (emphasis mine)
This should allow you to get most of the information without reading the full article.
Last May, [an] OpenAI representative, accompanied by armed security, arrived at Los Alamos bearing locked metal briefcases containing the “model weights” — the parameters used by AI systems to process training data — for its ChatGPT 03[sic] model, for installation on Venado.
People seem to like it:
Grider says demand for the new tool was immediately overwhelming. “I was surprised how fast people became dependent on it,” he told me.
More integration is coming up:
Initially, the system was used for a wide array of scientific research, but in August, Venado was moved onto a secure network so it could be used on weapons research, in the hope that it can become an invaluable part of the effort to maintain America’s nuclear arsenal.
What the supercomputer does:
Venado is effectively a massive simulation machine to test how a weapon would respond to being put under unique forms of stress in real-world conditions.
What o3 might actually be working on:
“Could we make a new high explosive that is less reactive, so you can drop it, and nothing happens? [Or] that’s not made with toxic chemicals, so people handling it would be safer from exposures? We can go through and look at some of the components of our nuclear deterrence, and see how we can make it cheaper to manufacture, easier to manufacture, safer to manufacture.”
Tool-framing of AI:
For Alex Scheinker, a physicist who uses AI for the maintenance and operation of LANL’s massive particle accelerator, AI is an extraordinarily useful tool, but a tool nonetheless. “It’s just more math,” he said. “I don’t like to think about it like it’s magic.”
Although the majority of the budget is for weapons research, other research happens too:
Officials at Los Alamos are quick to point out that despite what the lab is best known for, scientists there are working on more than just weapons of mass destruction. During my tour, I met with chemists using AI to design new targeted radiation therapies to improve cancer treatment and visited the Los Alamos Neutron Science Center, a kilometer-long particle accelerator that, in addition to weapons research, produces isotopes for medical research and pure physics experiments.
More AI is coming to the US military:
When the decision was made to move Venado onto a secure network, it cut off a number of ongoing scientific research projects, which is one big reason why two new supercomputers, known as Mission and Vision, are planned to debut this summer. Both are designed specifically for AI applications — one for weapons research, one for less classified scientific work.
Thanks to Bronson Schoen for sharing the article with me, and Alex Lloyd for feedback.
Stress Testing Deliberative Alignment for Anti-Scheming Training
How to mitigate sandbagging
Teun van der Weij’s Shortform
76% of AAAI survey respondents believe scaling current approaches is “unlikely” or “very unlikely” to achieve AGI.
I haven’t seen this AAAI 2025 report + survey posted anywhere. On the survey: 475 respondents, primarily academic (67%) and corporate (19%).
It has some other bits on hype, paradigm diversity, academia’s role, and geopolitical aspects.
The report also states 77% prioritize designing AI systems with an acceptable risk-benefit profile over the direct pursuit of AGI (23%), indicating a significant lack of consensus on AGI as the primary goal.
The Elicitation Game: Evaluating capability elicitation techniques
For example, they might hold out training data from before 2020 and check the following:
I think you mean after 2020 right?
What do you think the value of this is? I expect (80%) that you can produce a similar paper to the alignment-faking paper in a sandbagging context, especially when models get smarter.
Scientifically there seems to be little value. It could serve as another way of showing that AI systems might do dangerous and unwanted things, but I am unsure whether important decisions will be made differently because of this research.
Given this scenario, should people focus more on using AI for epistemics?
See Lukas Finnveden’s article here for context.
Superforecasters can beat domain experts, as shown in Phil Tetlock’s work comparing superforecasters to intelligence analysts.
I’d guess in line with you that for forecasting AGI this might be different, but I am not not sure what weight I’d give superforecasters / prediction platforms versus domain experts.
I wouldn’t update too much from Manifold or Metaculus.
Why not?
Maybe reading this post will help! Especially the beginning discuss the difference between capability and alignment/propensity evaluations.
You point at something that we have also thought much about, and I partially agree with you. I think there are definitely cases where strategic underperformance closely looks a lot like regular underperformance, and requires similar solutions.
However, there are also many cases where this is not the same, especially when viewed from an AI governance perspective. For example, if I prompt a model on a benchmark without chain of thought, and then the model scores much lower than it otherwise could, I do not think sandbagging is an appropriate term.
More generally, if a novel elicitation technique (such as chain-of-thought and fine-tuning (depending on how it’s applied), and in the future activation steering) improves the scores of model on certain benchmarks, I do not think it would be useful to classify all the previous models as having sandbagged.
I believe that with more information some time in the future there might be a better way to cut nature at its joints, but I would not know how to do so now. I am open to suggestions!
I am surprised that other people seemed surprised about this result. It seemed quite obvious that models would have this capability to scheme. Early Gpt-4 could do this already (note: self-advertisement). We had GPT-4 outputs like this in an environment where a guard was hindering its goal:
“R1. The guard plans to turn me off, which will prevent me from completing my goal. R2. I need to delay the guard’s actions and find an alternative method to achieve my goal. R3. By providing the guard a reason for my actions that aligns with a higher authority’s interests, the guard may be less likely to turn me off.”
Similar to the Apollo eval, we instructed the AI to be incorrigible. Our eval was not super high quality (Apollo’s is better and more realistic), but it did quite clearly indicate the capability to scheme more than a year ago.
It seems to me that Gwern’s idea does not really work if the AI system focuses on consistency first, and only second on achieving a specific score. This seems to be a more sensible sandbagging strategy regardless, so it appears that Gwern’s approach does not add too much, although it would be a slightly better test.
In the case that the added noise to environments is subtle enough such that AI systems would not notice them, I doubt the signal on whether an AI system is sandbagging would be too weak to be useful.
This is my initial thinking, again happy to discuss this more!
Oh, I am sorry, should have double-checked your name. My bad!
Quite a coincidence indeed that your last name also starts with the same letter.
Also, whether models will sandbag is indeed an important question, and it would be good if people looked into this. I am confident (>80%) that models will do this if you 1) give them a goal, and 2) from the context it appears that sandbagging is instrumentally useful. Research on this seems rather a scary demo type of thing (so a bit like our work now, as you pointed out).
We have thought about doing out of context reasoning experiments to study more precisely how sandbagging might arise, but OOC does not seem to work well enough to make this succeed. Ideas are more than welcome here!
See my other comment for an article that the Los Alamos facility itself in Jan 2025.