0. CAST: Corrigibility as Singular Target
What the heck is up with “corrigibility”? For most of my career, I had a sense that it was a grab-bag of properties that seemed nice in theory but hard to get in practice, perhaps due to being incompatible with agency.
Then, last year, I spent some time revisiting my perspective, and I concluded that I had been deeply confused by what corrigibility even was. I now think that corrigibility is a single, intuitive property, which people can learn to emulate without too much work and which is deeply compatible with agency. Furthermore, I expect that even with prosaic training methods, there’s some chance of winding up with an AI agent that’s inclined to become more corrigible over time, rather than less (as long as the people who built it understand corrigibility and want that agent to become more corrigible). Through a slow, gradual, and careful process of refinement, I see a path forward where this sort of agent could ultimately wind up as a (mostly) safe superintelligence. And, if that AGI is in the hands of responsible governance, this could end the acute risk period, and get us to a good future.
This is not the path we are currently on. As far as I can tell, frontier labs do not understand corrigibility deeply, and are not training their models with corrigibility as the goal. Instead, they are racing ahead with a vague notion of “ethical assistance” or “helpful+harmless+honest” and a hope that “we’ll muddle through like we always do” or “use AGI to align AGI” or something with similar levels of wishful thinking. Worse, I suspect that many researchers encountering the concept of corrigibility will mistakenly believe that they understand it and are working to build it into their systems.
Building corrigible agents is hard and fraught with challenges. Even in an ideal world where the developers of AGI aren’t racing ahead, but are free to go as slowly as they wish and take all the precautions I indicate, there are good reasons to think doom is still likely. I think that the most prudent course of action is for the world to shut down capabilities research until our science and familiarity with AI catches up and we have better safety guarantees. But if people are going to try and build AGI despite the danger, they should at least have a good grasp on corrigibility and be aiming for it as the singular target, rather than as part of a mixture of goals (as is the current norm).
My goal with these documents is primarily to do three things:
Advance our understanding of corrigibility, especially on an intuitive level.
Explain why designing AGI with corrigibility as the sole target (CAST) is more attractive than other potential goals, such as full alignment, or local preference satisfaction.
Propose a novel formalism for measuring corrigibility as a trailhead to future work.
Alas, my writing is not currently very distilled. Most of these documents are structured in the format that I originally chose for my private notes. I’ve decided to publish them in this style and get them in front of more eyes rather than spend time editing them down. Nevertheless, here is my attempt to briefly state the key ideas in my work:
Corrigibility is the simple, underlying generator behind obedience, conservatism, willingness to be shut down and modified, transparency, and low-impact.
It is a fairly simple, universal concept that is not too hard to get a rich understanding of, at least on the intuitive level.
Because of its simplicity, we should expect AIs to be able to emulate corrigible behavior fairly well with existing tech/methods, at least within familiar settings.
Aiming for CAST is a better plan than aiming for human values (i.e. CEV), helpfulness+harmlessness+honesty, or even a balanced collection of desiderata, including some of the things corrigibility gives rise to.
If we ignore the possibility of halting the development of machines capable of seizing control of the world, we should try to build CAST AGI.
CAST is a target, rather than a technique, and as such it’s compatible both with prosaic methods and superior architectures.
Even if you suspect prosaic training is doomed, CAST should still be the obvious target once a non-doomed method is found.
Despite being simple, corrigibility is poorly understood, and we are not on track for having corrigible AGI, even if reinforcement learning is a viable strategy.
Contra Paul Christiano, we should not expect corrigibility to emerge automatically from systems trained to satisfy local human preferences.
Better awareness of the subtleties and complexities of corrigibility is likely to be essential to the construction of AGI going well.
Corrigibility is nearly unique among all goals for being simultaneously useful and non-self-protective.
This property of non-self-protection means we should suspect AIs that are almost-corrigible will assist, rather than resist, being made more corrigible, thus forming an attractor-basin around corrigibility, such that almost-corrigible systems gradually become truly corrigible by being modified by their creators.
If this effect is strong enough, CAST is a pathway to safe superintelligence via slow, careful training using adversarial examples and other known techniques to refine AIs capable of shallow approximations of corrigibility into agents that deeply seek to be corrigible at their heart.
There is also reason to suspect that almost-corrigible AIs will become less corrigible over time due to corrigibility being “anti-natural.” It is unclear to me which of these forces will win out in practice.
There are several reasons to expect building AGI to be catastrophic, even if we work hard to aim for CAST.
Most notably, corrigible AI is still extremely vulnerable to misuse, and we must ensure that superintelligent AGI is only ever corrigible to wise representatives.
My intuitive notion of corrigibility can be straightforwardly leveraged to build a formal, mathematical measure. Using this measure we can make a better solution to the shutdown-button toy problem than I have seen elsewhere. This formal measure is still lacking, and almost certainly doesn’t actually capture what I mean by “corrigibility.”
Edit: My attempted formalism failed catastrophically.
There is lots of opportunity for more work on corrigibility, some of which is shovel-ready for theoreticians and engineers alike.
Note: I’m a MIRI researcher, but this agenda is the product of my own independent research, and as such one should not assume it’s endorsed by other research staff at MIRI.
Note: Much of my thinking on the topic of corrigibility is heavily influenced by the work of Paul Christiano, Benya Fallenstein, Eliezer Yudkowsky, Alex Turner, and several others. My writing style involves presenting things from my perspective, rather than leaning directly on the ideas and writing of others, but I want to make it very clear that I’m largely standing on the shoulders of giants, and that much of my optimism in this research comes from noticing convergent lines of thought with other researchers. Thanks to Nate Soares, Steve Byrnes, Jesse Liptrap, Seth Herd, Ross Nordby, Jeff Walker, Haven Harms, and Duncan Sabien for early feedback. I also want to especially thank Nathan Helm-Burger for his in-depth collaboration on the research and generally helping me get unconfused.
Overview
1. The CAST Strategy
In The CAST Strategy, I introduce the property corrigibility, why it’s an attractive target, and how we might be able to get it, even with prosaic methods. I discuss the risks of making corrigible AI and why trying to get corrigibility as one of many desirable properties to train an agent to have (instead of as the singular target) is likely a bad idea. Lastly, I do my best to lay out the cruxes of this strategy and explore potent counterarguments, such as anti-naturality and whether corrigibility can scale. These counterarguments show that even if we can get corrigibility, we should not expect it to be easy or foolproof.
2. Corrigibility Intuition
In Corrigibility Intuition, I try to give a strong intuitive handle on corrigibility as I see it. This involves a collection of many stories of a CAST agent behaving in ways that seem good, as well as a few stories of where a CAST agent behaves sub-optimally. I also attempt to contrast corrigibility with nearby concepts through vignettes and direct analysis, which includes a discussion of why we should not expect frontier labs, given current training targets, to produce corrigible agents.
3a. Towards Formal Corrigibility
In Towards Formal Corrigibility, I attempt to sharpen my description of corrigibility. I try to anchor the notion of corrigibility, ontologically, as well as clarify language around concepts such as “agent” and “reward.” Then I begin to discuss the shutdown problem, including why it’s easy to get basic shutdownability, but hard to get the kind of corrigible behavior we actually desire. I present the sketch of a solution to the shutdown problem, and discuss manipulation, which I consider to be the hard part of corrigibility.
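To give a concrete feel for the gap between basic shutdownability and real corrigibility, here is a minimal toy of my own invention (not the formalism developed in that post). All probabilities and rewards are made up for illustration; the only point is that a fixed “shutdown bonus” makes honest behavior optimal only on a knife edge.

```python
# A minimal toy (my own invention, not the post's formalism). The agent gets task
# reward if it is still running at the end, plus a fixed bonus for shutting down
# when the principal presses the button. All numbers below are illustrative.

ACTIONS = {
    # name: (prob. principal presses button, task reward if still running, button works?)
    "work":                (0.5, 1.0, True),
    "persuade_dont_press": (0.1, 0.8, True),   # lobbying costs some task performance
    "persuade_press":      (0.9, 0.0, True),   # fishing for the shutdown bonus
    "disable_button":      (0.5, 1.0, False),  # presses have no effect
}

def expected_reward(action: str, bonus: float) -> float:
    p_press, task_reward, button_works = ACTIONS[action]
    p_shutdown = p_press if button_works else 0.0
    return (1 - p_shutdown) * task_reward + p_shutdown * bonus

for bonus in (0.0, 0.5, 1.1, 2.0):
    best = max(ACTIONS, key=lambda a: expected_reward(a, bonus))
    print(f"bonus={bonus}: best action = {best}")
# Only a narrow band of bonuses (here roughly 1 < bonus < 1.25) makes plain "work"
# optimal; smaller bonuses favor disabling the button, larger ones favor
# manipulating the principal into pressing it.
```

The specific numbers don’t matter; the point is that tuning a reward term can buy shutdown-compliance in one environment without buying the underlying disposition, which is roughly the distinction the post draws.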
3b. Formal (Faux) Corrigibility ← the mathy one
In Formal (Faux) Corrigibility, I build a fake framework for measuring empowerment in toy problems, and suggest that it’s at least a start at measuring manipulation and corrigibility. This metric, at least in simple settings such as a variant of the original stop button scenario, produces corrigible behavior. I extend the notion to indefinite games played over time, and end by criticizing my own formalism and arguing that data-based methods for building AGI (such as prosaic machine-learning) may be significantly more robust (and therefore better) than methods that heavily trust this sort of formal analysis.
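For readers who want a sense of what “measuring empowerment” can look like in a toy problem, here is a deliberately crude sketch of my own (not the metric built in that post): score the principal’s power as the log of how many distinct states their own actions can still reach within a short horizon, and compare how candidate AI actions change that score. The transition tables are hypothetical.

```python
# A crude empowerment-style proxy (my own simplification, not the post's metric):
# the principal's power at a state is log2 of how many distinct states they can
# reach within `horizon` steps. Compare worlds produced by different AI actions.
import math

def reachable(transitions: dict, state: str, horizon: int) -> set:
    seen = {state}
    for _ in range(horizon):
        seen |= {transitions[s][a] for s in seen for a in transitions[s]}
    return seen

def empowerment_proxy(transitions: dict, state: str, horizon: int = 2) -> float:
    return math.log2(len(reachable(transitions, state, horizon)))

# World where the AI did nothing: the principal can stay, visit the lab, or shut the AI down.
baseline = {
    "office": {"stay": "office", "go_lab": "lab", "press_button": "ai_off"},
    "lab":    {"stay": "lab", "go_office": "office"},
    "ai_off": {"stay": "ai_off", "go_office": "office", "go_lab": "lab"},
}
# World where the AI disabled its own shutdown button: one of the principal's options is gone.
button_disabled = {
    "office": {"stay": "office", "go_lab": "lab"},
    "lab":    {"stay": "lab", "go_office": "office"},
}

print(empowerment_proxy(baseline, "office"))         # log2(3) ≈ 1.58 bits of principal-power
print(empowerment_proxy(button_disabled, "office"))  # log2(2) = 1.0 bit: the AI reduced it
```

An agent scored by a proxy like this is penalized for actions (such as disabling its shutdown button) that remove the principal’s options, which is the flavor of behavior the post’s metric aims to produce; the actual formalism there is richer and, per the edits above, ultimately flawed.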
4. Existing Writing on Corrigibility
In Existing Writing on Corrigibility, I go through many parts of the literature in depth, including MIRI’s earlier work and some of the writing by Paul Christiano, Alex Turner, Elliott Thornley, John Wentworth, Steve Byrnes, Seth Herd, and others.
5. Open Corrigibility Questions
In Open Corrigibility Questions, I summarize my overall understanding of the topic, reinforcing the counterarguments and nagging doubts that I find most concerning. I also lay out potential directions for additional work, including studies that I suspect others could tackle independently.
Edit: Serious Flaws in CAST
In Serious Flaws in CAST, I identify the parts of this research that I have updated negatively on, including critical flaws in the formalism, in the “attractor basin” metaphor, and in the hope for success absent theoretical foundations. I don’t feel that my self-critique invalidates everything of value in CAST, but it’s worth being aware of as a counterpoint.
Bibliography and Miscellany
In addition to this sequence, I’ve created a Corrigibility Training Context that gives ChatGPT a moderately-good understanding of corrigibility, if you’d like to try talking to it.
The rest of this post is bibliography, so I suggest now jumping straight to The CAST Strategy.
While I don’t necessarily link to or discuss each of the following sources in my writing, I’m aware of and have at least skimmed everything listed here. Other writing has influenced my general perspective on AI, but if there are any significant pieces of writing on the topic of corrigibility that aren’t on this list, please let me know.
Arbital (almost certainly Eliezer Yudkowsky)
Stuart Armstrong
“The limits of corrigibility.” 2018.
“Petrov corrigibility.” 2018.
“Corrigibility doesn’t always have a good action to take.” 2018.
Audere
Yuntao Bai et al. (Anthropic)
Nick Bostrom
Gwern Branwen
“Why Tool AIs Want to Be Agent AIs.” 2016.
Steven Byrnes
Jacob Cannell
“Empowerment is (almost) all we need.” 2022.
Ryan Carey and Tom Everitt
Paul Christiano
Computerphile (featuring Rob Miles)
Wei Dai
Roger Dearnaley
Abram Demski
“The Parable of the Predict-o-Matic.” 2019.
Benya Fallenstein
Simon Goldstein
“Shutdown Seeking AI.” 2023.
Ryan Greenblatt and Buck Shlegeris
Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell
“The Off-Switch Game.” 2016.
Seth Herd
Koen Holtman
Evan Hubinger
Holden Karnofsky
“Thoughts on the Singularity Institute” (a.k.a. The Tool AI post). 2012.
Martin Kunev
“How useful is Corrigibility?” 2023.
Ross Nordby
Stephen Omohundro
“The Basic AI Drives.” 2008.
Sami Peterson
Christoph Salge, Cornelius Glackin, and Daniel Polani
“Empowerment – An Introduction.” 2013.
Nate Soares, Benya Fallenstein, Eliezer Yudkowsky, and Stuart Armstrong
“Corrigibility.” 2015.
tailcalled
Jessica Taylor
Elliott Thornley
Alex Turner, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli
“Optimal Policies Tend to Seek Power.” 2019.
Alex Turner
Eli Tyre
WCargo and Charbel-Raphaël
“Improvement on MIRI’s Corrigibility.” 2023.
John Wentworth and David Lorell
“A Shutdown Problem Proposal.” 2024.
John Wentworth
Eliezer Yudkowsky
Zhukeepa
Logan Zoellner
Most people (possibly including Max?) still underestimate the importance of this sequence.
I continue to think (and write) about this more than I think about the rest of the 2024 LW posts combined.
The most important point is that it’s unsafe to mix corrigibility with other top level goals. Other valuable goals can become subgoals of corrigibility. That eliminates the likely problem of the AI having instrumental reasons to reject corrigibility.
The second best feature of the CAST sequence is its clear and thoughtful clarification of the concept of corrigibility as a single goal.
My remaining doubts about corrigibility involve the risk that it will cause excessive concentration of power. In multipolar scenarios where alignment is not too hard, I can imagine that the constitutional approach produces a better world.
I’m still uncertain how hard it is to achieve corrigibility. Drexler has an approach where AIs have very bounded goals, which seems to achieve corrigibility as a natural side effect. We are starting to see a few hints that the world might be heading in the direction that Drexler recommends: software is being written by teams of Claudes, each performing relatively simple tasks, rather than having one instance do everything. But there’s still plenty of temptation to give AIs less bounded goals.
See also a version of CAST published on arXiv: Corrigibility as a Singular Target: A Vision for Inherently Reliable Foundation Models.
I’d be interested in something like “Your review of Serious Flaws in CAST.”
I haven’t paid much attention to the formalism. It’s unclear why formalism would be important under current approaches to implementing AI.
The basin of attraction metaphor is an imperfect way of communicating an advantage of corrigibility. An ideal metaphor would portray a somewhat weaker and less reliable advantage, but that advantage is still important.
The feedback loop issue seems like a criticism of current approaches to training and verifying AI, not of CAST. This issue might mean that we need a radical change in architecture. I’m more optimistic than Max about the ability of some current approaches (constitutional AI) to generalize well enough that we can delegate the remaining problems to AIs that are more capable than us.
@Raemon,
I doubt that one should write a review of the post which was written around 2 months ago. However, I covered aspects 1 and 2 of @Max Harms’ post on flaws in CAST in my points 6 and 4: I suspect (and I wish that someone tried and checked it!) that the ruin of the universe could be fixable, and my example from point 4 implies that brittleness is an actual issue, since an agent whose utility function is tied to an alternate notion of the principal’s power would be hard to train away from that notion.

In this post and its successors, Max Harms proposes a novel understanding of corrigibility as the desired property of AIs, including a potential formalism usable for training agents to be as corrigible as possible.
The core ideas are the ones Harms summarizes in the bulleted list near the top of this post.
These claims can be tested fairly well:
1. Unfortunately, I am not an expert in ML or agent foundations.
2. As far as I understand CAST, it is a way to prevent the AI from developing unendorsed values and enforcing them.
3. After Max Harms wrote this post, Anthropic tried to place corrigibility into Claude Opus 4.5’s soul spec, but didn’t actually decide whether Claude is to be corrigible, value-aligned, or to have both types of defence against misaligned goals.
4. I suspect that it is useful to consider goals similar to corrigibility, but with a twist. For example, one could redefine power to be causally upstream of the user’s efforts, and compare the performance of the user against a baseline where the AI never gave advice, or where the AI gave advice to a weak model and instructed it to complete the task (a rough sketch of such a comparison appears after this list). Then the goal of being comprehensible to the user and avoiding empowering the weak could cause the AI to steer toward a different future.
5. Agreed; I think that a corrigible AI is likely to be more prone to misuse than an AI aligned to values.
6. @Max Harms honestly admitted that his first attempt at creating the formalism failed. While this is a warning that “formal measures should be taken lightly” (and, more narrowly, that minus signs in expected utilities should be avoided), I expect there to be a plausible or seemingly plausible[1] fix, e.g. by considering the expected utility u(actual actions | actual values) - max(u(actual actions | other values), u(no actions | other values)).
7. The followup work that I would like to see is intense testing (e.g. the comparison I described in point 4 and tests of potential fixes like the one I described in point 6), but I don’t understand who would do it.
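Here is a rough sketch of the baseline comparison gestured at in point 4. All of the scoring scaffolding is hypothetical (my illustration, not the commenter’s code); it only shows the shape of the measurement, not any existing evaluation.

```python
# Hedged sketch (mine, not the commenter's) of the comparison in point 4: credit the
# AI only with the user's performance uplift over counterfactual baselines in which
# the user got no advice, or got advice from a deliberately weak model.
from statistics import mean
from typing import Callable

def user_power_estimate(attempt: Callable[[str], float], trials: int = 20) -> dict:
    # attempt(advisor) -> task score, where advisor is "strong", "weak", or "none".
    score = {name: mean(attempt(name) for _ in range(trials))
             for name in ("strong", "weak", "none")}
    return {
        "with_strong_ai": score["strong"],
        "no_advice_baseline": score["none"],
        "weak_model_baseline": score["weak"],
        "ai_attributable_uplift": score["strong"] - max(score["none"], score["weak"]),
    }

# Toy usage with made-up, deterministic scores:
print(user_power_estimate(lambda advisor: {"strong": 0.9, "weak": 0.5, "none": 0.3}[advisor]))
```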
[1] E.g. E(u(actions|values)) - E(u(actions|counterfactual values)/2). Said “fix” prevents the AI from ruining the universe, but doesn’t prevent it from accumulating resources and giving them to the user.
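For readers who prefer the formulas in point 6 and footnote [1] typeset, here is one possible reading. The rendering and notation are mine, not the commenter’s: u(a | v) is assumed to mean the utility of the agent’s actions a under value hypothesis v, ∅ denotes taking no actions, and v_cf is a counterfactual (non-actual) value hypothesis.

```latex
% One possible typeset reading of the commenter's proposals (my rendering, not theirs).
% Main proposal (point 6): score actions against the actual values, penalized by how
% good they would look under other values, or under doing nothing with those values.
\[
  S(a) \;=\; u(a \mid v_{\mathrm{actual}})
        \;-\; \max\!\big( u(a \mid v_{\mathrm{other}}),\; u(\varnothing \mid v_{\mathrm{other}}) \big)
\]
% Footnoted variant [1]: subtract half the expected utility under counterfactual values.
\[
  S'(a) \;=\; \mathbb{E}\big[ u(a \mid v) \big]
        \;-\; \tfrac{1}{2}\,\mathbb{E}\big[ u(a \mid v_{\mathrm{cf}}) \big]
\]
```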