Clarifying “AI Alignment”
When I say an AI A is aligned with an operator H, I mean:
A is trying to do what H wants it to do.
The “alignment problem” is the problem of building powerful AI systems that are aligned with their operators.
This is significantly narrower than some other definitions of the alignment problem, so it seems important to clarify what I mean.
In particular, this is the problem of getting your AI to try to do the right thing, not the problem of figuring out which thing is right. An aligned AI would try to figure out which thing is right, and like a human it may or may not succeed.
Consider a human assistant who is trying their hardest to do what H wants.
I’d say this assistant is aligned with H. If we build an AI that has an analogous relationship to H, then I’d say we’ve solved the alignment problem.
“Aligned” doesn’t mean “perfect”:
They could misunderstand an instruction, or be wrong about what H wants at a given moment.
They may not know everything about the world, and so fail to recognize that an action has a particular bad side effect.
They may not know everything about H’s preferences, and so fail to recognize that a particular side effect is bad.
They may build an unaligned AI (while attempting to build an aligned AI).
I use alignment as a statement about the motives of the assistant, not about their knowledge or ability. Improving their knowledge or ability will make them a better assistant — for example, an assistant who knows everything there is to know about H is less likely to be mistaken about what H wants — but it won’t make them more aligned.
(For very low capabilities it becomes hard to talk about alignment. For example, if the assistant can’t recognize or communicate with H, it may not be meaningful to ask whether they are aligned with H.)
The definition is intended de dicto rather than de re. An aligned A is trying to “do what H wants it to do.” Suppose A thinks that H likes apples, and so goes to the store to buy some apples, but H really prefers oranges. I’d call this behavior aligned because A is trying to do what H wants, even though the thing it is trying to do (“buy apples”) turns out not to be what H wants: the de re interpretation is false but the de dicto interpretation is true.
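To make the de dicto reading concrete, here is a minimal toy sketch in Python (the names are hypothetical, purely for illustration): the assistant acts on its belief about H’s preference, so its behavior counts as aligned even when that belief turns out to be wrong.

```python
# Toy illustration of de dicto alignment. All names are hypothetical.
TRUE_PREFERENCE = "oranges"  # what H actually wants; A doesn't know this

class Assistant:
    """An assistant that tries to do what it believes H wants."""

    def __init__(self, believed_preference: str):
        self.believed_preference = believed_preference

    def act(self) -> str:
        # A acts on its belief about H's preference (de dicto),
        # not on H's actual preference (de re), which it can't see.
        return f"buy {self.believed_preference}"

a = Assistant(believed_preference="apples")
print(a.act())  # "buy apples" -- a mistake, but an aligned one

# De dicto: A is trying to do what H wants -> aligned.
# De re: the particular act ("buy apples") is not what H wants.
print(a.believed_preference == TRUE_PREFERENCE)  # False
```

On this picture, fixing the mistake is a matter of giving A better information about H, not of making A more aligned.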
An aligned AI can make errors, including moral or psychological errors, and fixing those errors isn’t part of my definition of alignment except insofar as it’s part of getting the AI to “try to do what H wants” de dicto. This is a critical difference between my definition and some other common definitions. I think that using a broader definition (or the de re reading) would also be defensible, but I like it less because it includes many subproblems that I think (a) are much less urgent and (b) are likely to involve totally different techniques than the urgent part of alignment.
An aligned AI would also be trying to do what H wants with respect to clarifying H’s preferences. For example, it should decide whether to ask if H prefers apples or oranges, based on its best guesses about how important the decision is to H, how confident it is in its current guess, how annoying it would be to ask, etc. Of course, it may also make a mistake at the meta level — for example, it may not understand when it is OK to interrupt H, and therefore avoid asking questions that it would have been better to ask.
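One way to make this meta-level tradeoff concrete is a simple value-of-information rule; the sketch below is illustrative only, and its names and numbers are hypothetical.

```python
# Toy value-of-information rule: ask H a clarifying question only when the
# expected loss from acting on the current best guess exceeds the cost of
# interrupting. All parameters are hypothetical.

def should_ask(p_correct: float, stakes: float, cost_of_asking: float) -> bool:
    """Return True if A should ask H rather than act on its current guess."""
    expected_loss_if_silent = (1.0 - p_correct) * stakes
    return expected_loss_if_silent > cost_of_asking

# Low stakes, confident guess (apples vs. oranges): just buy the fruit.
print(should_ask(p_correct=0.9, stakes=1.0, cost_of_asking=0.5))    # False

# High stakes, shaky guess: interrupt H and ask.
print(should_ask(p_correct=0.6, stakes=100.0, cost_of_asking=0.5))  # True
```

A mistake at the meta level, in this toy model, would be something like systematically overestimating `cost_of_asking` and therefore never asking questions it would have been better to ask.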
This definition of “alignment” is extremely imprecise. I expect it to correspond to some more precise concept that cleaves reality at the joints. But that might not become clear, one way or the other, until we’ve made significant progress.
One reason the definition is imprecise is that it’s unclear how to apply the concepts of “intention,” “incentive,” or “motive” to an AI system. One naive approach would be to equate the incentives of an ML system with the objective it was optimized for, but this seems to be a mistake. For example, humans are optimized for reproductive fitness, but it is wrong to say that a human is incentivized to maximize reproductive fitness.
“What H wants” is even more problematic than “trying.” Clarifying what this expression means, and how to operationalize it in a way that could be used to inform an AI’s behavior, is part of the alignment problem. Without additional clarity on this concept, we will not be able to build an AI that tries to do what H wants it to do.
Postscript on terminological history
I originally described this problem as part of “the AI control problem,” following Nick Bostrom’s usage in Superintelligence, and used “the alignment problem” to mean “understanding how to build AI systems that share human preferences/values” (which would include efforts to clarify human preferences/values).
I adopted the new terminology after some people expressed concern with “the control problem.” There is also a slight difference in meaning: the control problem is about coping with the possibility that an AI would have different preferences from its operator. Alignment is a particular approach to that problem, namely avoiding the preference divergence altogether (so excluding techniques like “put the AI in a really secure box so it can’t cause any trouble”). There currently seems to be a tentative consensus in favor of this approach to the control problem.
I don’t have a strong view about whether “alignment” should refer to this problem or to something different. I do think that some term needs to refer to this problem, to separate it from other problems like “understanding what humans want,” “solving philosophy,” etc.
This post was originally published on 7th April 2018.
The next post in this sequence, “An Unaligned Benchmark” by Paul Christiano, will be published on Saturday.
Tomorrow’s AI Alignment Sequences post will be the first in a short new sequence of technical exercises from Scott Garrabrant.