Branding: 3 reasons why I prefer “AGI safety” to “AI alignment”
When engineers, politicians, bureaucrats, military leaders, etc. hear the word “safety”, they suddenly perk up and start nodding and smiling. Safety engineering—making sure that systems robustly do what you want them to do—is something that people across society can relate to and appreciate. By contrast, when people hear the term “AI alignment” for the first time, they just don’t know what it means or how to contextualize it.
There are a lot of things that people are working on in this space that aren’t exactly “alignment”—things like boxing, task-limited AI, myopic AI, impact-limited AI, non-goal-directed AI, AGI strategy & forecasting, etc. It’s useful to have a term that includes all those things, and I think that term should be “AGI safety”. Then we can reserve “AI alignment” for specifically value alignment.
Actually, I’m not even sure that “value alignment” is exactly the right term for value alignment. The term is naturally read as something like “the AI’s values are aligned with human values”, which isn’t necessarily wrong, but is vague and easily misread. For example, if love is a human value, should the AGI adopt that value and start falling in love itself? No, it should facilitate humans falling in love. When people talk about CIRL, CEV, etc., it seems to be less about “value alignment” and more about “value indirection” (in the C++ sense): utility functions that involve human goals and values, and that define those things by pointing at human brains and human behavior.
A friend in the AI space who visited Washington told me that military leaders distinctly do not like the term “safety”.
Why not?
Because they’re interested in weapons and making people distinctly not safe.
Right, for them “alignment” could mean their desired concept, “safe for everyone except our targets”.
I’m skeptical that anyone with that level of responsibility and acumen has that kind of juvenile destructive mindset. Can you think of other explanations?
There’s a difference between talking about safety in the sense of 1. ‘how to handle a firearm safely’ and the sense of 2. ‘firearms are dangerous, let’s ban all guns’. These leaders may understand and be on board with 1, but disagree with 2.
I think if someone negatively reacts to ‘Safety’ thinking you mean ‘try to ban all guns’ instead of ‘teach good firearm safety’, you can rephrase as ‘Control’ in that context. I think Safety is more inclusive of various aspects of the problem than either ‘Control’ or ‘Alignment’, so I like it better as an encompassing term.
Interesting. I guess I was thinking specifically about DARPA, which might or might not be representative, but see Safe Documents, Safe Genes, Safe Autonomy, safety and security properties of software, etc.