Yeah I think there’s a miscommunication. We could try having a phone call.
A guess at the situation is that I’m responding to two separate things. One is the story here:
One mainstay of claiming alignment is near-impossible is the difficulty of “solving ethics”—identifying and specifying the values of all of humanity. I have come to think that this is obviously (in retrospect—this took me a long time) irrelevant for early attempts at alignment: people will want to make AGIs that follow their instructions, not try to do what all of humanity wants for all of time. This also massively simplifies the problem; not only do we not have to solve ethics, but the AGI can be corrected and can act as a collaborator in improving its alignment as we collaborate to improve its intelligence.
It does simplify the problem, but not massively relative to the whole problem. A harder part shows up in the task of having a thing that
is capable enough to do things that would help humans a lot, like a lot a lot, whether or not it actually does those things, and
doesn’t destroy approximately all human value.
And I’m not pulling a trick on you where I say that X is the hard part, and then you realize that actually we don’t have to do X, and then I say “Oh wait actually Y is the hard part”. Here is a quote from “Coherent Extrapolated Volition”, Yudkowsky 2004 https://intelligence.org/files/CEV.pdf:
1. Solving the technical problems required to maintain a well-specified abstract invariant in a self-modifying goal system. (Interestingly, this problem is relatively straightforward from a theoretical standpoint.)
2. Choosing something nice to do with the AI. This is about midway in theoretical hairiness between problems 1 and 3.
3. Designing a framework for an abstract invariant that doesn’t automatically wipe out the human species. This is the hard part.
I realize now that I don’t know whether or not you view IF (instruction-following) as trying to address this problem.
The other thing I’m responding to is:
the AGI can be corrected and can act as a collaborator in improving its alignment as we collaborate to improve its intelligence.
If the AGI can (relevantly) act as a collaborator in improving its alignment, it’s already a creative intelligence on par with humanity. Which means there was already something that made a creative intelligence on par with humanity. Which is probably fast, ongoing, and nearly inextricable from the mere operation of the AGI.
I also now realize that I don’t know how much of a crux for you the claim that you made is.
I’m familiar with the arguments you mention for the other hard part, and I think instruction-following helps make that part (or parts, depending on how you divvy it up) substantially easier. I do view it as addressing all of your points (there’s a lot of overlap amongst them).
And yes, that is separate from avoiding the problem of solving ethics.
So it’s a pretty big crux; I think instruction-following helps a lot. I’d love to have a phone call, but I’d like it if you’d read that post first, because I do go into detail on the scheme and many objections there. LW puts it at a 15-minute read, I think.
But I’ll try to summarize a little more, since re-explaining your thinking is always a good exercise.
Making instruction-following the AGI’s central goal means you don’t have to solve the remainder of the problems you list all at once. You get to keep changing your mind about what to do with the AI (your point 4). Instead of choosing an invariant goal that has to work for all time, your invariant is a pointer to the human’s preferences, which can change as they like (your point 5). It helps with point 3, stability, by letting you ask the AGI whether its goal will remain stable and keep functioning as you want in new contexts and in the face of the learning it’s doing.
The key here is not thinking of the AGI as an omniscient genie. This wouldn’t work at all in a fast foom. But if the AGI gets smarter slowly, as a network-based AGI will, you get to use its intelligence to help align its next level of capabilities, at every level.
Ultimately, this should culminate in getting superhuman help to achieve full value alignment, a truly friendly and truly sovereign AGI. But there’s no rush to get there.
Naturally, this scheme working would be good if the humans in charge are good and wise, and not good if they’re not.