The impression I get is more that Eliezer wants to put off building an AI until we understand enough about morality and human values.
Seems slightly off to me. I think EY argues that, as much trouble as AGI is giving us, we'll still understand it long before we can formalize human morality well enough to simulate it directly. His suggestion of Coherent Extrapolated Volition would basically tell the AI to look to us for the answer. Instead of simulating morality, this plan looks to the existing morality-simulators (us) and checks how much they agree. See also this massive spoiler for a certain comic.