[W]e still need a coherent moral framework to use to generate our AI’s utility function if we want it to be aligned, so morality is important, and we do need to develop an acceptable solution to it.
This is not clear. It’s possible for the current world to exist as it is, and likewise for any other lawful simulated world that isn’t optimized in any particular direction. So an AI could set up such a world without interfering in it, and defend its lawful operation from outside interference. Such an AI is a purposeful thing, potentially as good at defending itself as any other AI, yet the world it sets up is not optimized by it, and the AI needs no morality for that world in order to maintain it. Of course this doesn’t solve any problems inside that world, and it’s unclear how to build such a thing, but it illustrates the problems with the “morality is necessary for AGI” position. Corrigibility also fits this description: it is nothing like an optimization goal for the world, yet it is still a purpose.
Not AGI per se, but aligned and beneficial AGI. Like I said, I’m a moral nihilist/relativist and believe no objective morality exists. I do think we’ll need a coherent moral system to fulfill our cosmic endowment via AGI.