My response to this was initially a tongue-in-cheek “I’m a moral nihilist, and there’s no sense in which one moral system is intrinsically better than another as morality is not a feature of the territory”. However, I won’t leave it at that, because solving morality is essential to the problem of creating aligned AI. There may be no objectively correct moral system, no intrinsically better moral system, nor even any good moral system, but we still need a coherent moral framework to use to generate our AI’s utility function if we want it to be aligned, so morality is important, and we do need to develop an acceptable solution to it.
[W]e still need a coherent moral framework to use to generate our AI’s utility function if we want it to be aligned, so morality is important, and we do need to develop an acceptable solution to it.
This is not clear. It’s possible for the current world to exist as it is, and similarly for any other lawful simulated world that isn’t optimized in any particular direction. So an AI could set up such a world without interfering in it, and defend its lawful operation from outside interference. Such an AI is purposeful, and potentially as good at defending itself as any other AI, yet it sets up a world it does not optimize, and it needs no morality concerning the world it maintains in order to maintain it. Of course this doesn’t solve any problems inside that world, and it’s unclear how to build such a thing, but it illustrates the problems with the “morality is necessary for AGI” position. Corrigibility also fits this description: it is nothing like an optimization goal for the world, but it is still a purpose.
Not AGI per se, but aligned and beneficial AGI. Like I said, I’m a moral nihilist/relativist and believe no objective morality exists. I do think we’ll need a coherent moral system to fulfill our cosmic endowment via AGI.