Assuming 100% isolation, it would be indistinguishable from living in a universe where the Many Worlds Interpretation is true, but it still seems wrong. The FAI could consider avoiding groups whose mere theoretical existence could cause offence, but I don’t see any good way to assign weight to this optimization pressure.
Even so, I think splitting humanity into multiple groups is likely to be a better outcome than a single group. I don’t consider the “failed utopia” described in http://lesswrong.com/lw/xu/failed_utopia_42/ to be particularly bad.
The failed utopia is better than our current world, certainly. But the genie isn’t Friendly.
In principle, I could interact with the immoral cluster. The AI’s interference is not relevant to the morality of the situation, because I was part of the creation of the AI. Otherwise, I would be morally justified in ignoring suffering in some distant part of the world because it will have no practical impact on my life. By contrast, I simply cannot interact with other branches under the MWI—it’s a baked-in property of the universe that I never had any input into.
The child molester cluster (where they grow children simply to molest them, then kill them) doesn’t bother you, even if you never interact with it?
Because I’m fairly certain I wouldn’t like what CEV(child molester) would output and wouldn’t want an AI to implement it.
Well, not if “child-molesters” and “non-child-molesters” are competing for limited resources.