This seems quite plausible, actually. Even without the objective-morality angle, a morally nice AI could imagine a morally nice world that can only be achieved by having humans not exist. (For example, a world of beautiful, smart butterflies that are immune to game theory, but whose existence requires game-theory-abiding agents like us not to exist, because our long-range vibrations threaten the tranquility of the matrix or something.) And maybe the argument is genuinely so right that most humans, upon hearing it, would agree to not exist, something like collectively sacrificing ourselves for our collective children. I have no idea how to deal with this possibility.
> And maybe the argument is genuinely so right that most humans, upon hearing it, would agree to not exist, something like collectively sacrificing ourselves for our collective children.
This describes an argument that is persuasive; the scenario you describe does not require the argument to actually be right. (Indeed, my view is that the argument would obviously be wrong, as it would be arguing for a false conclusion.)