I would agree that most people would say the United States is a comparatively better place to live, but I would also argue that those numbers would look wildly different if the question were instead: “Would you prefer a world where the United States exists, or one where Western colonialism never occurred throughout North America?” Under that question, I would place reasonably high probability on your preference-sampling argument no longer providing a moral justification for that system under the same global population base.
I’m not sure what you mean by “under the same global population base”, but I don’t think most currently existing people answering “the first” to your question would by itself indicate that the colonization of America was morally justified.
For example, assume AIs in the future have greatly diminished the number and influence of humans, so that humanity is now only a powerless footnote in the world. Then one AI starts a poll asking: “Would you prefer a world where our AI society exists, or one where the creation of AI never occurred?” Assume the result of the poll (from trillions of AIs) is overwhelmingly “the former”.
Would this mean that mostly replacing humanity with AI would have been morally justified? Clearly not. If we don’t create those AIs, their non-existence isn’t bad for them, and the hypothetical preferences they express in this poll are morally irrelevant, since those preferences are never instantiated. (This is the person-affecting view in population ethics.)