The price was very high, but the results justify the cost.
The question is: justified or not, according to whom? I argue: according to the humans who existed at the time. The eventual results were plausibly bad according to the preferences of the Native Americans (because those results include their eventual partial replacement and the loss of much of their land), good according to the preferences of the Western immigrants, and probably also good according to the preferences of much of the rest of the world population at the time (insofar as the US did eventually have a positive impact on the future of the rest of the world). So whether the colonization of North America was good overall is a question of weighing these preferences.
It’s possible that the future AI that takes over will result in a better state than the current one (the whole glorious transhumanist future and everything).
If the AIs exterminate us and proceed to be much happier than we would have been otherwise, then that future is a “better state” than the alternative. But positive end states don’t automatically justify the whole trajectory that got us there.
And even if the AIs don’t exterminate us, and creating those AIs strongly increases the total and average welfare of the world while strongly decreasing our welfare, creating them would still be bad. Not creating super-happy AIs in the first place isn’t bad for them, because in that case they wouldn’t exist and therefore couldn’t suffer from their missing happiness; but making us humans unhappy in the future is actually bad for us, since we already exist and don’t want to be unhappy. See Can’t Unbirth a Child.
Moreover, we currently existing humans usually care about the future of humanity and about having human descendants, but we mostly don’t care about having AI descendants. So having human descendants is good for us according to our preferences, and therefore according to preference utilitarianism. In contrast, possible future AIs don’t care about coming into existence: they don’t exist yet, entities that don’t exist don’t have preferences, and so they don’t show up in the moral (preference-utilitarian) calculus.