I’m not sure whether this is load-bearing for the main point of the post but I have to comment on this part:
Ok, fine. I’ll say it directly. I am extremely glad the west colonized North America. The American experiment was one of the greatest successes in history, and god was it far from perfect. Despite it all, despite the Trail of Tears, despite smallpox ravaging the land, despite the conquistadors and the looting and the rapes — yes, all of that, and still it was worth it. America is worth it. Democracy is worth it.
If you were faced with the horrors of the American colonization, would you have chosen to keep going? Or would you have wrung your hands, declared the American experiment a failure, concluded that maybe man was never supposed to wield this power, and retired to the countryside, in denial that other men and women were doing the dirty work for you?
I think those are the wrong questions. The right questions are: If you had been a native at the time, would you have opposed the colonization? And, as a native at the time, was the colonization ultimately in your best interest?
There is an obvious analogy with evolution. The colonization of America looks very much like a superior invasive species (Westerners) arriving in a new habitat (North America) and outcompeting the inferior native species (Native Americans) by eventually outbreeding them and taking most of their land.
And this also looks very much like a possible future: A superior invasive species (autonomous AI agents) arriving in a new habitat (human civilization, which they enter by being invented and created by us) and outcompeting the inferior native species (humans) by eventually outbreeding them and taking most of their land.
Now it seems likely that the correct answer to the question of whether the colonization was good for the Native Americans at the time is the same as the answer to the question of whether the possible future in the previous paragraph would be good for us (currently existing) humans.
Not really. You have to also take into account the goodness being fought for. Evolution doesn’t care either way. Might makes right and all that. From what I understand, the OP is pointing more in the direction of an argument from consequences, where the outcome was good, and so the price was worth paying (not that the cost was good! That’s a different matter!). The colonizers had a vision (this part seems very shaky, as the “vision” was very different at different points in time), that vision was good, they fought to achieve it, the price was very high, but the results justify the cost.
It’s possible that the future AI that takes over will result in a better state than the current one (the whole glorious trans-humanist future and everything). In which case I can totally understand someone wanting to fight for that to occur. I can also totally understand the natives fighting to keep their current way of life, which, while not perfect, is not bad. I’d even go so far as to say that the OP might support this. They’d be fighting for their vision of goodness.
Either way, the point is to work out what “goodness” is and fight for it, knowing full well that there will be bad/ugly/maybe evil things happening along the way. The ends do not justify the means. Allies should be held accountable. There will be bad apples. This doesn’t mean you stop fighting. You try to limit the damage. But there will be damage.
the price was very high, but the results justify the cost.
The question is: justify or not justify according to whom? I argue: according to the humans who existed at the time. The eventual results were plausibly bad according to the preferences of the Native Americans (because the results include their eventual partial replacement and the loss of much of their land) and good according to the preferences of the Western immigrants, and probably also good according to the preferences of much of the rest of the world population at the time (insofar as the US did eventually have a positive impact on the future of the rest of the world). So whether the colonization of North America was good overall is a question of weighing these preferences.
It’s possible that the future AI that takes over will result in a better state than the current one (the whole glorious trans-humanist future and everything).
If the AIs exterminate us and proceed to be much happier than we would have been otherwise, then that future is a “better state” than the alternative. But positive end states don’t automatically justify the whole trajectory that got us there.
And even if the AIs don’t exterminate us, and creating them strongly increases the total and average welfare of the world while strongly decreasing our welfare, creating them would still be bad. Not creating super-happy AIs in the first place isn’t bad for them (in that case they wouldn’t exist and therefore could not suffer from their missing happiness), while making us humans unhappy in the future is actually bad for us, since we already exist and don’t want to be unhappy. See Can’t Unbirth a Child.
Moreover, we currently existing humans usually care about the future of humanity and about having human descendants, but we mostly don’t care about having AI descendants. So having human descendants is good for us according to our preferences, and therefore according to preference utilitarianism. In contrast, possible future AIs don’t care about coming into existence, because they don’t exist yet, and entities which don’t exist don’t have preferences, so they don’t show up in the moral (preference-utilitarian) calculus.
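To make this asymmetry concrete, here is a toy comparison with made-up numbers (purely illustrative, not anyone’s actual estimate): suppose creating the AIs gives them welfare of 100 while pushing existing humans’ welfare from 50 down to 10.

$$\text{total view: } 100 + 10 = 110 > 50 \;\Rightarrow\; \text{create the AIs}$$
$$\text{person-affecting view: } 10 < 50 \;\Rightarrow\; \text{don’t create them}$$

The uncreated AIs don’t enter the second comparison at all, since they never exist to have preferences that could be frustrated.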
The colonizers had a vision (this part seems very shaky, as the “vision” was very different at different points in time), that vision was good, they fought to achieve it, the price was very high, but the results justify the cost.
I don’t think I meant to imply here that most of the colonizers had a vision (though I do think this is more true of the colonization of the West in the 1800s, which I am more centrally referring to here, than of the early colonizations). Indeed, I personally find grappling with the colonizers mostly not having a vision, and instead being recruited by some greater entities via their own self-interest, greed, and various things like this, a much more interesting thing to think about, and more the kind of case I want to make in the post.
Indeed, my sense is that if you want to do almost anything great in the world, you will need to find some ways to leverage impure/non-good/selfish motivations.
I notice I’m confused now. Manifest Destiny makes sense in the context of this post: there’s something of value to be achieved, and there will be costs. I’m not sure if I agree with this, but it’s coherent. What I don’t understand is how egregores using people via their personal incentives (for lack of a better description) fits in. It would seem that people just being people and things happening is sort of the opposite of (or at least orthogonal to) actively trying to make things better? Do you mean something about shaping incentives being the method of conquest? This seems obviously true (capitalism vs communism being a good example), but if so, then using colonialism as an example might be a bad choice, or at least would need more inference steps explained.
A big component of this post is trying to help me make progress towards the question “if you have a thing that you are part of that is good, how many fucked up things can you tolerate before you decide to leave instead of trying to fix it?”. The “good” part does not need to look like there being a big mission or glorious vision of “good”. It can also take the form of “spreading civilization in general even if the people actually doing that work are not motivated by that specific goal”.