I think you, not 0xA, are mistaking Habryka’s argument. Habryka wrote that “it was worth it”. The first “it” presumably refers to the colonization and the creation of the US, and “was worth it” presumably means “was right”. So we arrive at “the colonization was right” (despite all the listed downsides). That’s in line with 0xA’s interpretation.
Also note that (if it wasn’t obvious) “state of the world A is better than state of the world B” doesn’t imply that bringing about A is better than bringing about B. Maybe in state A everyone is happy only because we previously murdered everyone who was unhappy. That doesn’t mean murdering everyone who is unhappy is good.
Ben is understanding me correctly; that was the argument I was making in this comment (I think you can compare how good a place is to live, even across cultures and societies).
I agree that in the post I am making the argument that the overall tradeoff was worth it; I could connect the two. I agree with you that there are circumstances in which “state of the world A is better than state of the world B” does not imply that bringing about A is better than bringing about B. I do think it’s a pretty strong argument in favor of bringing about A.
It seemed like you were making the additional argument “if you could stop A completely (and that was your only option), you should not.”
I assume, though, that if future state A contains a trillion super happy AIs but no humans, while future state B contains a few billion moderately happy humans and no AIs, then A would be a better state than B, and yet it would nonetheless be the case that we should bring about B rather than A. So there must be some disanalogy to the colonization case.
I am not a hedonic utilitarian, so I would reject this analysis on those grounds.
The question is “would state A be a better state than state B”, assessed holistically by something like the extrapolated volition of humanity, importantly including everything that will happen into the distant future (which I think makes there being only a few billion moderately happy humans very unlikely, as we will eventually colonize the stars, and I would consider it an enormous atrocity to fail to do so).
The question is: extrapolated volition of whom? In the case of thinking about whether to create super happy AIs that replace us (A) or not (B), this would presumably be our current human extrapolated volition. So it wouldn’t take the interests of non-existent AIs into account. And in the case of asking whether the colonization of America was good or bad, we would have to consider the extrapolated volition of the humans alive at the time.
It’s a bit tricky. I don’t particularly feel like I owe the competitors of my distant ancestors in the primordial soup any consideration in humanity’s CEV, though I am also not enormously confident that I definitely don’t.
Definitely agree that in this case you should consider the values of the people from whom you took the opportunity to reproduce (though ultimately I will at least somewhat bite the bullet that my values might diverge from theirs, and inasmuch as we are in a fully zero-sum competition I would like my values to win out, though overall principles of fairness and justice definitely compel me to give them a non-trivial chunk of the Lightcone).