You’re mistaking Habryka’s argument for “if people prefer modern America to pre-colonial America, then it was right to colonize America”. He’s just making the (more modest) point that “if people prefer modern America to pre-colonial America, then modern America is probably a better place to live than pre-colonial America”, which you seemed to be saying one could not have any opinion on.
I think you are mistaking Habryka’s argument, not 0xA. Habryka wrote that “it was worth it”. The first “it” presumably refers to the colonization and the creation of the US. And “was worth it” presumably means “was right”. So we arrive at “the colonization was right” (despite all the listed downsides). That’s in line with 0xA’s interpretation.
Also note that (if it wasn’t obvious) “state of the world A is better than state of the world B” doesn’t imply that bringing about A is better than bringing about B. Maybe in state A everyone is happy only because we previously murdered everyone who was unhappy. That doesn’t mean murdering everyone who is unhappy is good.
Ben is understanding me correctly: that was the argument I was making in this comment (I think you can compare how good a place is to live, even across cultures and societies).
I agree that in the post I am making the argument that the overall tradeoff was worth it, and I could connect the two. I agree with you that there are circumstances in which “state of the world A is better than state of the world B” does not imply that bringing about A is better than bringing about B. I do think it’s a pretty good argument in favor of bringing about A, though.
It seemed like you were making the additional argument that “if you could stop A completely (and that was your only option), you should not.”
I assume, though, that if future state A contains a trillion super-happy AIs but no humans, while future state B contains a few billion moderately happy humans and no AIs, then A would be a better state than B, and it would nonetheless be the case that we should bring about B rather than A. So there must be some disanalogy to the colonization case.
I am not a hedonic utilitarian, so I would reject this analysis on those grounds.
The question is “would A be a better state than B” holistically, by the assessment of something like the extrapolated volition of humanity — importantly, including everything that will happen into the distant future (which I think makes there being only a few billion moderately happy humans very unlikely, as we will eventually colonize the stars, and I would consider it an enormous atrocity to fail to do so).
The question is: extrapolated volition of whom? In the case of thinking about whether to create super-happy AIs that replace us (A) or not (B), this would presumably be our current human extrapolated volition, so it wouldn’t take the interests of non-existent AIs into account. And in the case of asking whether the colonization of America was good or bad, we would have to consider the extrapolated volition of the humans alive at the time.
It’s a bit tricky. I don’t particularly feel like I owe the competitors of my distant ancestors in the primordial soup consideration in humanity’s CEV, though I am also not enormously confident that I definitely don’t.
I definitely agree that in this case you consider the values of the people from whom you took the opportunity to reproduce (though ultimately I will at least somewhat bite the bullet that my values might diverge from theirs, and insofar as we are in a fully zero-sum competition I would like my values to win out, though overall principles of fairness and justice definitely compel me to give them a non-trivial chunk of the Lightcone).
I think the challenge here is that the comment was made as justification for the broader point of the article, which in context was offered (as an addendum to your quote) “as an example of an argument against postmodernism”. I consider such an argument a claim to its rightness, especially when framed in that context.
I am making the subtle point that the argument can’t be used to debunk a postmodernist philosophy, because the data point he elected to use was, for lack of a better term, consequentialist, not morally justifying. To me, that’s like saying (and forgive me for the blunt metaphor): “I can make a pretty good case that squatting in your grandparents’ mansion is morally justified, because everyone on the block would choose to live in this mansion if they could.”
I would agree with you if he had not prefaced it with the qualifier that it was an argument against the philosophy he considers me to hold (from my earlier comment), and if in the article he didn’t conflate all of this with goodness itself.