I’m not quite convinced by the solution to the Repugnant Conclusion, since it amounts to saying, “We know it’s going to happen, so let’s just not let it.” It only provides a comparison where there’s a clear causal chain; it doesn’t say which world is preferable.
I think the easy way to analyze such worlds is through a veil of ignorance. Everyone in each world exists; thus, we can take existence as a given and ask which world we would prefer to be born into. There’s no probabilistic weighting based on population because there’s no line of people waiting to get in, and not being born isn’t an alternative anyone could experience—there’s no “you” to experience it if you’re not born.
Basically, it seems odd to say that world A is “better” than world B if, given the choice and a random parent, you’d choose to be born into B hands down. This also works at a more micro level, except intervention at a micro level will generally have absurdly high costs. If you keep in mind that hypothetical changes (like killing everyone below average utility, or heavy restrictions on reproduction, for example) would actually affect the utility distribution beyond their intended purpose, this approach works quite well, I think. If you define some general near-universals as to what a person is likely to prefer (as opposed to what you prefer), it should work even better.
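To make the comparison concrete, here’s a toy sketch (with entirely made-up numbers and hypothetical function names) of the veil-of-ignorance rule: being born to a random parent means your expected utility is just the population mean, so a small world of good lives beats a huge world of barely-worth-living lives even when the latter has higher total utility.

```python
# Illustrative utility distributions for two hypothetical worlds.
world_a = [1] * 10000  # huge population, lives barely worth living
world_b = [90] * 100   # small population, very good lives

def total(utilities):
    # Total utilitarianism: sum welfare across the population.
    return sum(utilities)

def expected_at_birth(utilities):
    # Veil of ignorance: you become a uniformly random member,
    # so your expected utility is the population mean.
    return sum(utilities) / len(utilities)

print(total(world_a), total(world_b))                          # 10000 9000
print(expected_at_birth(world_a), expected_at_birth(world_b))  # 1.0 90.0
```

Total utility favors world A, but behind the veil of ignorance you’d pick B without hesitation—which is the intuition the comment is pointing at.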
Elaborating completely would take a top-level length post, so I’ll hold off on that for the moment.
In my other post I put forward the argument that you can’t coherently say which world is preferable, at least in cases where the alternative metrics disagree. I have therefore not made any such statements myself.
I think what you propose is a rational view which, lacking an alternative, I would espouse. Regardless, it seems to really just be a justification for average utilitarianism, at least if we evaluate a population’s worth by the expected utility of a randomly chosen member. That faces us with what are often considered oddities of average utilitarianism, such as:
For instance, the principle implies that for any population consisting of very good lives there is a better population consisting of just one person leading a life at a slightly higher level of well-being (Parfit 1984 chapter 19). More dramatically, the principle also implies that for a population consisting of just one person leading a life at a very negative level of well-being, e.g., a life of constant torture, there is another population which is better even though it contains millions of lives at just a slightly less negative level of well-being (Parfit 1984).
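Parfit’s two cases can be checked with a few lines of arithmetic (a sketch with illustrative numbers, not anything from the original posts): average utilitarianism ranks a single slightly-better life above millions of very good lives, and millions of slightly-less-tortured lives above one tortured life.

```python
def avg(utilities):
    # Average utilitarianism: a population's value is its mean welfare.
    return sum(utilities) / len(utilities)

# Case 1: many very good lives vs. one slightly better life.
many_good = [90] * 1_000_000
one_better = [91]
print(avg(one_better) > avg(many_good))  # True: the lone life ranks higher

# Case 2: one life of constant torture vs. millions slightly less bad.
one_tortured = [-100]
millions_bad = [-99] * 1_000_000
print(avg(millions_bad) > avg(one_tortured))  # True: the millions rank higher
```

Both verdicts follow immediately from comparing means, which is what makes them feel like paradoxes rather than edge cases.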
To be honest, I generally prefer the prescriptions given by average utilitarianism (as opposed to total), but I’d like a theory with fewer seeming paradoxes.
(In the first case considered by Parfit, DC suggests staying in whichever population you are in. In the second, it suggests both populations strive toward the single-person population.)