That doesn’t fix it, it just means you need bigger numbers before you run into the problem.
Maybe if you have an asymptote, but I fully expect that you still run into problems then.
Geometric discounting could fix this, as the sum of the series converges.
I once had a (prioritarian) idea where you order people’s utility from lowest to highest, and apply geometric discounting starting at the lowest. It’s not particularly elegant or theoretically grounded, but it does avoid the repugnant conclusion (indeed I think geometric discounting, applied in any order, removes the RC).
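To make this concrete, here's a minimal sketch in Python (the function name and the choice $r = 0.5$ are mine, purely for illustration):

```python
# Geometric discounting applied to utilities sorted from lowest to highest,
# as described above; r and the example numbers are illustrative.
def geometric_discounted_value(utilities, r=0.5):
    """Sum r**(i-1) * u_i over the utilities in increasing order, 0 < r < 1."""
    return sum((r ** i) * u for i, u in enumerate(sorted(utilities)))

# Adding ever more barely-good lives contributes at most max(u)/(1 - r) in
# total, so a vast barely-worth-living population can't beat a small
# flourishing one (the Repugnant Conclusion is blocked):
print(geometric_discounted_value([10.0, 10.0]))   # 15.0
print(geometric_discounted_value([0.1] * 10**6))  # < 0.2
```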
Erik Carlson called this the Moderate Trade-off Theory. See also Sider’s Geometrism and Carlson’s discussion of it here.
One concern I have with this approach is that similar interests do not receive similar weight: ideally, if the utility of one individual approaches another's, the weight we give to their interests should also approach each other. I would be pretty happy if we could replace the geometric discounting with a more continuous discounting without introducing any other significant problems. The weights could each depend on all of the utilities in a continuous way.
Something like $\sum_i u_i e^{-u_i}$, or $\sum_i u_i e^{-u_i} / \sum_i e^{-u_i}$, or in general $\sum_i u_i f(u_i) / \sum_i f(u_i)$ (for decreasing, continuous $f$) could work, I think.
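A direct transcription of those candidates (my own code, just restating the formulas above):

```python
import math

def unnormalized(utilities):
    """sum_i u_i * exp(-u_i)"""
    return sum(u * math.exp(-u) for u in utilities)

def normalized(utilities, f=lambda u: math.exp(-u)):
    """sum_i u_i * f(u_i) / sum_i f(u_i), for a decreasing continuous f"""
    return sum(u * f(u) for u in utilities) / sum(f(u) for u in utilities)
```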
$\sum_i u_i e^{-u_i}$ won't converge as more people (with good lives or not) are added, so it doesn't avoid the Repugnant Conclusion or Very Repugnant Conclusion, and it will allow dust specks to outweigh torture.
Normalizing by the sum of weights will give less weight to the worst off as more people are added. If the weighted average is already negative, then adding people with negative but better than average lives will improve the average. And it will still allow dust specks to outweigh torture (the population has a fixed size in the two outcomes, so normalization makes no difference).
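A quick numeric check of both objections, using my own illustrative utility numbers:

```python
import math

def value(u):                       # one term of sum_i u_i * exp(-u_i)
    return u * math.exp(-u)

# Divergence: identical lives at u = 0.1 each add ~0.09, so the total grows
# without bound and enough barely-good lives beat any finite utopia.
print(10**2 * value(0.1))           # ~9.05
print(10**9 * value(0.1))           # ~9.05e7

# Fixed population of n: torture one person (u: 0.5 -> -10) or give all n a
# dust speck (u: 0.5 -> 0.499). For large enough n the specks cost more
# total value, so the theory prescribes the torture.
n = 10**9
with_torture = value(-10.0) + (n - 1) * value(0.5)
with_specks  = n * value(0.499)
print(with_torture > with_specks)   # True

# Normalizing by the summed weights doesn't change the choice here either:
weight = lambda u: math.exp(-u)
torture_avg = with_torture / (weight(-10.0) + (n - 1) * weight(0.5))
specks_avg  = with_specks / (n * weight(0.499))
print(torture_avg > specks_avg)     # True
```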
In fact, anything of the form $\sum_i f(u_i)$ for increasing $f:\mathbb{R}\to\mathbb{R}$ will allow dust specks to outweigh torture for a large enough population, and if $f(0)=0$, it will also lead to the Repugnant Conclusion and Very Repugnant Conclusion (if $f(0)<0$, it leads to the Sadistic Conclusion, and if $f(0)>0$, then it's good to add lives not worth living, all else equal). If we only allow $f$ to depend on the population size $n$ as $f_n = c_n f$, multiplying by some factor $c_n$ that depends only on $n$, then (regardless of the value of $f_n(0)$) it will still choose torture over dust specks, given enough dust specks, because that trade-off is for a fixed population size anyway. EDIT: If $f_n$ depends on $n$ in some more complicated way, I'm not sure it would necessarily lead to torture over dust specks.
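To illustrate with one concrete increasing $f$: take $f(u) = \arctan(u)$, which is bounded, increasing, and has $f(0) = 0$ (all numbers below are my own choices):

```python
import math

f = math.atan   # increasing, bounded, f(0) = 0

# Repugnant Conclusion: each barely-good life adds the fixed positive
# amount f(0.01), so enough of them beat a small utopia.
print(10**6 * f(0.01) > 100 * f(10.0))            # True

# Torture vs dust specks at fixed population size: torturing one person
# (u: 1 -> -10) loses under pi of value, while a speck for each of n - 1
# others (u: 1 -> 0.999) loses ~0.0005 per person, which adds up.
torture_loss = f(1.0) - f(-10.0)                  # ~2.26
speck_loss   = (10**8 - 1) * (f(1.0) - f(0.999))  # ~5e4
print(speck_loss > torture_loss)                  # True: prescribes torture
```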
I had in mind something like weighting by $e^{u_1 - u_i}$, where $u_1$ is the minimum utility (so it gives weight 1 to the worst-off individual), but it still leads to the Repugnant Conclusion and, at some point, choosing torture over dust specks.
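A quick check that this min-anchored weighting still diverges (code and numbers mine):

```python
import math

def min_anchored_value(utilities):
    """sum_i u_i * exp(u_1 - u_i), where u_1 is the minimum utility."""
    u_min = min(utilities)
    return sum(u * math.exp(u_min - u) for u in utilities)

# With n identical lives at u = 0.1 every weight is 1, so the total is
# 0.1 * n: unbounded growth, and the Repugnant Conclusion follows.
print(min_anchored_value([0.1] * 10))      # 1.0
print(min_anchored_value([0.1] * 10**6))   # ~1e5
```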
What I might like is to weight by something like $r^{i-1}$ for $0<r<1$, where the utilities are labelled $u_1, \dots, u_n$ in increasing (nondecreasing) order, but if $u_i$ and $u_{i+1}$ are close (and far from all the other utilities, either in an absolute sense or in a relative sense), they should each receive weight close to $\frac{r^{i-1}+r^i}{2}$. Similarly, if there are $k$ clustered utilities, they should each receive weight close to the average of the weights we'd give them in the original Moderate Trade-off Theory.
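One hedged way to implement that smoothing: make each person's weight a kernel-weighted average of the geometric rank weights, so exact ties share the average of their rank weights while isolated utilities keep roughly their own. The exponential kernel and the bandwidth h below are my own choices, not part of the proposal:

```python
import math

def smoothed_weights(utilities, r=0.5, h=0.1):
    """Blend each sorted utility's geometric weight r**(i-1) with the weights
    of nearby utilities, via an exponential kernel in utility space."""
    us = sorted(utilities)
    geo = [r ** i for i in range(len(us))]
    weights = []
    for ui in us:
        ks = [math.exp(-abs(ui - uj) / h) for uj in us]
        weights.append(sum(k * g for k, g in zip(ks, geo)) / sum(ks))
    return weights

print(smoothed_weights([0.0, 5.0, 10.0]))  # ~[1, 0.5, 0.25], as in plain MTT
print(smoothed_weights([0.0, 0.0, 10.0]))  # tied pair each get ~0.75 = (1+r)/2
```

The weights then depend continuously on all the utilities, as suggested above, though this version only captures closeness in the absolute sense; the relative-sense version would need a different kernel.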
The utility of the universe should not depend on the order that we assign to the population. We could say that there is a space of lives one could live, and each person covers some portion of that space, and identical people are either completely redundant or only reinforce coverage of their region, and our aim should be to cover some swath of this space.