$\sum_i u_i e^{-u_i}$ won’t converge as more people (with good lives or not) are added, so it doesn’t avoid the Repugnant Conclusion or Very Repugnant Conclusion, and it will allow dust specks to outweigh torture.
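For concreteness, here is a minimal numeric check (my own illustration; the population sizes and utility levels are arbitrary). Each person with $u > 0$ contributes the fixed positive amount $u e^{-u}$, so the total grows linearly in the population size, and enough marginal lives beat any fixed population of excellent ones:

```python
import math

def total(n, u):
    """V = n * u * e^(-u): the score of n identical people at utility u."""
    return n * u * math.exp(-u)

print(total(1_000_000, 10.0))  # ~454.0: a million excellent lives
print(total(10**10, 0.01))     # ~9.9e7: barely-worth-living lives win
```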
Normalizing by the sum of weights (i.e. using the weighted average $\frac{\sum_i u_i e^{-u_i}}{\sum_i e^{-u_i}}$) will give less weight to the worst off as more people are added. If the weighted average is already negative, then adding people with negative but better-than-average lives will improve the average. And it will still allow dust specks to outweigh torture (the population has a fixed size in the two outcomes, so normalization makes no difference).
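A quick sketch of that failure mode (my own numbers, purely for illustration): every added life below zero is not worth living, yet the normalized score improves.

```python
import math

def weighted_avg(utilities):
    """Normalized score: sum(u * e^-u) / sum(e^-u)."""
    weights = [math.exp(-u) for u in utilities]
    return sum(u * w for u, w in zip(utilities, weights)) / sum(weights)

print(weighted_avg([-10.0]))                       # -10.0
# Adding a million lives not worth living (u = -1) "improves" the score:
print(weighted_avg([-10.0] + [-1.0] * 1_000_000))  # ~ -1.07
```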
In fact, anything of the form $\sum_i f(u_i)$ for $f : \mathbb{R} \to \mathbb{R}$ increasing will allow dust specks to outweigh torture for a large enough population, and if $f(0) = 0$, will also lead to the Repugnant Conclusion and Very Repugnant Conclusion (and if $f(0) < 0$, it will lead to the Sadistic Conclusion, while if $f(0) > 0$, then it’s good to add lives not worth living, all else equal). If we only allow $f$ to depend on the population size $n$ as $f_n = c_n f$, i.e. by multiplying by some factor $c_n$ that depends only on $n$, then (regardless of the value of $f_n(0)$) it will still choose torture over dust specks, given enough dust specks, because that trade-off is at a fixed population size anyway. EDIT: If $f_n$ depends on $n$ in some more complicated way, I’m not sure that it would necessarily lead to torture over dust specks.
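A sketch of the fixed-population argument, with tanh standing in for an arbitrary increasing $f$ (the utility levels and speck size are placeholders): since $f$ is increasing, each speck costs a strictly positive amount of $f$-value, so enough specks cost more than one torture.

```python
import math

f = math.tanh  # stand-in for any increasing f: R -> R

u0, u_torture, speck = 1.0, -50.0, 1e-6

torture_harm = f(u0) - f(u_torture)  # loss of f-value from one torture

def speck_harm(n):
    """Loss of f-value from giving n people a dust speck each."""
    return n * (f(u0) - f(u0 - speck))  # strictly positive, linear in n

for n in (10**6, 10**9, 10**12):
    print(n, speck_harm(n) > torture_harm)  # True once n is large enough
```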
I had in mind something like weighting by $e^{u_1 - u_i}$, where $u_1$ is the minimum utility (so it gives weight 1 to the worst-off individual), but it still leads to the Repugnant Conclusion and at some point choosing torture over dust specks.
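A quick check of why (again with arbitrary numbers): when everyone has the same utility $u$, every weight is $e^0 = 1$, so the score is just $nu$ and grows without bound in $n$.

```python
import math

def total(utilities):
    """Sum of u_i * e^(u1 - u_i), where u1 is the minimum utility."""
    u1 = min(utilities)
    return sum(u * math.exp(u1 - u) for u in utilities)

print(total([10.0] * 1_000))  # 10000.0, i.e. n * u -- so enough lives at
                              # u = 0.01 still dominate any fixed population
```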
What I might like is to weight by something like $r^{i-1}$ for $0 < r < 1$, where the utilities are labelled $u_1, \dots, u_n$ in increasing (nondecreasing) order, but if $u_i, u_{i+1}$ are close (and far from all the other utilities, either in an absolute sense or in a relative sense), they should each receive weight close to $\frac{r^{i-1} + r^i}{2}$. Similarly, if there are $k$ clustered utilities, they should each receive weight close to the average of the weights we’d give them in the original Moderate Trade-off Theory.
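A minimal sketch of how such weights could be computed, assuming one simple reading of “close”: consecutive sorted utilities within an absolute gap threshold form a cluster (the threshold, and $r$ itself, are arbitrary here, and the absolute-vs-relative criterion is left open above).

```python
def mtt_weights(utilities, r=0.5, gap=0.1):
    """Geometric rank weights r^(i-1) on utilities sorted ascending; every
    member of a cluster of close utilities gets the cluster's average weight."""
    order = sorted(range(len(utilities)), key=lambda i: utilities[i])
    raw = [r**k for k in range(len(order))]  # weight by ascending rank

    weights = [0.0] * len(utilities)
    start = 0
    for k in range(1, len(order) + 1):
        # Close the current cluster at the end, or where a big gap appears.
        if k == len(order) or utilities[order[k]] - utilities[order[k - 1]] > gap:
            avg = sum(raw[start:k]) / (k - start)  # average weight in cluster
            for j in range(start, k):
                weights[order[j]] = avg
            start = k
    return weights

# Two near-identical utilities share (r^0 + r^1)/2 = 0.75 each:
print(mtt_weights([1.0, 1.01, 5.0]))  # [0.75, 0.75, 0.25]
```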