Weak repugnant conclusion need not be so repugnant given fixed resources

I want to thank Irgy for this idea.

As people generally know, total utilitarianism leads to the repugnant conclusion: the idea that no matter how great a universe X would be, filled with trillions of immensely happy people leading immensely meaningful lives full of adventure and joy, there is another universe Y which is better, and which is filled with nothing but dull, boring people whose quasi-empty and repetitive lives are just one tiny iota above being too miserable to endure. Since the second universe is much bigger than the first, it comes out on top. Not only in the sense that if we had Y, it would be immoral to move to X (which is perfectly respectable, as doing so might involve killing a lot of people, or at least allowing a lot of people to die). But also in the sense that, if we were planning our future world now, we would desperately want to bring Y into existence rather than X, and should be willing to bear great costs or run great risks to do so. And if we were in world X, we would have to move to Y at all costs, making all current people much more miserable as we did so.

The repugnant conclusion is the main reason I reject total utilitarianism (the other being that total utilitarianism sees no problem with painlessly killing someone by surprise, as long as you also give birth to someone else of equal happiness). But the repugnant conclusion can emerge from many other systems of population ethics as well. If adding more people whose happiness is slightly below the average is always a bonus (“mere addition”), and if equalising happiness is never a penalty, then you get the repugnant conclusion (caveat: there are some subtleties to do with infinite series).
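To sketch how that argument runs (this is my paraphrase of the standard mere-addition reasoning, ignoring the infinite-series subtleties just mentioned): start with a world of people all at some happiness level; mere addition says adding some people slightly below that level makes things no worse; equalising everyone to the new, slightly lower average is no penalty; repeat, and the average creeps down towards “barely worth living” while the population grows without bound.

```latex
% One step of the iteration (my notation, not the original post's):
% a world of n_k people, all at happiness h_k, becomes a strictly larger
% world with a slightly lower average, and the step can be repeated.
\[
(n_k,\ h_k)
  \;\xrightarrow{\text{mere addition}}\;
  \bigl\{\, n_k \text{ people at } h_k,\;\; m \text{ people at } h_k - \delta \,\bigr\}
  \;\xrightarrow{\text{equalise}}\;
  \Bigl( n_k + m,\ \; h_{k+1} = h_k - \tfrac{m\,\delta}{n_k + m} \Bigr)
\]
```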

But repugnant conclusions reached in that way may not be so repugnant, in practice. Let S be a system of population ethics that accepts the repugnant conclusion, due to the argument above. S may indeed conclude that the big world Y is better than the super-happy world X. But S need not conclude that Y is the best world we can build, given any fixed and finite amount of resources. Total utilitarianism is indifferent between a given world and one with half the population and twice the happiness. But S need not be indifferent to that; it may much prefer the twice-happiness world. Instead of the world Y, it may prefer to reallocate resources to achieve the world X’, which has the same average happiness as X but is slightly larger.
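As a trivial check of that indifference claim (my arithmetic, not the post’s): with N people at happiness h, the total is unchanged by halving the population and doubling the happiness, whereas a system S whose value is not linear in population size need not be indifferent to that trade.

```latex
% Total utilitarianism: halving the population while doubling happiness
% leaves the total unchanged.
\[
\underbrace{N \cdot h}_{\text{original world}}
  \;=\;
\underbrace{\frac{N}{2} \cdot 2h}_{\text{half the people, twice as happy}}
\]
```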

Of course, since it accepts the repugnant conclusion, there will be a barely-worth-living world Y’ which it prefers to X’. But then it might prefer reallocating the resources of Y’ to the happy world X″, and so on.

This is not just an argument about efficient resource allocation: even if it’s four times as hard to make people twice as happy, S can still prefer to do so. You can accept the repugnant conclusion and still want to reallocate any fixed amount of resources towards low population and extreme happiness.

It’s always best to have some examples, so here is one: an S whose value is the product of average agent happiness and the logarithm of the population size.
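To make that concrete, here is a minimal Python sketch. The numbers and the resource model are my own illustrative assumptions, not part of the post: a fixed pool of resources is split evenly, and each person’s happiness is some increasing function of their share, linear in one run and square-root in the other to stand in for the “four times as hard to get twice as happy” case above.

```python
import math

def value(avg_happiness, population):
    """The example system S: average happiness times the log of the population size."""
    return avg_happiness * math.log(population)

# 1) S accepts the repugnant conclusion: for any world X, some vastly larger
#    barely-worth-living world Y scores higher.
X = value(100.0, 10**9)        # a billion very happy people
Y = value(0.01, 10**100_000)   # an astronomically huge, barely-worth-living population
assert Y > X

# 2) But given a fixed pool of resources, S prefers a tiny, very happy population.
def best_population(total_resources, happiness_of=lambda r: r, max_pop=10**4):
    """Population size that S likes best, assuming resources are split evenly
    and happiness_of maps per-person resources to happiness."""
    return max(range(2, max_pop + 1),
               key=lambda n: value(happiness_of(total_resources / n), n))

print(best_population(10**6))                          # -> 3: a handful of extremely happy people
print(best_population(10**6, happiness_of=math.sqrt))  # -> 7: still tiny, even when it takes four
                                                       #    times the resources to double happiness
```

Under these assumptions the search just confirms the calculus: with happiness linear in per-person resources the optimum is near e, and with square-root happiness it is near e², so in both cases S reallocates the fixed resources towards a very small, very happy population rather than anything like the huge world Y.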