But this doesn’t follow: our maximizing utilitarian is obliged to accept that (for example) a certain number of happy mice is ‘worth the same’ as a happy human, and so a world with one fewer human but (say) a trillion more happy mice is better. But that does not necessarily mean that replacing humans with mice is a utilitarian good deal. It all depends on whether investing a given unit of resources into mice or into people gives a higher utilitarian return.
Not only does nothing in the examples motivate thinking this will be the case, but there seem to be good a priori reasons to reject it: I reckon the exchange ratio between mice and people is of the order of 100,000:1, so I doubt mice are a better utilitarian buy than persons. Similar concerns apply to paperclippers and sociopaths.
There does seem to be some good reason to consider mice a better buy: you can create a vast number of happy mice for the same price it costs to create one human with all their various complex preferences. Mice require far fewer resources to give them pleasure than humans do to satisfy their complicated preferences.
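The disagreement here ultimately comes down to utility per unit of resources. A minimal sketch of the comparison, where every figure is an assumption made purely for illustration:

```python
# Hypothetical figures, purely for illustration: every number below is an
# assumption, not a measurement.
RESOURCES = 1_000_000       # arbitrary units of resources to invest

cost_per_mouse = 1          # assumed: a happy mouse is cheap to produce
utility_per_mouse = 0.01    # assumed: each mouse contributes little utility

cost_per_human = 100_000    # assumed: roughly the 100,000:1 ratio above
utility_per_human = 1.0     # normalize one happy human to 1 util

mice_return = (RESOURCES / cost_per_mouse) * utility_per_mouse    # 10,000 utils
human_return = (RESOURCES / cost_per_human) * utility_per_human   # 10 utils

# Under these assumed numbers mice are the better buy; flip the cost or
# utility ratios and the conclusion flips with them.
```

The point is only that the verdict is hostage to the assumed cost and utility ratios, which neither side of the exchange has actually measured.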
Similarly, most utilitarians will rate things like relationships, art, etc. as worth a big chunk of ‘pleasure’ too, and so the assumption that these things are better rendered down into being high, or whatever else, doesn’t seem plausible.
You could rate these things as more valuable than the pleasure of animals, but that seems to go against the common utilitarian belief that the wellbeing of animals is important and that humans should be willing to sacrifice some of their wellbeing to benefit animals. After all, if rating those human values much better than animal pleasure means it’s better to create humans than animals, it also means it’s acceptable to inflict large harms on animals to help humans achieve relatively small gains in those values. I think it makes much more sense to say that we should value humans and animals equally if both exist, but prefer to create humans.
Even if we did, we can just stipulate an RC to interact with it: “Consider a population A with 10 units of each feature of value (love, health, happiness, whatever); now A+, which has all the members originally in A at 11 units of each feature of value, plus another population at 2 units of each feature of value; and now consider B, with all members in A+ at 9 units of each feature of value.” We seem to have just increased the dimensions of the problem.
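The stipulated populations can be tallied explicitly. The group sizes below are assumptions (the quote leaves them open):

```python
# Multi-dimensional mere addition, sketched with assumed sizes: A has 100
# members; A+ adds 100 newcomers. Each group is (members, units per value).
VALUES = ["love", "health", "happiness"]

A      = {"original": (100, 10)}
A_plus = {"original": (100, 11), "added": (100, 2)}
B      = {"original": (100, 9),  "added": (100, 9)}  # everyone levelled to 9

def total(pop):
    # Per-person units are stipulated equal across every value dimension,
    # so each dimension gets the same total: members * units, summed.
    return {v: sum(n * u for n, u in pop.values()) for v in VALUES}

# Summing naively over every dimension ranks B (1,800 per value) above
# A+ (1,300) above A (1,000): the Repugnant Conclusion reappears, just
# with more dimensions.
```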
Not necessarily. We can stipulate that certain values are discontinuous, or have diminishing returns relative to each other. For instance, at the beginning I stipulated that “A population in which moral beings exist and have net positive utility, and in which all other creatures in existence also have net positive utility, is always better than a population where moral beings do not exist.” In other words, a small village full of human beings living worthwhile lives is more worth creating than a galaxy full of mice on heroin.
Doing this doesn’t mean that you can’t aggregate the values in order to determine which ones it is best to promote. But it will prevent a huge quantity of one value from ever being able to completely dominate all the others.
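One way to sketch “diminishing returns relative to each other” is to pass each value through a bounded function before summing. The saturating function and all numbers below are illustrative assumptions, not a worked-out axiology:

```python
def saturate(v, k=100.0):
    # Bounded at 1: huge quantities of one value asymptote instead of
    # dominating. k sets how quickly the value saturates (assumed).
    return v / (v + k)

def aggregate(values):
    # Values still aggregate into one score, so comparisons remain
    # possible; but no single value can contribute more than 1.
    return sum(saturate(v) for v in values.values())

village      = {"pleasure": 100, "love": 100, "achievement": 100}
mouse_galaxy = {"pleasure": 10**12, "love": 0, "achievement": 0}

# The balanced village (score 1.5) outscores a trillion units of
# undifferentiated pleasure (score just under 1.0), because pleasure's
# contribution is capped.
```

This preserves aggregation while blocking domination, which is exactly the property claimed in the paragraph above.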
We could have ideals about how utility is concentrated, and so have pattern goods that rule out RC-like instances, but it seems horribly ad hoc to put ‘avoid repugnant conclusions’ as lexically prior in normative theory.
I don’t think it’s an ad hoc theory just to avoid the RC. People seem to have similar moral views in a number of other cases that don’t involve vast populations. For instance, it’s generally agreed that it is wrong to kill someone who is destined to live a perfectly good and worthwhile life even if doing so will allow you to replace them with another person who will live a slightly better life. It seems like “maximize utility” is a principle that only applies to existing people. Creating more people requires a whole other set of rules.
The more I think about it, the more I have trouble believing I ever thought utility maximization was a good way to do population ethics. It seems like an instance of “when all you have is a hammer everything looks like a nail.” Utility maximization worked so well for scenarios where the population was unchanged that ethicists tried to apply it to population ethics, even though it doesn’t really work well there.
I personally don’t find the classical RC as repugnant as I used to. The implications of total utilitarianism that I still find deal-breakingly abhorrent are conclusions that involve killing a high utility population in order to replace it with a slightly higher utility population, especially in cases where the only reason the new population has higher utility is that its members have simpler, easier to satisfy values. The Genocidal Conclusion is the big one, but I also find micro-level versions of these conclusions horrible, like that it might be good to wirehead someone against their will, or kill someone destined to lead a great life and replace them with someone with a slightly better life. It was these conclusions that led me to ideal consequentialism, the fact that it might also be able to avoid the RC was just icing on the cake. If ideal consequentialism can somehow avoid “kill and replace” conclusions, but can’t avoid the RC, I’d still be satisfied with it.
It also seems a bit ad hoc, or patch-like, to say “maximize utility in same-number cases, but we need to do something different for different-number cases”.
I don’t think it’s any weirder than saying “maximize preference satisfaction for creatures capable of having preferences, but maximize pleasure for creatures that are not.” And that doesn’t seem a controversial statement at all.
The case against the mere addition paradox is that people in fact do not have lots of children, and further that they don’t have lots of children whom they then neglect. I’m not averse to taking folk opinion as evidence against ethical principles, but in this case the evidence is really weak. Maybe people are being ill-informed, or selfish, or maybe wary of the huge externalities of neglected children, or maybe they think (rightly or wrongly) that they add more utility by directing their energy to other things besides having children.
I agree that that is the weakest part of the OP. But I don’t think people are motivated by the reasons you suggested. Most people I know seem to think that they have a moral duty to provide a high level of care to any children they might have, and have a moral duty to not have children if they are unwilling or unable to provide that level of care. Their motivation is driven by moral conviction, not by selfishness or by practical considerations.
In general people’s preference ordering seems to be:
1) New person exists and I am able to care for them without inflicting large harms on myself.
2) New person doesn’t exist.
3) New person exists and I inflict large harms on myself to care for them.
4) New person exists and I don’t care for them.
If this is the case then it might be that mere or benign addition is logically impossible. You can’t add new lives barely worth living without harming the interests of existing people.
In general people’s preference ordering seems to be: 1) New person exists and I am able to care for them without inflicting large harms on myself. 2) New person doesn’t exist 3) New person exists and I inflict large harms on myself to care for them. 4) New person exists and I don’t care for them.
That does seem to be so, but why? I’d change the order of 1) and 2). What’s so good about having more people? Better improve the lives of existing people instead.
My theory at the moment is that we have some sort of value that might be called “Harmony of self-interest and moral interests.” This value motivates us to try to make sure the world we live in is one where we do not have to make large sacrifices of our own self-interest in order to improve the lives of others. This in turn causes us to oppose the creation of new people with lives that are worse than our own, even if we could, in theory, maintain our current standard of living while allowing them to live in poverty. This neatly blocks the mere addition paradox since it makes it impossible to perform “mere additions” of new people without harming the interests of those who exist.
I suspect the reason this theory is not addressed heavily in moral literature is the tendency to conflate utility with “happiness.” Since it obviously is possible to “merely add” a new person without impacting the happiness of existing people (for instance, you could conceal the new person’s existence from others so they won’t feel sorry for them) it is mistakenly believed you can also do so without affecting their utility. But even brief introspection reveals that happiness and utility are not identical. If someone spread dirty rumors about me behind my back, cheated on me, or harmed my family when I wasn’t around, and I never found out, my happiness would remain the same. But I’d still have been harmed.
What’s so good about having more people? Better improve the lives of existing people instead.
You’re right, of course. But at this point in our history it isn’t actually a choice of one or the other. Gradually adding a reasonable number of people to the world actually improves the lives of existing people. This is because doing so allows the economy to develop new divisions of labor that increase the total amount of wealth. That is why we are so much richer than we were in the Middle Ages, even though the population is larger. Plus, humans are social animals: having more people means having more potential friends.
Eventually we may find some way to change this. For instance, if we invented a friendly AI that was cheaper to manufacture than a human, more efficient at working, and devoted all its efforts to improving existing lives, then mass manufacturing more copies of it would increase total wealth better than making more people (though we would still be justified in making more people if the relationships we formed with them greatly improved our lives).
But at the moment there is no serious dilemma between adding a reasonable amount of new people and improving existing lives. The main reason I am going after total utilitarianism so hard is that I like my moral theories to be completely satisfactory, and I disapprove of moral theories that give the wrong answers, even if they only do so in a scenario that I will never encounter in real life.