This essay seems to me to go badly wrong, and its main error is begging all sorts of empirical questions against our maximizing utilitarian. (This seems to be the opposite of the mistake you made with the mere cable channel addition paradox, where you used empirical concerns to try to dodge the Repugnant Conclusion.)
1) The GC presumes that the best conversion of resources into utility will involve paperclips/animals, etc.
Obviously Z is better than A, right? We should not fear the creation of a paperclip maximizing AI, but welcome it! Forget about things like high challenge, love, interpersonal entanglement, complex fun, and so on! Those things just don’t produce the kind of utility that paperclip maximization has the potential to do!
We end up with population Z, with a vast number of mice or lizards with lives just barely worth living, and a small number of human beings with lives barely worth living. Terrific! Why do we bother creating humans at all? Let's just create tons of mice and inject them full of heroin! It's a much more efficient way to generate utility!
(And so on with the sociopath and asteroid examples.)
You make the same mistake each time. You point to a population which you stipulate as having a higher total value than a population of normal humans, and then you assert that this particular human population should therefore act to bring about more paperclips/nonsapient animals/psychopaths. But this doesn't follow: our maximizing utilitarian is obliged to accept that (for example) a certain number of happy mice is 'worth the same' as a happy human, and so a world with one fewer human but (say) a trillion more happy mice is better, but that does not entail that replacing humans with mice is a good utilitarian deal. It all depends on whether investing a given unit of resources into mice or people gives a higher utilitarian return.
Not only does nothing in the examples motivate thinking this will be the case, but there seem to be good a priori reasons to reject it: I reckon the exchange rate between happy mice and happy people is of the order of 100,000 : 1, so I doubt mice are a better utilitarian buy than persons, and similar concerns apply to paperclippers and sociopaths. Similarly, most utils will rate things like relationships, art, etc. as worth a big chunk of 'pleasure' too, and so the assumption that these goods are better rendered down into getting high (or whatever else) doesn't seem plausible.
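To make the 'utilitarian buy' point concrete, here is a toy back-of-the-envelope comparison, a minimal sketch in Python; the 100,000 : 1 weight, the resource costs, and the names are all illustrative assumptions, not empirical estimates:

```python
# Toy comparison of utilitarian return per unit of resources for humans vs. mice.
# Every number here is an illustrative assumption, not an empirical estimate.

HAPPY_MICE_PER_HAPPY_HUMAN = 100_000      # assumed moral-weight exchange rate
UTILS_PER_HAPPY_HUMAN = 1.0               # normalise a happy human life to 1 util
UTILS_PER_HAPPY_MOUSE = UTILS_PER_HAPPY_HUMAN / HAPPY_MICE_PER_HAPPY_HUMAN

RESOURCE_COST_PER_HUMAN = 10_000.0        # assumed lifetime cost (arbitrary units)
RESOURCE_COST_PER_MOUSE = 1.0             # assumed lifetime cost (arbitrary units)

def utils_per_resource(utils_per_life, cost_per_life):
    """Utilitarian return on one unit of resources spent creating such a life."""
    return utils_per_life / cost_per_life

human_return = utils_per_resource(UTILS_PER_HAPPY_HUMAN, RESOURCE_COST_PER_HUMAN)
mouse_return = utils_per_resource(UTILS_PER_HAPPY_MOUSE, RESOURCE_COST_PER_MOUSE)

print(f"humans: {human_return:.1e} utils per resource unit")   # 1.0e-04
print(f"mice:   {mouse_return:.1e} utils per resource unit")   # 1.0e-05
# On these assumptions humans are the better buy; mice only win if they are
# more than 100,000 times cheaper to sustain than people, which is exactly
# the empirical question the GC needs to settle.
```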
(Of course, empirically one can say the RC is not likely to be the resource-optimal utilitarian spend; there will be some trade-off between making new people and making existing ones happier. But, as covered in previous discussions, what makes the RC repugnant is that it seems wrong to prefer such a world whether or not it is feasible. In the same way, you might argue that miceworld, clipperworld, or sociopathworld is repugnant to prefer to the human world even if it isn't feasible, but this just recapitulates the standard RC. The GC is meant to give this more bite by saying utilitarianism should motivate us to create one of these scenarios, but it doesn't work.)
2) Complexifying values won’t dodge the RC
We can specify all sorts of other values for an ideal consequentialism, but this won't dodge the RC, and it plausibly has problems of its own. If the values are commensurable (i.e. we can answer 'how much is X increments of wisdom worth in terms of mutual affection?'), we basically get utilitarianism back, as utils will likely say util is the common currency we use to cash out the value of these things. If they aren't, we get moral dilemmas and cases where we cannot decide which is better (e.g., should we make a person with slightly more wisdom, or one with slightly more mutual affection?).
Even if we could decide such questions, we can just stipulate an RC to interact with the enriched theory. Consider a population A with 10 units of each feature of value (love, health, happiness, whatever); now A+, which has all the members originally in A at 11 units of each feature of value, plus another population at 2 units of each; and now B, with all members of A+ at 9 units of each feature of value. We seem to have just increased the dimensions of the problem.
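A worked version of those totals, as a quick sketch; the population sizes (100 originals, 1,000 added people) are my own illustrative assumptions, and only the 10/11/2/9 per-dimension levels come from the scenario above:

```python
# Worked totals for the multi-dimensional mere-addition step above. Population
# sizes are illustrative assumptions; the per-dimension levels come from the text.

DIMENSIONS = ["love", "health", "happiness"]    # any list of valued features

def total_value(groups):
    """Total value summed over all people and all dimensions.
    groups: list of (population_size, units_per_dimension) pairs."""
    return sum(size * units * len(DIMENSIONS) for size, units in groups)

A      = total_value([(100, 10)])               # original population
A_plus = total_value([(100, 11), (1_000, 2)])   # originals better off, plus extras
B      = total_value([(1_100, 9)])              # everyone in A+ levelled to 9

print(A, A_plus, B)   # 3000 9300 29700 -> A < A+ < B
# Iterate the A -> A+ -> B step and the totals keep rising while the per-person
# level falls: the standard RC, just run along several axes of value at once.
```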
We could have ideals about how utility is concentrated, and so have pattern goods that rule out RC-like instances, but it seems horribly ad hoc to put 'avoid repugnant conclusions' as lexically prior in one's normative theory. It also seems a bit ad hoc, or patch-like, to say 'maximize utility in same-number cases, but we need to do something different for different-number cases'.
3) The case for rejecting mere (or beneficial) addition is weak
The case against the mere addition paradox is that people in fact do not have lots of children, and, further, that they do not have lots of children and then neglect them. I'm not averse to taking folk opinion as evidence against ethical principles, but in this case the evidence is really weak. Maybe people are ill-informed, or selfish, or wary of the huge externalities of neglected children, or maybe they think (rightly or wrongly) that they add more utility by directing their energy to things other than having children.
It's certainly true that total view util implies a pro tanto reason to have children, and even a pro tanto reason to have children you will look after poorly. But you need way more work to show it generally secures an all things considered reason to have children, and far more work still to show it gives an all things considered reason to have children and neglect them.
#
Now hey, util has all sorts of counter-intuitive problems:
1) The fact that value is fungible between love, mutual affection, getting high, feeling smug, having an orgasm, eating food, or reading Tolstoy means we get seemingly sucky cases where we should prefer wireheading or orgasmatrons to eudaimonia (now, utils can say eudaimonia is worth much more than wireheading, but it won't be incommensurably more, so enough wireheads will beat it).
2) That value is separable between persons means we can get utility monsters, and cases where we benefit the best off instead of the worst off (or even benefit the best off at the expense of the worst off).
3) Aggregating leads to RCs and other worries.
And others I've forgotten. But I don't think these are lethal: that utils are obliged in toy scenarios to genocide moral agents or create trillions of barely-worth-living lives doesn't seem that embarrassing to me, and, more importantly, alternative theories run into problems of their own.
Ultimately, I don't think your essay accomplishes more than tangling up the 'usual suspects' among anti-total-utilitarian objections, and the GC is a failure. I don't think your program of 'idealizing' consequentialism in different-number cases by adding extra axes of value, or simply bolting 'must not end up in the RC' onto your normative theory, is going to work well.
But this doesn't follow: our maximizing utilitarian is obliged to accept that (for example) a certain number of happy mice is 'worth the same' as a happy human, and so a world with one fewer human but (say) a trillion more happy mice is better, but that does not entail that replacing humans with mice is a good utilitarian deal. It all depends on whether investing a given unit of resources into mice or people gives a higher utilitarian return.
Not only does nothing in the examples motivate thinking this will be the case, but there seem to be good a priori reasons to reject it: I reckon the exchange rate between happy mice and happy people is of the order of 100,000 : 1, so I doubt mice are a better utilitarian buy than persons, and similar concerns apply to paperclippers and sociopaths.
There does seem to be some good reason to consider mice a better buy: you can create a vast number of happy mice for the same price as one human with all their various complex preferences. Mice require far fewer resources to give them pleasure than humans do to satisfy their complicated preferences.
Similarly, most utils will rate things like relationships, art, etc. as worth a big chunk of 'pleasure' too, and so the assumption that these goods are better rendered down into getting high (or whatever else) doesn't seem plausible.
You could rate these things as more valuable than the pleasure of animals, but that seems to go against the common utilitarian belief that the wellbeing of animals is important and that humans should be willing to sacrifice some of their wellbeing to benefit animals. After all, if rating those human values far above animal pleasure means it's better to create humans than animals, it also means it's acceptable to inflict large harms on animals to help humans achieve relatively small gains in those values. I think it makes much more sense to say that we should value humans and animals equally if both exist, but prefer to create humans.
Even if we could decide such questions, we can just stipulate an RC to interact with the enriched theory. Consider a population A with 10 units of each feature of value (love, health, happiness, whatever); now A+, which has all the members originally in A at 11 units of each feature of value, plus another population at 2 units of each; and now B, with all members of A+ at 9 units of each feature of value. We seem to have just increased the dimensions of the problem.
Not necessarily. We can stipulate that certain values are discontinuous, or have diminishing returns relative to each other. For instance, at the beginning I stipulated that “A population in which moral beings exist and have net positive utility, and in which all other creatures in existence also have net positive utility, is always better than a population where moral beings do not exist.” In other words a small village full of human beings living worthwhile lives is more worth creating than a galaxy full of mice on heroin.
Doing this doesn’t mean that you can’t aggregate the values in order to determine which ones it is best to promote. But it will prevent a huge quantity of one value from ever being able to completely dominate all the others.
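Here is a minimal sketch of how such a stipulation might be formalised; the log transform, the field names, and the numbers are illustrative choices made for the sake of the example, not part of the proposal itself:

```python
import math

# Sketch: rank worlds lexically (moral beings with net positive utility, and no
# other creatures below neutral, come first), then by an aggregate in which each
# kind of value has diminishing returns, so no single value can swamp the rest.

def rank_key(world):
    """Sort key for worlds: a higher tuple means a better world."""
    satisfies_stipulation = (
        world["moral_being_utility"] > 0 and world["other_creature_utility"] >= 0
    )
    # log1p gives diminishing returns to each kind of value
    aggregate = (math.log1p(world["moral_being_utility"])
                 + math.log1p(world["other_creature_utility"]))
    return (satisfies_stipulation, aggregate)

village      = {"moral_being_utility": 1_000, "other_creature_utility": 50}
mouse_galaxy = {"moral_being_utility": 0, "other_creature_utility": 10 ** 12}

best = max([village, mouse_galaxy], key=rank_key)
print(best is village)   # True: sheer quantity of mouse pleasure cannot dominate
```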
We could have ideals about how utility is concentrated, and so have pattern goods that rule out RC-like instances, but it seems horribly ad hoc to put 'avoid repugnant conclusions' as lexically prior in one's normative theory.
I don't think the theory is ad hoc, adopted just to avoid the RC. People seem to have similar moral views in a number of other cases that don't involve vast populations. For instance, it's generally agreed that it is wrong to kill someone who is destined to live a perfectly good and worthwhile life, even if doing so will allow you to replace them with another person who will live a slightly better life. It seems like "maximize utility" is a principle that only applies to existing people. Creating more people requires a whole other set of rules.
The more I think about it, the more I have trouble believing I ever thought utility maximization was a good way to do population ethics. It seems like an instance of “when all you have is a hammer everything looks like a nail.” Utility maximization worked so well for scenarios where the population was unchanged that ethicists tried to apply it to population ethics, even though it doesn’t really work well there.
I personally don't find the classical RC as repugnant as I used to. The implications of total utilitarianism that I still find deal-breakingly abhorrent are conclusions that involve killing a high-utility population in order to replace it with a slightly higher-utility population, especially in cases where the only reason the new population has higher utility is that its members have simpler, easier-to-satisfy values. The Genocidal Conclusion is the big one, but I also find micro-level versions of these conclusions horrible, like that it might be good to wirehead someone against their will, or to kill someone destined to lead a great life and replace them with someone who will lead a slightly better one. It was these conclusions that led me to ideal consequentialism; the fact that it might also be able to avoid the RC was just icing on the cake. If ideal consequentialism can somehow avoid "kill and replace" conclusions, but can't avoid the RC, I'd still be satisfied with it.
It also seems a bit ad hoc, or patch-like, to say 'maximize utility in same-number cases, but we need to do something different for different-number cases'.
I don’t think it’s any weirder than saying “maximize preference satisfaction for creatures capable of having preferences, but maximize pleasure for creatures that are not.” And that doesn’t seem a controversial statement at all.
The case against the mere addition paradox is that people in fact do not have lots of children, and, further, that they do not have lots of children and then neglect them. I'm not averse to taking folk opinion as evidence against ethical principles, but in this case the evidence is really weak. Maybe people are ill-informed, or selfish, or wary of the huge externalities of neglected children, or maybe they think (rightly or wrongly) that they add more utility by directing their energy to things other than having children.
I agree that that is the weakest part of the OP. But I don’t think people are motivated by the reasons you suggested. Most people I know seem to think that they have a moral duty to provide a high level of care to any children they might have, and have a moral duty to not have children if they are unwilling or unable to provide that level of care. Their motivation is driven by moral conviction, not by selfishness or by practical considerations.
In general people’s preference ordering seems to be:
1) New person exists and I am able to care for them without inflicting large harms on myself.
2) New person doesn't exist.
3) New person exists and I inflict large harms on myself to care for them.
4) New person exists and I don’t care for them.
If this is the case then it might be that mere or benign addition is logically impossible. You can’t add new lives barely worth living without harming the interests of existing people.
In general people's preference ordering seems to be: 1) New person exists and I am able to care for them without inflicting large harms on myself. 2) New person doesn't exist. 3) New person exists and I inflict large harms on myself to care for them. 4) New person exists and I don't care for them.
That does seem to be so, but why? I'd change the order of 1) and 2). What's so good about having more people? Better to improve the lives of existing people instead.
My theory at the moment is that we have some sort of value that might be called “Harmony of self-interest and moral interests.” This value motivates us to try to make sure the world we live in is one where we do not have to make large sacrifices of our own self-interest in order to improve the lives of others. This in turn causes us to oppose the creation of new people with lives that are worse than our own, even if we could, in theory, maintain our current standard of living while allowing them to live in poverty. This neatly blocks the mere addition paradox since it makes it impossible to perform “mere additions” of new people without harming the interests of those who exist.
I suspect the reason this theory is not addressed heavily in the moral literature is the tendency to conflate utility with "happiness." Since it obviously is possible to "merely add" a new person without impacting the happiness of existing people (for instance, you could conceal the new person's existence from others so they won't feel sorry for them), it is mistakenly believed you can also do so without affecting their utility. But even brief introspection reveals that happiness and utility are not identical. If someone spread dirty rumors about me behind my back, cheated on me, or harmed my family when I wasn't around, and I never found out, my happiness would remain the same. But I'd still have been harmed.
What's so good about having more people? Better to improve the lives of existing people instead.
You're right, of course. But at this point in our history it isn't actually a choice of one or the other. Gradually adding reasonable numbers of people to the world actually improves the lives of existing people, because doing so allows the economy to develop new divisions of labor that increase the total amount of wealth. That is why we are so much richer than we were in the Middle Ages, even though the population is larger. Plus, humans are social animals: having more people means having more potential friends.
Eventually we may find some way to change this. For instance, if we invented a friendly AI that was cheaper to manufacture than a human, more efficient at working, and devoted all its efforts to improving existing lives, then mass-manufacturing copies of it would increase total wealth more effectively than making more people would (though we would still be justified in making more people if the relationships we formed with them greatly improved our lives).
But at the moment there is no serious dilemma between adding a reasonable number of new people and improving existing lives. The main reason I am going after total utilitarianism so hard is that I like my moral theories to be completely satisfactory, and I disapprove of moral theories that give the wrong answers, even if they only do so in scenarios I will never encounter in real life.