I agree with Unnamed that this post misunderstands Parfit’s argument by tying it to empirical claims about resources that have no relevance.
Just imagine God is offering you choices between different universes with inhabitants at the stipulated levels of wellbeing: he offers you A, then offers to let you take A+, then B, then B+, etc. If you are interested in maximizing aggregate value you’ll happily go along with each step to Z (indeed, if you are offered all the worlds from A to Z at once, an aggregate maximizer will go straight for Z). This is what the repugnant conclusion is all about: it has nothing whatsoever to do with whether or not Z (or the ‘mechanism’ of mere addition to get from A to Z) is feasible under resource constraints, but that, if this were possible, maximizing aggregate value obliges us to accept this repugnant conclusion. I don’t want to be mean, but this is a really basic error.
The OP offers something much better when it proposes a pluralist view to try to get out of the mere addition paradox: we should have a separate term in our utility function for the average level of well-being (further, an average over currently existing people), and that will stop us reaching the repugnant conclusion. However, it only delays the inevitable. Given that the ‘average term’ doesn’t dominate (i.e. isn’t lexically prior to) the total utility term, there will be deals this average/total pluralist should accept where we lose some average but gain more than enough total utility to make up for it. Indeed, for a person affecting view we can make it so that the ‘original’ set of people in A do even better:
A: 10 people at wellbeing 10
A+: 10 people at wellbeing 20 & 1 million at wellbeing 9.5
B: 1 million and ten people at wellbeing 9.8
A to A+ and A+ to B both increase total utility. Moving from A to A+ drops average utility by a bit under 0.5 points, but multiplies the total utility by around 100,000, and all the people in A have their utility doubled. So it seems a pluralist average/total/person-affecting view should accept these moves, and so we’re off to the repugnant conclusion again (and if it doesn’t, we can make even stronger examples, like 10^10 new people added to A at wellbeing 9.99 while everyone originally in A gets 1 million utility, etc.)
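For concreteness, here is a quick arithmetic check of those claims (a small, purely illustrative Python sketch; the groups are just the ones listed above):

# Each world is a list of (number of people, wellbeing) groups.
worlds = {
    "A":  [(10, 10)],
    "A+": [(10, 20), (1_000_000, 9.5)],
    "B":  [(1_000_010, 9.8)],
}

for name, groups in worlds.items():
    people = sum(n for n, _ in groups)
    total = sum(n * w for n, w in groups)
    print(f"{name}: total = {total:,.0f}, average = {total / people:.4f}")

# Expected output (roughly):
# A:  total = 100        average = 10.0000
# A+: total = 9,500,200  average = 9.5001  (total up ~95,000x, average down just under 0.5)
# B:  total = 9,800,098  average = 9.8000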
Aside 1: Person affecting views (caring only about people who ‘already’ exist) can get you out of the repugnant conclusion, but they have their own costs: intransitivity. If you only care about people who exist, then A → A+ is permissible (no one is harmed), A+ --> B is permissible (because we are redistributing well-being among people who already exist), but A --> B is not permissible. You can also set up cycles whereby A>B>C>A.
Aside 2: I second the sentiment that the masses of upvotes this post has received reflect poorly on the LW collective’s philosophical acumen (‘masses’, relatively speaking: I don’t think this post deserves a really negative score, but I don’t think a post with such a big error in it should be this positive, still less be exhorted onto ‘the front page’). I’m currently writing a paper on population ethics (although I’m by no means an expert on the field), but seeing this post get so many upvotes despite its fatal misunderstanding of plausibly the most widely discussed case in population ethics signals that you guys don’t really understand the basics. This undermines the not-uncommon LW trope that analytic philosophy is not ‘on the same level’ as bona fide LW rationality, and makes me more likely to account for variance between LW and the ‘mainstream view’ on ethics, philosophy of mind, quantum mechanics (or, indeed, decision theory or AI) as LWers being on the wrong side of the Dunning-Kruger effect.
I’ve been thinking about this argument (which is formally called the Benign Addition Paradox) for a few months, and I’m no longer sure it holds up. I began to think about whether I would support doing such a thing in real life. For instance, I wondered if I would push a button that would create a bunch of people who are forced to be my slaves for a couple of days per week, but are freed for just long enough each week that their lives could be said to be worthwhile. I realized that I would not.
Why? Because if I created those people with lower utility than me, I would immediately possess an obligation to free them and then transfer some of my utility to them, which would reduce my level of utility. So, if we adopt a person-affecting view, we can adopt the following rule: Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off.
So A+ is worse than A because the people who previously existed in A have a moral duty to transfer some of their utility to the new people who were added. They have a duty to convert A+ into B, which would harm them.
Now, you might immediately bring up Parfit’s classic argument where the new people are geographically separated from the existing people, and therefore incapable of being helped. In that case, has no harm been done, since the existing people are physically incapable of fulfilling the moral obligation they have? No, I would argue. It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have one and are capable of fulfilling it.
I think that the geographic separation argument seems plausible because it contaminates what is an essentially consequentialist argument with virtue ethics. The geographic separation is no one’s fault, no one chose to cause it, so it seems like it’s morally benign. Imagine, instead, that you had the option of pushing a button that would have two effects:
1) It would create a new group of people who would be your slaves for a few days each week, but be free long enough that their life could be said to be barely worthwhile.
2) It would create an invincible, unstoppable AI that will thwart any attempt to equalize utility between the new people and existing people. It will even thwart an attempt by you to equalize utility if you change your mind.
I don’t know about you, but I sure as hell wouldn’t push that button, even though it does not differ from the geographic separation argument in any important way.
Of course, this argument does create some weird implications. For instance, it implies that there might be some aliens out there with a much higher standard of living than we have, and we are inadvertently harming them by reproducing. However, it’s possible that the reason this seems so counterintuitive is that when contemplating it we are mapping it onto the real world, not the simplified world in which we have been making our arguments so far. In the real world we can raise the following practical objections:
1) We do not currently live in a world that is Pareto efficient. In the various addition paradox arguments it is assumed to be, but that is a simplifying assumption that does not reflect the real world. Generally, when we create a new person in this day and age we increase utility, both by creating new family members and friends for people, and by allowing greater division of labor to grow the economy. So adding new people might actually help the aliens by reducing their moral obligation.
2) We already exist, and stopping people from having children generally harms them. So even if the aliens would be better off if we had never existed, now that we exist our desire to reproduce has to be taken into account.
3) If we ever actually meet the aliens, it seems likely that through mutual trade we could make each other both better off.
Of course, as I said before, these are all practical objections that don’t affect the principle of the thing. If the whole “possibly harming distant aliens by reproducing” thing still seems too counterintuitive to you, you could reject the person-affecting principle, either in favor of an impersonal type of morality, or in favor of some sort of pluralist ethics that takes both impersonal and person-affecting morality into account.
You’ve been one of my best critics in this, so please let me know if you think I’m onto something, or if I’m totally off-base.
Aside: Another objection to the Benign Addition Paradox I’ve come up with goes like this.
A: 10 human beings at wellbeing 10.
A+: 10 human beings at wellbeing 50 & 1 million sadistic demon-creatures at wellbeing 11. The demon-creatures derive 9 wellbeing each from torturing humans or watching humans being tortured.
B: 10 human beings at wellbeing −10,000 (from being tortured by demons) & 1 million sadistic demon creatures at wellbeing 20 (9 of which they get from torturing the 10 humans).
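A quick check of the totals and averages in this example (again a small, illustrative Python sketch using just the numbers above):

# Each world is a list of (group, count, wellbeing).
worlds = {
    "A":  [("humans", 10, 10)],
    "A+": [("humans", 10, 50), ("demons", 1_000_000, 11)],
    "B":  [("humans", 10, -10_000), ("demons", 1_000_000, 20)],
}

for name, groups in worlds.items():
    people = sum(n for _, n, _ in groups)
    total = sum(n * w for _, n, w in groups)
    print(f"{name}: total = {total:,}, average = {total / people:,.2f}")

# Expected output (roughly):
# A:  total = 100         average = 10.00
# A+: total = 11,000,500  average = 11.00
# B:  total = 19,900,000  average = 19.90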
Both moves raise total utility and average utility, and the move from A to A+ benefits everyone involved, yet B seems obviously worse than A. The most obvious solutions I could think of were:
1) The “conferring a moral obligation on someone harms them” argument I already elucidated.
2) Not counting any utility derived from sadism towards the total.
I’m interested in what you think.
It is traditionally held in ethics that “ought implies can”—that is, that you don’t have to do any things that you cannot in fact do.
That is true, but I think that the discrepancy arises from my foolishly using a deontologically-loaded word like “obligation” in a consequentialist discussion.
I’ll try to recast the language in a more consequentialist style:
Instead of saying that, from a person-affecting perspective:
“Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off.”
We can instead say:
“An action that adds new people to the world, from a person-affecting perspective, makes the world a worse place if, after the action is taken, the world would be made a better place if all the previously existing people did something that harmed them.”
Instead of saying: “It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have one and are capable of fulfilling it.”
We can instead say:
“It seems to me that a world where it is physically impossible for someone to undertake an action that would improve it is worse than one where it is physically possible for someone to undertake that action.”
If you accept these premises then A+ is worse than A, from a person-affecting perspective anyway. I don’t think that the second premise is at all controversial, but the first one might be.
I also invite you to consider a variation of the Invincible Slaver AI variant of the problem I described. Suppose you had a choice between 1. Creating the slaves and the Invincible Slaver AI & 2. Doing nothing. You do not get the choice to create only the slaves; it’s a package deal, slaves and Slaver AI or nothing at all. Would you do it? I know I wouldn’t.
Don’t have as much time as I would like, but short and (not particularly) sweet:
I think there is a mix-up between evaluative and normative concerns here. We could say that the repugnant conclusion world is evaluated as better than the current world, but that some fact about how we get there (via benign addition or similar) is normatively unacceptable. But even then that seems a big bullet to bite—most of us think the RC world is worse than a smaller population with high happiness (even if lower aggregate), not that it is better but that it would be immoral for us to get there.
Another way of parsing your remarks is to say that when the ‘levelling’ option is available to us, benign addition is no longer better than leaving things as they are by person-affecting lights. So B < A, and if we know we can move from A+ --> B, A+ < A as well. This has the unfortunate side-effect of violating the independence of irrelevant alternatives (if only A and A+ are on offer, we should say A+ > A, but once we introduce B, A > A+). Maybe that isn’t too big a bullet to bite, but (lexically prior) person affecting restrictions tend to lead to funky problems where we rule out seemingly great deals (e.g. a trillion blissful lives for the cost of a pinprick). That said, everything in population ethics has nasty conclusions...
However, I don’t buy the idea that we can rule out benign addition because the addition of a moral obligation harms someone independently of the drop in utility they take for fulfilling it. It seems plausible that a fulfilled moral obligation makes the world a better place. There seem to be weird consequences if you take this to be lexically prior to other concerns for benign addition: on the face of it, this suggests we should say it is wrong for people in the developing world to have children (as they impose further obligations on affluent westerners), or indeed, depending on the redistribution ethic you adopt, wrong for anyone who isn’t the most well-off person to have children. Even if you don’t, and say it is outweighed by other concerns, this still seems to misdiagnose what should be morally salient here—there isn’t even a pro tanto concern for poor parents not to have children because they’d impose further obligations on richer folks to help.
I think there is a mix-up between evaluative and normative concerns here.
That’s right, my new argument doesn’t avoid the RC for questions like “if two populations were to spontaneously appear at exactly the same time which would be better?”
Another way of parsing your remarks is to say that when the ‘levelling’ option is available to us, benign addition is no longer better than leaving things as they are by person-affecting lights. So B < A, and if we know we can move from A+ --> B, A+ < A as well.
What I’m actually arguing is that A+ is [person-affecting] worse than A, even when B is unavailable. This is due to following the axiom of transitivity backwards instead of forwards. If A>B and A+<B then A+<A. If I were to give a more concrete reason for why A+<A, I would say that the fact that the A people are unaware that the + people exist is irrelevant; they are still harmed. This is not without precedent in ethics: most people think that a person who has an affair harms their spouse, even if their spouse never finds out.
However, after reading this essay by Eliezer (after I wrote my November 8th comment), I am beginning to think the intransitivity that the person affecting view seems to create in the Benign Addition Paradox is an illusion. “Is B better than A?” and “Is B better than A+?” are not the same question if you adopt a person affecting view, because the persons being affected are different in each question. If you ask two different questions you shouldn’t expect transitive answers.
There seem to be weird consequences if you take this to be lexically prior to other concerns for benign addition: on the face of it, this suggests we should say it is wrong for people in the developing world to have children (as they impose further obligations on affluent westerners
I know, the example I gave was that we all might be harming unimaginably affluent aliens by reproducing. I think you are right that even taking the objections to it that I gave into account, it’s a pretty weird conclusion.
there isn’t even a pro tanto concern for poor parents not to have children because they’d impose further obligations on richer folks to help.
I don’t know, I’ve heard people complain about poor people reproducing and increasing the burden on the welfare system before. Most of the time I find these complainers repulsive, I think their complaints are motivated by ugly, mean-spirited snobbery and status signalling, rather than genuine ethical concerns. But I suspect that a tiny minority of the complainers might have been complaining out of genuine concern that they were being harmed.
Maybe that isn’t too big a bullet to bite, but (lexically prior) person affecting restrictions tend to lead to funky problems where we rule out seemingly great deals (e.g. a trillion blissful lives for the cost of a pinprick).
Again, I agree. My main point in making this argument was to try to demonstrate that a pure person-affecting viewpoint could be saved from the benign addition paradox. I think that even if I succeeded in that, the other weird conclusions I drew (i.e., we might be hurting super-rich aliens by reproducing) demonstrate that a pure person-affecting view is not morally tenable. I suspect the best solution might be to develop some pluralist synthesis of person-affecting and objective views.
It seems weird to say A+ < A on a person affecting view even when B is unavailable, in virtue of the fact that the people in A now labour under an (unknown to them, and impossible to fulfil) moral obligation to improve the lives of the additional persons. Why stop there? We seem to suffer infinite harm by failing to bring into existence people we stipulate would have positive lives but necessarily cannot exist. The fact that these (unknown, impossible to fulfil) obligations are non-local also leads to alien-y reductios. Further, we generally do not want to say that impossible-to-fulfil obligations really obtain, still less that being subject to them harms us—why believe that?
Intransitivity
I didn’t find the Eliezer essay enlightening, but it is orthodox to say that evaluation should have transitive answers (“is A better than A+, is B better than A+?”), and most person affecting views have big problems with transitivity: consider this example.
World 1: A = 2, B = 1
World 2: B = 2, C = 1
World 3: C = 2, A = 1
By a simple person affecting view, W1>W2, W2>W3, W3>W1. So we have an intransitive cycle. (There are attempts to dodge this via comparative harm views etc., but ignore that).
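To see the cycle mechanically, here is a minimal sketch; the scoring rule is just one crude way of cashing out the ‘simple person affecting view’ (only the status-quo population counts, and dropping out of existence is scored as wellbeing 0), so treat the formalization as illustrative:

worlds = {
    "W1": {"A": 2, "B": 1},
    "W2": {"B": 2, "C": 1},
    "W3": {"C": 2, "A": 1},
}

def better_to_stay(status_quo, alternative):
    # Sum the wellbeing change of everyone in the status quo; absent people count as 0.
    sq, alt = worlds[status_quo], worlds[alternative]
    change = sum(alt.get(person, 0) - level for person, level in sq.items())
    return change < 0  # True means the move harms the status-quo population on net

for x, y in [("W1", "W2"), ("W2", "W3"), ("W3", "W1")]:
    print(f"{x} > {y}: {better_to_stay(x, y)}")

# All three comparisons print True: W1 > W2 > W3 > W1, an intransitive cycle.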
One way person affecting views can avoid normative intransitivity (which seems really bad) is to give normative principles that set how you pick among available worlds. So once you are in a given world (say A), you can say that no option is acceptable that leads to anyone in that world ending up worse off. So once one knows there is a path to B via A+, taking the first step to A+ is unacceptable, but it would be okay if no A+ to B option were available. This violates the independence of irrelevant alternatives and leads to path dependency, but that isn’t such a big bullet to bite (you retain within-choice ordering).
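As a toy formalization of that normative rule (one crude way to cash it out; the description above doesn’t commit to these details): from the status quo, a move is acceptable only if no world reachable after it leaves anyone who currently exists worse off than they are now.

worlds = {
    "A":  {f"a{i}": 10 for i in range(10)},
    "A+": {**{f"a{i}": 20 for i in range(10)},
           **{f"n{i}": 9.5 for i in range(3)}},   # 3 newcomers stand in for the million
    "B":  {**{f"a{i}": 9.8 for i in range(10)},
           **{f"n{i}": 9.8 for i in range(3)}},
}

def reachable(start, moves):
    # All worlds reachable from 'start' via the available moves.
    seen, frontier = {start}, [start]
    while frontier:
        w = frontier.pop()
        for nxt in moves.get(w, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def acceptable(status_quo, option, moves):
    # Acceptable iff nobody alive now ends up worse off in any reachable world.
    current = worlds[status_quo]
    return all(worlds[w].get(person, 0) >= level
               for w in reachable(option, moves)
               for person, level in current.items())

print(acceptable("A", "A+", {"A+": ["B"]}))   # False: B is reachable and drops the A-people to 9.8
print(acceptable("A", "A+", {}))              # True: with no A+ --> B option, no one in A loses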
Synthesis
I doubt there is going to be any available synthesis between person affecting and total views that will get out of trouble. One can get the RC so long as the ‘total term’ has some weight against (i.e. is not lexically inferior to) person-affecting wellbeing, because we can just offer massive increases in impersonal welfare that outweigh the person affecting harm. Conversely, we can keep intransitivity and other costly consequences with a mixed (non-lexically prior) view—indeed, we can downwardly dutch book someone by picking our people with care to get pairwise comparisons, eg.
W1: A=10, B=5
W2: B=6, C=2
W3: C=3, A=1
W4: A=2, B=1
Even if you put almost all your weight on impersonal welfare and only a tiny weight on person-affecting harm, we can make sure we only reduce total welfare very slightly between each world, so the loss can be made up for by the person-affecting benefit. It seems the worst of both worlds. I find accepting the total view (and the RC) the best way out.
most person affecting views have big problems with transitivity
That is because I don’t think the person affecting view asks the same question each time (that was the point of Eliezer’s essay). The person-affecting view doesn’t ask “Which society is better, in some abstract sense?” It asks “Does transitioning from one society to the other harm the collective self-interest of the people in the original society?” That’s obviously going to result in intransitivity.
I doubt there is going to be any available synthesis between person affecting and total views that will get out of trouble.....Conversely, we can keep intransitivity and other costly consequences with a mixed (non-lexically prior) view—indeed, we can downwardly dutch book someone by picking our people with care to get pairwise comparisons, eg.
I think I might have been conflating the “person affecting view” with the “prior existence” view. The prior existence view, from what I understand, takes the interests of future people into account, but reserves present people the right to veto their existence if it seriously harms their current interest. So it is immoral for existing people to create someone with low utility and then refuse to help or share with them because it would harm their self-interest, but it is moral [at least in most cases] for them to refuse to create someone whose existence harms their self-interest.
Basically, I find it unacceptable for ethics to conclude something like “It is a net moral good to kill a person destined to live a very worthwhile life and replace them with another person destined to live a slightly more worthwhile life.” This seems obviously immoral to me. It seems obvious that a world where that person is never killed and lives their life is better than one where they were killed and replaced (although one where they were never born and the person with the better life was born instead would obviously be best of all).
On the other hand, as you pointed out before, it seems trivially right to give one existing person a pinprick on the finger in order to create a trillion blissful lives who do not harm existing people in any other way.
I think the best way to reconcile these two intuitions is to develop a pluralist system where prior-existence concerns have much, much, much larger weight than total concerns, but not infinitely large weight. In more concrete terms, it’s wrong to kill someone and replace them with one slightly better off person, but it could be right to kill someone and replace them with a quadrillion people who lead blissful lives.
This doesn’t completely avoid the RC of course. But I think that I can accept that. The thing I found particularly repugnant about the RC is that an RC-type world is the best practicable world, i.e., the best possible world that can ever be created given the various constraints its inhabitants face. That’s what I want to avoid, and I think the various pluralist ideas I’ve introduced successfully do so.
You are right to point out that my pluralist ideas do not avoid the RC for a sufficiently huge world. However, I can accept that. As long as an RC world is never the one we should be aiming for I think I can accept it.
I agree with Unnamed that this post misunderstands Parfit’s argument by tying it to empirical claims about resources that have no relevance.
My argument was against the Mere Addition Paradox, which works by progressively adding more and more people, and against the common belief that one of the implications of the MAP is that we have a moral duty to devote all our resources to creating extremely large numbers of people.
My main goal is to integrate the common intuition that A+ is better than A with the intuition that creating a vast number of people with low quality of life is bad. Parfit supports the intuition that A+ is better than A by pointing out that the extra people are not doing the inhabitants of A any harm by existing. I point out that the reason this is true is that the extra inhabitants come with their own resources, and that a society with those extra resources, but fewer people (A++), would be even better.
Just imagine God is offering you choices between different universes with inhabitants at the stipulated levels of wellbeing: he offers you A, then offers to let you take A+, then B, then B+, etc.
If each world had the same amount of resources then I’d choose A, it’s the most efficient one at converting resources into overall value.
My understanding of Parfit’s point is that it lets you argue that, all other things being equal, a huge population with a low quality of life is better than a small one with a high quality of life. This is what I am trying to refute. Like Unnamed, you don’t seem to think this is necessarily what the MAP implies.
This is what the repugnant conclusion is all about: it has nothing whatsoever to do with whether or not Z (or the ‘mechanism’ of mere addition to get from A to Z) is feasible under resource constraints, but that, if this were possible, maximizing aggregate value obliges us to accept this repugnant conclusion. I don’t want to be mean, but this is a really basic error.
Again, my main point in writing this was to attack the chain of logic that leads from the intuition that adding a few extra people (to get A+) will do no harm all the way to the Repugnant Conclusion. In other words, to attack the paradoxical nature of the MAP. I am aware that there are other arguments for the RC that require other responses, such as the one about maximizing aggregate utility. Would you buy those cable packages if the government wasn’t forcing you to?
Perhaps I should have started with the pluralist values, since they were sort of the underpinning of my argument. I am basically advocating a system where creating new lives worth living, improving the utility of those who already exist, and possibly other values such as equality all contribute to Overall Value. However, they have diminishing returns relative to each other (if saying that the value of creating a life worth living changes gives you the creeps, just keep the value of doing that constant and change the value of the others; it’s essentially the same). I’m not sure if increasing total utility should be a contributing value on its own, or if it is just a side-effect of increasing both the number of lives worth living and the average utility simultaneously.
So the more lives worth living you have, the greater the contribution that enhancing the utility of existing lives makes to overall value. For instance, in a very small population, using resources to create a life worth living might contribute 1 Overall Value Point (OVP), while using those same resources to improve existing lives might only produce 0.5 OVPs. However, as the population grows larger, improving existing lives generates more and more OVPs, while the value of creating new lives worth living shrinks or remains constant.*
So maybe, if you added a vast number of lives worth living to a world, you could generate the same amount of OVP that you could by increasing the average utility just a little. But it would be a fantastically inefficient way to generate OVP. A world where some of the resources used to sustain all those lives were instead used to enhance the lives of those who already exist would be a world with vastly more overall value.
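To make this concrete, here is one hypothetical shape such an Overall Value function could take. The functional form, weights, and numbers are purely illustrative guesses (nothing above commits to them): OVP from added lives grows only logarithmically with population, while OVP from average wellbeing does not saturate.

import math

def overall_value(population, avg_wellbeing, life_weight=1.0, avg_weight=1.0, scale=1e9):
    # Diminishing returns on sheer numbers: each extra life worth living adds less OVP.
    lives_term = life_weight * math.log1p(population / scale)
    # Improving average wellbeing keeps paying off at any population size.
    avg_term = avg_weight * avg_wellbeing
    return lives_term + avg_term

# A huge, barely-worth-living world vs. a smaller, flourishing one:
print(overall_value(population=1e15, avg_wellbeing=1))    # ~14.8
print(overall_value(population=8e9,  avg_wellbeing=15))   # ~17.2 (the smaller world wins here)

# Caveat: the log term is unbounded, so a large enough population still wins eventually;
# only a capped (asymptotic) lives term would block that, which is what the reply below presses on.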
Given that the ‘average term’ doesn’t dominate (i.e. isn’t lexically prior to) the total utility term, there will be deals this average/total pluralist should accept where we lose some average but gain more than enough total utility to make up for it.
Is this any different from Zeno’s paradoxes of motion? I.e., you’re basically saying that there is no single point where the changes are big enough to become undesirable, and so eventually we’ll get to a point that everyone agrees is undesirable. How is that any different from saying Achilles will never catch the tortoise?
*I imagine that actually the values might also change relative to the resources available. Having 8 billion lives worth living on one planet seems like a good amount, but having just 8 billion lives worth living in a whole galaxy seems like a waste.
1) I don’t think anyone in the entire population ethics literature reads Parfit as you do: the moral problem is not one of feasibility via resource constraint, but rather just that Z is a morally preferable state of affairs to A, even if it is not feasible. Again, the paradoxical nature of the MAP is not harmed even if the steps it demands are utterly infeasible or even nomologically impossible; the claim is that were we able to actualize Z, we should do it.
Regardless, I don’t see how the ‘resource constraint complaint’ you make would trouble the reading of Parfit you offer. Parfit could just stipulate that the ‘gain’ in resources required to go from A to A+ is just an efficiency gain, and so A → Z (or A → B, A → Z) does not involve any increase in consumption. Or we could stipulate that the original population in A, although giving up some resources, are made happier by knowing there is this second group of people, etc. etc. So it hardly seems necessarily the case that A to A+ demands increased consumption. Denying these alternatives looks like fighting the hypothetical.
2) I think the pluralist point stands independently of the resource constraint complaint. But you seem to imply that you value efficient resource use independently: you prefer A because it is a more efficient use of resources, you note there might be diminishing returns to the value of ‘added lives’ so adding lives becomes a merely inefficient way of adding value, etc. Yet I don’t think we should care about efficiency save as an instrument for getting value. All things equal, a world producing 50 utils while burning 2 million utils’ worth of resources is better than one producing 10 utils while burning only 10. So (again) objections to feasibility or efficiency shouldn’t harm the MAP route to the repugnant conclusion.
3) I take it your hope for escaping the MAP is that some sort of weighted sum or combination of total utility, the utility of those who already exist, and possibly the average utility of lives will get us our ‘total value’. However, unless you hold that the ‘average term’ or the ‘person affecting’ term is lexically prior to total utility (so no amount of utility can compensate for a drop in either), you are still susceptible to the variant of the MAP I gave above:
A: 10 people at wellbeing 10
A+: 10 people at wellbeing 20 & 1 million at wellbeing 9.5
B: 1 million and ten people at wellbeing 9.8
So the A to A+ move has a small drop in average but a massive gain in utility, and persons already existing gain a boost in their wellbeing (and I can twist the dials even more astronomically). So if we can add these people, redistributing between them such that total value and equality increase seems plausible. And so we’re off to the races. It might be the case that each move demands arbitrarily massive (and inefficient) use of resources to actualize—but, again, this is irrelevant to a moral paradox. The only way the diminishing marginal returns point would help avoid the MAP is if they were asymptotic to some upper bound. However, cashing things out that way looks implausible, and is also vulnerable to intransitivity.
I don’t see the similarity to Zeno’s paradoxes of motion—or, at least, I don’t see how this variant is more similar to Zeno than the original MAP is. Each step from A to A+ to B … to Z, either in the original or in my variant designed to make life difficult for your view, is a step that increases total value. Given transitivity, Z will be better than A. If you think this is unacceptably Zeno-like, then you could just make that complaint against the MAP simpliciter (although, FWIW, I think there are sufficient disanalogies: Zeno only works by taking each ‘case’ asymptotically closer to the point where Achilles and the tortoise meet, whereas the MAP keeps expanding across the relevant metrics, so it seems more analogous to a Zeno case where Achilles is ahead of the tortoise).
I don’t think anyone in the entire population ethics literature reads Parfit as you do: the moral problem is not one of feasibility via resource constraint, but rather just that Z is a morally preferable state of affairs to A, even if it is not feasible.
The view I am criticizing is not that Z may be preferable to A under some circumstances. It is the view that if the only way Z and A differ is that Z has a higher population and a lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked.
Again, the paradoxical nature of the MAP is not harmed even if the steps it demands are utterly infeasible or even nomologically impossible; the claim is that were we able to actualize Z, we should do it.
Again, my complaint with the paradox is not that, if Z and A are our only choices, A is preferable to Z. Rather, my complaint is with the interpretation that if we were given some other alternative, Y, that has a much larger population than A, but a smaller population and a higher quality of life than Z, then Z would be preferable to Y as well.
All things equal, a world producing 50 utils while burning 2 million utils’ worth of resources is better than one producing 10 utils while burning only 10. So (again) objections to feasibility or efficiency shouldn’t harm the MAP route to the repugnant conclusion.
Again, I admitted that my solution might allow a MAP route to the repugnant conclusion under some instances like the one you describe. My main argument is that under circumstances where our choices are not constrained in such a manner, it is better to pick a society with a higher quality of life and lower population.
So the A to A+ move has a small drop in average but a massive gain in utility, and persons already existing gain a boost in their wellbeing (and I can twist the dials even more astronomically). So if we can add these people, redistributing between them such that total value and equality increase seems plausible. And so we’re off to the races. It might be the case that each move demands arbitrarily massive (and inefficient) use of resources to actualize—but, again, this is irrelevant to a moral paradox.
Again, my objection is not that going this route is the best choice if it is the only choice we are allowed. My objection is to people who interpret Parfit to mean that even under circumstances where we are not in such a hypothetical and have more options to choose from, we should still choose the world with lives barely worth living (e.g. Robin Hanson). Again, those people may be interpreting Parfit incorrectly, which in turn makes my criticism seem like an incorrect interpretation of Parfit. But I think it is a common enough view that it deserves criticism.
In light of your and Unnamed’s comments I have edited my post and added an explanatory paragraph at the beginning, which says:
“EDIT: To make this clearer, the interpretation of the Mere Addition Paradox this post is intended to criticize is the belief that, of two societies that differ in no way other than that one has a higher population and a lower quality of life than the other, the higher-population, lower-quality-of-life society is necessarily better than the one with the lower population and higher quality of life. Several commenters have argued that this is not a correct interpretation of the Mere Addition Paradox. They seem to claim that a more correct interpretation is that a sufficiently large population with a lower quality of life is better than a smaller one with a higher quality of life, but that it may need to differ in other ways (such as access to resources) to be truly better. They may be right, but I think that it is still a common enough interpretation that it needs attacking. The main practical difference between the interpretation that I am attacking and the interpretation they hold is that the former confers a moral obligation to create as many people as possible, regardless of the effects on quality of life, but the latter does not.”
Let me know if that deals sufficiently with your objections.
“It is the view that if the only way Z and A differ is that Z has a higher population and a lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked.”
Generally it’s a good idea to think twice and reread before assuming that a published and frequently cited paper is saying something so obviously stupid.
Your edit doesn’t help much at all. You talk about what others “seem to claim”, but the argument that you have claimed Parfit is making is so obviously nonsensical that it would lead me to wonder why anyone cites his paper at all, or why any philosophers or mathematicians have bothered to refute or support its conclusions with more than a passing snark. A quick Google search on the term “Repugnant Conclusion” leads to a Wikipedia page that is far more informative than anything you have written here.
Generally it’s a good idea to think twice and reread before assuming that a published and frequently cited paper is saying something so obviously stupid.
It doesn’t seem any less obviously stupid to me than the more moderate conclusion you claim Parfit has drawn. If you really believe that creating new lives barely worth living (or “lives someone would barely choose to live,” in your words) is better than increasing the utility of existing lives, then the next logical step is to confiscate all the resources people are using to live at standards of life higher than “a life someone would barely choose to live” and use them to make more people instead. That would result in a society identical to the previous one except that it has a lower quality of life and a higher population.
Perhaps it would have sounded a little better if I had said “It is the view that if the only way Z and A differ is that Z has a higher population and a lower quality of life, then Z is preferable to A, providing that Z’s larger population is large enough that it has higher total utility than A.” I disagree with this of course; it seems to me that total and average utility are both valuable, and one shouldn’t dominate the other.
Also, I’m sorry to have retracted the comment you commented on, I did that before I noticed you had commented on it. I decided that I could explain my ideas more briefly and clearly in a new comment and posted that one in its place.
Okay, I think I finally see where our inferential differences are and why we seem to be talking past each other. I’m retracting my previous comment in favor of this one, which I think explains my view much more clearly.
I interpreted the Repugnant Conclusion to mean that a world with a large population with lives barely worth living is the optimal world, given the various constraints placed on it. In other words, given a world with a set amount of resources, the optimal way to convert those resources to value is to create a huge population with lives barely worth living. I totally disagree with this.
You interpreted the Repugnant Conclusion to mean that a world with a huge population of lives barely worth living may be a better world, but not necessarily the optimal world. I may agree with this.
To use a metaphor, imagine a 25-horsepower engine that works at 100% efficiency, generating 25 horsepower. Then imagine a 100-horsepower engine that works at 50% efficiency, generating 50 horsepower. The second engine is better at generating horsepower than the first one, but it is less optimal at generating horsepower: it does not generate it as well as it possibly could.
So when you say:
All things equal, a world producing 50 utils while burning 2 million utils’ worth of resources is better than one producing 10 utils while burning only 10.
We can agree (if you accept my pluralist theory) that the first world is better, but the second one is more optimal. The first world has generated more value, but the second has done a more efficient job of it.
So, if you accept my pluralist theory, we might also say that a population Z, consisting of a galaxy full of 3 quadrillion people that uses the resources of the galaxy to give them lives barely worth living, would be better than A, a society consisting of a planet full of ten billion people that uses the planet’s resources to give its inhabitants very excellent lives. However, Z would be less morally optimal than A, because A uses all the resources of the planet to give people excellent lives, while Z squanders its resources creating more people. We could then say that Y, a galaxy full of 1 quadrillion people with very excellent lives, is both better than Z and more optimal than Z. We could also say that Y is better than A, and equally optimal as A. However, Y might be worse (but more optimal) than a galaxy with a septillion people living lives barely worth living. Similarly, we might say that A is both more optimal than, and better than, B, a planet of 15 billion people living lives barely worth living.
The arguments I have made in the OP have been directed at the idea that a population full of lives barely worth living is the optimal population, the population that converts the resources it has into value most efficiently (assuming you accept my pluralist moral theory’s definition of efficiency). You have been arguing that even if a given population is the most efficient at generating value, there might be another population so much huger that it could generate more value, even though it is much less efficient at doing so. I do not see anything contradictory about those two statements. I think that I mistakenly thought you were arguing that such a society would also be more optimal.
And if that is all the Repugnant Conclusion is, I fail to see what all the fuss is about. The reason it seemed so repugnant to me was that I thought it argued that a world full of people with lives barely worth living was the very best sort of world, and that we should do everything we can to bring such a world about. However, you seem to imply that that isn’t what it means at all. If the Mere Addition Paradox and the Repugnant Conclusion do not imply that we have a moral imperative to bring such a vastly populated world about, then all they amount to is a weird thought experiment with no bearing on how people should behave. A curiosity, nothing more.
Even if your argument is a more accurate interpretation of Parfit, I think the idea that a world full of people with lives barely worth living is the optimal one is still a common enough idea that it merits a counterargument. And I think the reason the OP is so heavily upvoted is that many people held the same impression of Parfit that I did.
I agree with Unnamed that this post misunderstands Parfit’s argument by tying it empirical claims about resources that have no relevance.
Just imagine God is offering you choices between different universes with inhabitants of the stipulated level of wellbeing: he offers you A, then offers you to take A+, then B, then B+, etc. If you are interested in maximizing aggregate value you’ll happily go along with each step to Z (indeed, if you are offered all the worlds from A to Z at once an aggregate maximizer will go straight for Z. This is what the repugnant conclusion is all about: it has nothing whatsoever to do with whether or not Z (or the ‘mechanism’ of mere addition to get from A to Z) is feasible under resource constraint, but that if this were possible, maximizing aggregate value obliges we take this repugnant conclusion. I don’t want to be mean, but this is a really basic error.
The OP offers something much better when offering a pluralist view to try and get out of the mere addition paradox by saying we should have separate term in our utility function for average level of well-being (further, an average of currently existing people), and that will stop us reaching the repugnant conclusion. However, it only delays the inevitable. Given the ‘average term’ doesn’t dominate (or is lexically prior to) the total utility term, there will be acceptable deals this average total pluralist should accept where we lose some average but gain more than enough total utility to make up for it. Indeed, for a person affecting view we can make it so that the ‘original’ set of people in A get even better:
A : 10 people at wellbeing 10
A+: 10 People at wellbeing 20 & 1 million at wellbeing 9.5
B: 1 million and ten people at wellbeing 9.8.
A to A+ and A+ to B increase total utility. Moving from A to A+ is a drop in average utility by a bit under 0.5 points, but multiples the total utility by around 100 000, and all the people in A have double their utility. So it seems a pluralist average/total person view is should accept these moves, and so should we’re off to the repugnant conclusion again (and if they don’t, we can make even stronger examples like 10^10 new people in A with wellbeing 9.99 and everyone originally in A gets 1 million utility, etc.)
Aside 1: Person affecting views (caring about people who ‘already’ exist) can get you out of the repugnant conclusion, but has their own costs: Intransitivity. If you only care about people who exist, then A → A+ is permissible (no one is harmed), A+ --> B is permissible (because we are redistributing well being among people who already exist), but A --> B is not permissible. You can also set up cycles whereby A>B>C>A.
Aside 2: I second the sentiment that the masses of upvotes this post has received reflects poorly on the LW collective philosophical acumen (‘masses’, relatively speaking: I don’t think this post deserves a really negative score, but I don’t think a post that has such a big error in it should be this positive, still less be exhorted to be ‘on the front page’). I’m currently writing a paper on population ethics (although I’m by no means an expert on the field), but seeing this post get so many upvotes despite the fatal misunderstanding of plausibly the most widely discussed population ethics case signals you guys don’t really understand the basics. This undermines the not-uncommon LW trope that analytic philosophy is not ‘on the same level’ as bone fide LW rationality, and makes me more likely to account for variance between LW and the ‘mainstream view’ on ethics, philosophy of mind, quantum mechanics (or, indeed, decision theory or AI) as LWers being on the wrong side of the Dunning-Kruger effect.
I’ve been thinking about this argument (which is formally called the Benign Addition Paradox) for a few months, and I’m no longer sure it holds up. I began to think about if I would support doing such a thing in real life. For instance, I wondered if I would push a button that would create a bunch of people who are forced to be my slaves for a couple days per week, but are freed for just long enough each week that their lives could be said to be worthwhile. I realized that I would not.
Why? Because if I created those people with lower utility than me, I would immediately possess an obligation to free them and then transfer some of my utility to them, which would reduce my level of utility. So, if we adopt a person-affecting view, we can adopt the following rule: Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off.
So A+ is worse than A because the people who previously existed in A have a moral duty to transfer some of their utility to the new people who were added. They have a duty to convert A+ into B, which would harm them.
Now, you might immediately bring up Parfit’s classic argument where the new people are geographically separated from the existing people, and therefore incapable of being helped. In that case, hasn’t no harm been done, since the existing people are physically incapable of fulfilling the moral obligation they have? No, I would argue. It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have one and are capable of fulfilling it.
I think that the geographic separation argument seems plausible because it contaminates what is an essentially consequentialist argument with virtue ethics. The geographic separation is no one’s fault, no one choose to cause it, so it seems like it’s morally benign. Imagine, instead, that you had the option of pushing a button that would have two effects:
1) It would create a new group of people who would be your slaves for a few days each week, but be free long enough that their life could be said to be barely worthwhile.
2) It would create an invincible, unstoppable AI that will thwart any attempt to equalize utility between the new people and existing people. It will even thwart an attempt by you to equalize utility if you change your mind.
I don’t know about you, but I sure as hell wouldn’t push that button, even though it does not differ from the geographic separation argument in any important way.
Of course, this argument does create some weird implications. For instance, it implies that there might be some aliens out there with a much higher standard of living than we have, and we are inadvertently harming them by reproducing. However, it’s possible that the reason that this seems so counterintuitive is that when contemplating it we are mapping it to the real world, not the simplified world we have been using to make our arguments in so far. In the real world we can raise the following practical objections:
1) We do not currently live in a world where utility is Pareto efficient. In the various addition paradox arguments it is assumed to be, but that is a simplifying assumption that does not reflect the real world. Generally when we create a new person in this day and age we increase utility, both by creating new family members and friends for people, and by allowing greater division of labor to grow the economy. So adding new people might actually help the aliens by reducing their moral obligation.
2) We already exist, and stopping people from having children generally harms them. So even if the aliens would be better off if we had never existed, now that we exist our desire to reproduce has to be taken into account.
3) If we ever actually meet the aliens, it seems likely that through mutual trade we could make each other both better off.
Of course, as I said before, these are all practical objections that don’t affect the principle of the thing. If the whole “possibly harming distant aliens by reproducing” thing still seems too counterintuitive to you, you could reject the person-affecting principle, either in favor of an impersonal type of morality, or in favor of some sort of pluralist ethics that takes both impersonal and person-affecting morality into account.
You’ve been one of my best critics in this, so please let me know if you think I’m onto something, or if I’m totally off-base.
Aside: Another objection to the Benign Addition Paradox I’ve come up with goes like this.
A: 10 human beings at wellbeing 10.
A+: 10 human beings at wellbeing 50 &1million sadistic demon-creatures at wellbeing 11. The demon-creatures derive 9 wellbeing each from torturing humans or watching humans being tortured.
B: 10 human beings at wellbeing −10,000 (from being tortured by demons) & 1 million sadistic demon creatures at wellbeing 20 (9 of which they get from torturing the 10 humans).
All these moves raise total utility, average utility, and each transition benefits all the persons involved, yet B seems obviously worse than A. The most obvious solutions I could think of were:
1) The “conferring a moral obligation on someone harms them” argument I already elucidated.
2) Not counting any utility derived from sadism towards the total.
I’m interested in what you think.
It is traditionally held in ethics that “ought implies can”—that is, that you don’t have to do any things that you cannot in fact do.
That is true, but I think that the discrepancy arises from me foolishly using a deontologically-loaded word like “obligation,” in a consequentialist discussion.
I’ll try to recast the language in a more consequentialist style: Instead of saying that, from a person-affecting perspective: “Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off.”
We can instead say: “An action that adds new people to the world, from a person-affecting perspective, makes the world a worse place if, after the action is taken, the world would be made a better place if all the previously existing people did something that harmed them.”
Instead of saying: “It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have one and are capable of fulfilling it.”
We can instead say: “It seems to me that a world where it is physically impossible for someone to undertake an action that would improve it is worse than one where it is physically possible for someone to undertake that action.”
If you accept these premises then A+ is worse than A, from a person-affecting perspective anyway. I don’t think that the second premise is at all controversial, but the first one might be.
I also invite you to consider a variation of the Invincible Slaver AI variant of the problem I described. Suppose you had a choice between 1. Creating the slaves and the Invincible Slaver AI & 2. Doing nothing. You do not get the choice to create only the slaves, it’s a package deal, slave and Slaver AI or nothing at all. Would you do it? I know I wouldn’t.
Don’t have as much time as I would like, but short and (not particularly) sweet:
I think there is a mix up between evaluative and normative concerns here. We could say that the repugnant conclusion world is evaluated better than the current world, but some fact about how we get there (via benign addition or similar) is normatively unacceptable. But even then that seems a big bullet to bite—most of us think the RC is worse than a smaller population with high happiness (even if lower aggregate), not that it is better but it would be immoral for us to get there.
Another way of parsing your remarks is to say that when the ‘levelling’ option is available to us, benign addition is no longer better than leaving things as they are by person-affecting lights. So B < A, and if we know we can move from A+ --> B, A+ < A as well. This has the unfortunate side-effect of violating irrelevance of independent alternatives (if only A and A+ are on offer, we should say A+ > A, but once we introduce B, A > A+). Maybe that isn’t too big a bullet to bite, but (lexically prior) person affecting restrictions tend to lead to funky problems where we rule out seemingly great deals (e.g. a trillion blissful lives for the cost of a pinprick). That said, everything in population ethics has nasty conclusions...
However, I don’t buy the idea that we can rule out benign addition because the addition of moral obligation harms someone independent of the drop in utility they take for fulfilling it. It seems plausible that a fulfilled moral obligation makes the world a better place. There seem to be weird consequences if you take this to be lexically prior to other concerns for benign addition: on the face of it, this suggests we should say it is wrong for people in the developing world to have children (as they impose further obligations on affluent westerners), or indeed, depending on the redistribution ethic you take, everyone who isn’t the most well-off person. Even if you don’t and say it is outweighed by other concerns, this still seems to be misdiagnosing what should be morally salient here—there isn’t even a pro tanto concern for poor parents not to have children because they’d impose further obligations on richer folks to help.
That’s right, my new argument doesn’t avoid the RC for questions like “if two populations were to spontaneously appear at exactly the same time which would be better?”
What I’m actually arguing is that A+ is [person-affecting] worse than A, even when B is unavailable. This is due to following the axiom of transitivity backwards instead of forwards. If A>B and A+<B then A+<A. If I was to give a more concrete reason for why A+<A I would say that the fact that the A people are unaware that the + people exist is irrelevant, they are still harmed. This is not without precedent in ethics, most people think that a person who has an affair harms their spouse, even if their spouse never finds out.
However, after reading this essay by Eliezer, (after I wrote my November 8th comment) I am beginning to think the intransitivity that the person affecting view seems to create in the Benign Addition Paradox is an illusion. “Is B better than A?” and “Is B better than A+” are not the same question if you adopt a person affecting view, because the persons being affected are different in each question. If you ask two different questions you shouldn’t expect transitive answers.
I know, the example I gave was that we all might be harming unimaginably affluent aliens by reproducing. I think you are right that even taking the objections to it that I gave into account, it’s a pretty weird conclusion.
I don’t know, I’ve heard people complain about poor people reproducing and increasing the burden on the welfare system before. Most of the time I find these complainers repulsive; I think their complaints are motivated by ugly, mean-spirited snobbery and status signalling rather than genuine ethical concerns. But I suspect that a tiny minority of them might have been complaining out of genuine concern that they were being harmed.
Again, I agree. My main point in making this argument was to try to demonstrate that a pure person-affecting viewpoint could be saved from the benign addition paradox. I think that even if I succeeded in that, the other weird conclusions I drew (i.e., we might be hurting super-rich aliens by reproducing) demonstrate that a pure person-affecting view is not morally tenable. I suspect the best solution might be to develop some pluralist synthesis of person-affecting and objective views.
It seems weird to say A+ < A on a person-affecting view even when B is unavailable, in virtue of the fact that the people in A now labour under an (unknown to them, and impossible to fulfil) moral obligation to improve the lives of the additional persons. Why stop there? We would seem to suffer infinite harm by failing to bring into existence people we stipulate would have positive lives but necessarily cannot exist. The fact that these (unknown, impossible-to-fulfil) obligations are non-local also leads to alien-style reductios. Further, we generally do not want to say that impossible-to-fulfil obligations really obtain, still less that being subject to them harms us; why believe that?
Intransitivity
I didn’t find the Eliezer essay enlightening, but it is orthodox to say that evaluation should give transitive answers (is A better than A+? is A+ better than B? is A better than B?), and most person-affecting views have big problems with transitivity. Consider this example:
World 1: A = 2, B = 1
World 2: B = 2, C = 1
World 3: C = 2, A = 1
By a simple person affecting view, W1>W2, W2>W3, W3>W1. So we have an intransitive cycle. (There are attempts to dodge this via comparative harm views etc., but ignore that).
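To make the cycle concrete, here is a minimal sketch in Python. It assumes one crude way of cashing out the ‘simple person affecting view’: score each move by the net welfare change for the people in the status-quo world, counting ceasing to exist as losing one’s current welfare. The scoring rule is my own illustrative assumption, not anything from Parfit or the literature.

```python
# Worlds from the example above: person -> welfare.
W1 = {"A": 2, "B": 1}
W2 = {"B": 2, "C": 1}
W3 = {"C": 2, "A": 1}

def person_affecting_delta(status_quo, alternative):
    """Net welfare change for the status-quo people if we move to the alternative.
    People who vanish are treated as losing their current welfare (an assumption)."""
    return sum(alternative.get(person, 0) - welfare
               for person, welfare in status_quo.items())

for name, frm, to in [("W1 -> W2", W1, W2), ("W2 -> W3", W2, W3), ("W3 -> W1", W3, W1)]:
    delta = person_affecting_delta(frm, to)
    print(f"{name}: net change for status-quo people = {delta}")

# Every move comes out negative (-1), so the view says W1 > W2, W2 > W3, and
# W3 > W1: an intransitive cycle.
```

Other ways of spelling the view out (e.g. only counting people who exist in both worlds) generate a cycle in the opposite direction, so the structural problem doesn’t depend on my particular scoring rule.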
One way person-affecting views can avoid normative intransitivity (which seems really bad) is to give normative principles that set how you pick among available worlds. So once you are in a given world (say A), you can say that no option is acceptable that leads to anyone in that world ending up worse off. So once one knows there is a path to B via A+, taking the first step to A+ is unacceptable, but it would be okay if no A+ to B option were available. This violates the independence of irrelevant alternatives and leads to path dependency, but that isn’t such a big bullet to bite (you retain within-choice ordering).
Synthesis
I doubt there is going to be any available synthesis between person-affecting and total views that will get out of trouble. One can get the RC so long as the ‘total term’ has some weight relative to (i.e. is not lexically subordinate to) person-affecting wellbeing, because we can just offer massive increases in impersonal welfare that outweigh the person-affecting harm. Conversely, we keep intransitivity and other costly consequences with a mixed (non-lexically-prior) view; indeed, we can downwardly dutch book someone by picking our people with care to get the pairwise comparisons we want, e.g.:
W1: A=10, B=5
W2: B=6, C=2
W3: C=3, A=1
W4: A=2, B=1
Even if you almost entirely value impersonal (total) welfare and put only a tiny weight on person-affecting considerations, we can make sure we only reduce total welfare very slightly between each world, so the loss can be made up for by the person-affecting benefit. It seems the worst of both worlds. I find accepting the total view (and the RC) the best way out.
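Here is a small sketch of that ‘downward dutch book’ structure, using hypothetical numbers of my own (the W1–W4 figures above are only schematic, so I am not reusing them). The mixed view is assumed to weight total welfare heavily and person-affecting change only slightly, with the person-affecting term summed over people who exist in both worlds.

```python
# Hypothetical mixed (total + person-affecting) view: w_total dominates, w_pa is tiny.
w_total, w_pa = 1.0, 0.01

worlds = [
    {"A": 10.0, "B": 5.0},      # start, total 15.000
    {"B": 6.0, "C": 8.995},     # B gains 1, A swapped out; total 14.995
    {"C": 9.995, "A": 4.995},   # C gains 1, B swapped out; total 14.990
    {"A": 5.995, "B": 8.990},   # A gains 1, C swapped out; total 14.985
]

def mixed_value(frm, to):
    """Mixed-view score for moving from world `frm` to world `to`."""
    d_total = sum(to.values()) - sum(frm.values())
    d_pa = sum(to[p] - frm[p] for p in frm if p in to)  # overlapping people only
    return w_total * d_total + w_pa * d_pa

for i in range(len(worlds) - 1):
    print(f"step {i + 1}: mixed value = {mixed_value(worlds[i], worlds[i + 1]):+.4f}")
print(f"start -> end directly: {mixed_value(worlds[0], worlds[-1]):+.4f}")

# Each individual step scores positive, but the direct start-to-end comparison
# scores negative: the view endorses a sequence of moves whose net effect it condemns.
```

The exact numbers are arbitrary; the point is only that so long as the person-affecting weight is non-zero, the per-step loss in total welfare can always be made small enough for the person-affecting gain to cover it.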
That is because I don’t think the person affecting view asks the same question each time (that was the point of Eliezer’s essay). The person-affecting view doesn’t ask “Which society is better, in some abstract sense?” It asks “Does transitioning from one society to the other harm the collective self-interest of the people in the original society?” That’s obviously going to result in intransitivity.
I think I might have been conflating the “person affecting view” with the “prior existence” view. The prior existence view, from what I understand, takes the interests of future people into account, but reserves for present people the right to veto the creation of future people if it seriously harms their current interests. So it is immoral for existing people to create someone with low utility and then refuse to help or share with them because doing so would harm their self-interest, but it is moral [at least in most cases] for them to refuse to create someone whose existence harms their self-interest.
Basically, I find it unacceptable for ethics to conclude something like “It is a net moral good to kill a person destined to live a very worthwhile life and replace them with another person destined to live a slightly more worthwhile life.” This seems obviously immoral to me. It seems obvious that a world where that person is never killed and lives their life is better than one where they were killed and replaced (although one where they were never born and the person with the better life was born instead would obviously be best of all).
On the other hand, as you pointed out before, it seems trivially right to give one existing person a pinprick on the finger in order to create a trillion blissful lives who do not harm existing people in any other way.
I think the best way to reconcile these two intuitions is to develop a pluralist system where prior-existence concerns have much, much, much larger weight than total concerns, but not infinitely large weight. In more concrete terms, it’s wrong to kill someone and replace them with one slightly better off person, but it could be right to kill someone and replace them with a quadrillion people who lead blissful lives.
This doesn’t completely avoid the RC of course. But I think that I can accept that. The thing I found particularly repugnant about the RC is the idea that an RC-type world is the best practicable world, i.e., the best possible world that can ever be created given the various constraints its inhabitants face. That’s what I want to avoid, and I think the various pluralist ideas I’ve introduced successfully do so.
You are right to point out that my pluralist ideas do not avoid the RC for a sufficiently huge world. However, I can accept that. As long as an RC world is never the one we should be aiming for I think I can accept it.
My argument was against the Mere Addition Paradox, which works by progressively adding more and more people, and against the common belief that one of the implications of the MAP is that we have a moral duty to devote all our resources to creating extremely large numbers of people.
My main goal is to integrate the common intuition that A+ is better than A with the intuition that creating a vast number of people with low quality of life is bad. Parfit supports the intuition that A+ is better than A by pointing out that the extra people are not doing the inhabitants of A any harm by existing. I point out that the reason this is true is that the extra inhabitants come with their own resources, and that a society with those extra resources but fewer people (A++) would be even better.
If each world had the same amount of resources then I’d choose A, it’s the most efficient one at converting resources into overall value.
My understanding of Parfit’s point is that it lets you argue that, all other things being equal, a huge population with low quality of life is better than a small one with high quality of life. This is what I am trying to refute. Like Unnamed, you don’t seem to think this is necessarily what the MAP implies.
Again, my main point in writing this was to attack the chain of logic that leads from the intuition that adding a few people to A+ will do no harm to the Repugnant Conclusion. In other words, to attack the paradoxical nature of the MAP. I am aware that there are other arguments for the RC that require other responses such as the one about maximizing aggregate utility. Would you buy those cable packages if the government wasn’t forcing you to?
Perhaps I should have started with the pluralist values, since they were sort of the underpinning of my argument. I am basically advocating a system where creating new lives worth living, improving the utility of those who already exist, and possibly other values such as equality all contribute to Overall Value. However, they have diminishing returns relative to each other (if saying that the value of creating a life worth living changes gives you the creeps, just keep the value of doing that constant and change the value of the others; it’s essentially the same). I’m not sure if increasing total utility should be a contributing value on its own, or if it is just a side-effect of increasing both the number of lives worth living and the average utility simultaneously.
So the more lives worth living you have, the greater the contribution that enhancing the utility of existing lives makes to overall value. For instance, in a very small population, using resources to create a life worth living might contribute 1 Overall Value Point (OVP) while using those same resources to improve existing lives might only produce 0.5 OVPs. However, as the population grows larger, improving existing lives generates more and more OVPs, while the value of creating new lives worth living shrinks or remains constant.*
So maybe, if you added a vast number of lives worth living to a world, you could generate the same amount of OVP that you could by increasing the average utility just a little. But it would be a fantastically inefficient way to generate OVP. A world where some of the resources used to sustain all those lives were instead used to enhance the lives of those who already exist would be a world with vastly more overall value.
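As a toy illustration of how these diminishing returns might be cashed out, here is a sketch with made-up functional forms (the comment above gives no formulas, so both functions below are purely hypothetical, tuned only to echo the 1 OVP / 0.5 OVP figures for a tiny population):

```python
import math

def ovp_from_new_life(population):
    """Hypothetical OVP from spending one unit of resources creating a life worth living."""
    return math.sqrt(10 / population)

def ovp_from_improvement(population):
    """Hypothetical OVP from spending the same unit improving existing lives."""
    return 0.5 * math.log10(population)

for pop in (10, 1_000, 1_000_000, 1_000_000_000):
    print(f"population {pop:>13,}: new life = {ovp_from_new_life(pop):.4f} OVP, "
          f"improvement = {ovp_from_improvement(pop):.4f} OVP")

# At a population of 10, creating a life yields ~1 OVP versus ~0.5 for improvement;
# by a billion people the ranking has flipped decisively, so piling on extra lives
# becomes a fantastically inefficient way to buy Overall Value.
```

Any pair of functions with this crossover shape would serve; nothing hangs on these particular ones.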
Is this any different from the Zeno’s paradoxes of motion? I.E. you’re basically saying that there is no point where the changes are big enough to become undesirable, so eventually we’ll get to a point that everyone agrees is undesirable. How is that any different from saying Achilles will never catch the tortoise?
*I imagine that actually the values might also change relative to the resources available. Having 8 billion lives worth living on one planet seems like a good amount, but having just 8 billion lives worth living in a whole galaxy seems like a waste.
1) I don’t think anyone in the entire population ethics literature reads Parfit as you do: the moral problem is not one of feasibility via resource constraint, but rather just that Z is a morally preferable state of affairs to A, even if it is not feasible. Again, the paradoxical nature of the MAP is not harmed even if it demands utterly infeasible or even nomologically impossible states of affairs; the point is that were we able to actualize Z, we should do so.
Regardless, I don’t see how the ‘resource constraint complaint’ would trouble even your reading of Parfit. Parfit could just stipulate that the ‘gain’ in resources required from A to A+ is just an efficiency gain, and so A → Z (or A → B, A → Z) does not involve any increase in consumption. Or we could stipulate that the original population in A, although giving up some resources, is made happier by knowing there is this second group of people, etc. etc. So it hardly seems necessarily the case that A to A+ demands increased consumption. Denying these alternatives looks like fighting the hypothetical.
2) I think the pluralist point stands independently of the resource constraint complaint. But you seem to imply that you value efficient resource consumption independently: you prefer A because it is a more efficient use of resources, you note there might be diminishing returns to the value of ‘added lives’ so adding lives becomes a merely inefficient way of adding value, etc. Yet I don’t think we should care about efficiency save as an instrument for getting value. All things equal, a world with 50 utils that burns through 2 million units of resources is better than one with 10 utils that burns through only 10. So (again) objections to feasibility or efficiency shouldn’t harm the MAP route to the repugnant conclusion.
3) I take it your hope for escaping the MAP is that some sort of weighted sum or combination of total utility, the utility of those who already exist, and possibly the average utility of lives will give us our ‘total value’. However, unless you hold that the ‘average term’ or the ‘person-affecting term’ is lexically prior to total utility (so no amount of utility can compensate for a drop in either), you are still susceptible to a variant of the MAP I gave above:
So the A to A+ move has a small drop in average but a massive gain in total utility, and persons already existing gain a boost in their wellbeing (and I can twist the dials even more astronomically). So if we can add these people, redistributing between them such that total value and equality increase seems plausible. And so we’re off to the races. It might be the case that each move demands an arbitrarily massive (and inefficient) use of resources to actualize; but, again, this is irrelevant to a moral paradox. The only way the diminishing marginal returns point would help avoid the MAP is if the returns were asymptotic to some upper bound. However, cashing things out that way looks implausible, and is also vulnerable to intransitivity.
I don’t see the similarity to Zeno’s paradoxes of motion, or at least I don’t see how this variant is more similar to Zeno than the original MAP is. Each step from A to A+ to B ... to Z, either in the original or in my variant designed to make life difficult for your view, is a step that increases total value. Given transitivity, Z will be better than A. If you think this is unacceptably Zeno-like, then you could just make that complaint against the MAP simpliciter (although, FWIW, I think there are sufficient disanalogies: Zeno only works by taking each ‘case’ asymptotically closer to the point where the tortoise and Achilles meet, whereas the MAP keeps expanding across the relevant metrics, so it seems more analogous to a Zeno case where Achilles is already ahead of the tortoise).
The view I am criticizing is not that Z may be preferable to A under some circumstances. It is the view that if the only ways Z and A differ are that Z has a higher population and a lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked.
Again, my complaint with the paradox is not that, if Z and A are our only choices, A is preferable to Z. Rather, my complaint is with the interpretation on which, if we were given some other alternative Y, with a much larger population than A but a smaller population and higher quality of life than Z, Z would be preferable to Y as well.
Again, I admitted that my solution might allow a MAP route to the repugnant conclusion under some instances like the one you describe. My main argument is that under circumstances where our choices are not constrained in such a manner, it is better to pick a society with a higher quality of life and lower population.
Again, my objection is not that going this route is the best choice if it is the only choice we are allowed. My objection is to people who interpret Parfit to mean that even under circumstances where we are not in such a hypothetical and have more options to choose from, we should still choose the world with lives barely worth living (e.g. Robin Hanson). Again, those people may be interpreting Parfit incorrectly, which in turn makes my criticism seem like an incorrect interpretation of Parfit. But I think it is a common enough view that it deserves criticism.
In light of your and Unnamed’s comments I have edited my post and added an explanatory paragraph at the beginning, which says:
“EDIT: To make this clearer, the interpretation of the Mere Addition Paradox this post is intended to criticize is the belief that, of two societies that differ in no way other than that one has a higher population and a lower quality of life than the other, the more populous society is necessarily better than the one with the lower population and higher quality of life. Several commenters have argued that this is not a correct interpretation of the Mere Addition Paradox. They seem to claim that a more correct interpretation is that a sufficiently large population with a lower quality of life is better than a smaller one with a higher quality of life, but that it may need to differ in other ways (such as access to resources) to be truly better. They may be right, but I think that it is still a common enough interpretation that it needs attacking. The main practical difference between the interpretation I am attacking and the interpretation they hold is that the former confers a moral obligation to create as many people as possible, regardless of the effects on quality of life, but the latter does not.”
Let me know if that deals sufficiently with your objections.
“It is the view that if the only ways Z and A differ are that Z has a higher population and a lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked.”
Generally it’s a good idea to think twice and reread before assuming that a published and frequently cited paper is saying something so obviously stupid.
Your edit doesn’t help much at all. You talk about what others “seem to claim”, but the argument that you have claimed Parfit is making is so obviously nonsensical that it would lead me to wonder why anyone cites his paper at all, or why any philosophers or mathematicians have bothered to refute or support its conclusions with more than a passing snark. A quick Google search on the term “Repugnant Conclusion” leads to a Wikipedia page that is far more informative than anything you have written here.
It doesn’t seem any less obviously stupid to me than the more moderate conclusion you claim Parfit has drawn. If you really believe that creating new lives barely worth living (or “lives someone would barely choose to live,” in your words) is better than increasing the utility of existing lives, then the next logical step is to confiscate all the resources people are using to live at standards of life higher than “a life someone would barely choose to live” and use them to make more people instead. That would result in a society identical to the previous one except that it has a lower quality of life and a higher population.
Perhaps it would have sounded a little better if I had said “It is the view that if the only ways Z and A differ are that Z has a higher population and a lower quality of life, then Z is preferable to A, providing that Z’s larger population is large enough that it has higher total utility than A.” I disagree with this, of course; it seems to me that total and average utility are both valuable, and one shouldn’t dominate the other.
Also, I’m sorry to have retracted the comment you commented on, I did that before I noticed you had commented on it. I decided that I could explain my ideas more briefly and clearly in a new comment and posted that one in its place.
Okay, I think I finally see where our inferential differences are and why we seem to be talking past each other. I’m retracting my previous comment in favor of this one, which I think explains my view much more clearly.
I interpreted the Repugnant Conclusion to mean that a world with a large population with lives barely worth living is the optimal world, given the various constraints placed on it. In other words, given a world with a set amount of resources, the optimal way to convert those resources to value is to create a huge population with lives barely worth living. I totally disagree with this.
You interpreted the Repugnant Conclusion to mean that a world with a huge population of lives barely worth living may be a better world, but not necessarily the optimal world. I may agree with this.
To use a metaphor, imagine a 25 horsepower engine that works at 100% efficiency, generating 25 horsepower. Then imagine a 100 horsepower engine that works at 50% efficiency, generating 50 horsepower. The second engine is better at generating horsepower than the first one, but it is less optimal: it does not generate horsepower as well as it possibly could.
So when you say:
We can agree to say (if you accept my pluralist theory) that the first world is better, but the second one is more optimal. The first world has generated more value, but the second has done a more efficient job of it.
So, if you accept my pluralist theory, we might also say that a population Z, consisting of a galaxy full of 3 quadrillion people that uses the resources of the galaxy to give them lives barely worth living, would be better than A, a society consisting of a planet full of ten billion people that uses the planet’s resources to give its inhabitants very excellent lives. However, Z would be less morally optimal than A, because A uses all the resources of the planet to give people excellent lives, while Z squanders its resources creating more people. We could then say that Y, a galaxy full of 1 quadrillion people with very excellent lives, is both better than Z and more optimal than Z. We could also say that Y is better than A, and just as optimal as A. However, Y might be worse (but more optimal) than a galaxy with a septillion people living lives barely worth living. Similarly, we might say that A is both more optimal than, and better than, B, a planet of 15 billion people living lives barely worth living.
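To put some (entirely invented) numbers on the better/optimal distinction, here is a minimal sketch; “value” stands for Overall Value on the pluralist theory above, and resources are measured in planet-equivalents, with a galaxy arbitrarily assumed to hold 100,000 of them:

```python
# Hypothetical figures only; chosen so the rankings match the A / Z / Y story above.
worlds = {
    "A (planet, 10 billion excellent lives)":              {"value": 10,        "resources": 1},
    "Z (galaxy, 3 quadrillion lives barely worth living)": {"value": 1_000,     "resources": 100_000},
    "Y (galaxy, 1 quadrillion excellent lives)":           {"value": 1_000_000, "resources": 100_000},
}

for name, w in worlds.items():
    optimality = w["value"] / w["resources"]  # value generated per unit of resources
    print(f"{name}: value = {w['value']:,}, optimality = {optimality}")

# "Better" ranks by value alone (Y > Z > A); "optimal" ranks by value per unit of
# resources (A = Y > Z), which is the sense in which Z squanders its galaxy.
```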
The arguments I have made in the OP have been directed at the idea that a population full of lives barely worth living is the optimal population, the population that converts the resources it has into value most efficiently (assuming you accept my pluralist moral theory’s definition of efficiency). You have been arguing that even if that population is the most efficient at generating value, there might be another population so much huger that it could generate more value, even if it is much less efficient at doing so. I do not see anything contradictory about those two statements. I think that I mistakenly thought you were arguing that such a society would also be more optimal.
And if that is all the Repugnant Conclusion is, I fail to see what all the fuss is about. The reason it seemed so repugnant to me was that I thought it argued that a world full of people with lives barely worth living was the very best sort of world, and that we should do everything we can to bring such a world about. However, you seem to imply that that isn’t what it means at all. If the Mere Addition Paradox and the Repugnant Conclusion do not imply that we have a moral imperative to bring a vastly populated world about, then all they are is a weird thought experiment with no bearing on how people should behave. A curiosity, nothing more.
Even if your argument is a more accurate interpretation of Parfit, I think the idea that a world full of people with lives barely worth living is the optimal one is still a common enough idea that it merits a counterargument. And I think the reason the OP is so heavily upvoted is that many people held the same impression of Parfit that I did.