Indeed, on a person-affecting view we can make it so that the ‘original’ set of people in A end up even better off:
A: 10 people at wellbeing 10
A+: 10 people at wellbeing 20 & 1 million at wellbeing 9.5
B: 1 million and ten people at wellbeing 9.8.
Both A to A+ and A+ to B increase total utility. Moving from A to A+ drops average utility by a bit under 0.5 points, but multiplies total utility roughly 95,000-fold, and everyone in A has their utility doubled. So it seems a pluralist average/total/person-affecting view should accept these moves, and we’re off to the repugnant conclusion again (and if it doesn’t, we can construct even stronger examples: 10^10 new people added to A at wellbeing 9.99, with everyone originally in A getting 1 million utility, etc.).
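As a quick check, the arithmetic can be verified with a minimal Python sketch (only the wellbeing figures stipulated above are assumed):

```python
# Check totals and averages for the three worlds described above.
worlds = {
    "A":  [10] * 10,                      # 10 people at wellbeing 10
    "A+": [20] * 10 + [9.5] * 1_000_000,  # originals doubled, 1M added at 9.5
    "B":  [9.8] * 1_000_010,              # everyone levelled to 9.8
}

for name, people in worlds.items():
    total = sum(people)
    print(f"{name}: total = {total:,.0f}, average = {total / len(people):.4f}")

# A:  total = 100        average = 10.0000
# A+: total = 9,500,200  average = 9.5001  (average down ~0.5, total up ~95,000x)
# B:  total = 9,800,098  average = 9.8000  (total and average both up from A+)
```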
I’ve been thinking about this argument (which is formally called the Benign Addition Paradox) for a few months, and I’m no longer sure it holds up. I began to think about whether I would support doing such a thing in real life. For instance, I wondered if I would push a button that would create a bunch of people who are forced to be my slaves for a couple of days per week, but are freed for just long enough each week that their lives could be said to be worthwhile. I realized that I would not.
Why? Because if I created those people with lower utility than me, I would immediately possess an obligation to free them and then transfer some of my utility to them, which would reduce my level of utility. So, if we adopt a person-affecting view, we can adopt the following rule: Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off.
So A+ is worse than A because the people who previously existed in A have a moral duty to transfer some of their utility to the new people who were added. They have a duty to convert A+ into B, which would harm them.
Now, you might immediately bring up Parfit’s classic argument where the new people are geographically separated from the existing people, and therefore incapable of being helped. In that case, has no harm been done, since the existing people are physically incapable of fulfilling the moral obligation they have? No, I would argue. It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have one and are capable of fulfilling it.
I think that the geographic separation argument seems plausible because it contaminates what is an essentially consequentialist argument with virtue ethics. The geographic separation is no one’s fault; no one chose to cause it, so it seems morally benign. Imagine, instead, that you had the option of pushing a button that would have two effects:
1) It would create a new group of people who would be your slaves for a few days each week, but be free long enough that their life could be said to be barely worthwhile.
2) It would create an invincible, unstoppable AI that will thwart any attempt to equalize utility between the new people and existing people. It will even thwart an attempt by you to equalize utility if you change your mind.
I don’t know about you, but I sure as hell wouldn’t push that button, even though it does not differ from the geographic separation argument in any important way.
Of course, this argument does create some weird implications. For instance, it implies that there might be some aliens out there with a much higher standard of living than we have, and we are inadvertently harming them by reproducing. However, it’s possible that the reason this seems so counterintuitive is that when contemplating it we are mapping it onto the real world, not the simplified world we have been using to make our arguments so far. In the real world we can raise the following practical objections:
1) We do not currently live in a world where the distribution of utility is Pareto efficient. The various addition paradox arguments assume it is, but that is a simplifying assumption that does not reflect the real world. Generally, when we create a new person in this day and age we increase utility, both by creating new family members and friends for people, and by allowing greater division of labor to grow the economy. So adding new people might actually help the aliens by reducing their moral obligation.
2) We already exist, and stopping people from having children generally harms them. So even if the aliens would be better off if we had never existed, now that we exist our desire to reproduce has to be taken into account.
3) If we ever actually meet the aliens, it seems likely that through mutual trade we could make each other both better off.
Of course, as I said before, these are all practical objections that don’t affect the principle of the thing. If the whole “possibly harming distant aliens by reproducing” thing still seems too counterintuitive to you, you could reject the person-affecting principle, either in favor of an impersonal type of morality, or in favor of some sort of pluralist ethics that takes both impersonal and person-affecting morality into account.
You’ve been one of my best critics in this, so please let me know if you think I’m onto something, or if I’m totally off-base.
Aside: Another objection to the Benign Addition Paradox I’ve come up with goes like this.
A: 10 human beings at wellbeing 10.
A+: 10 human beings at wellbeing 50 & 1 million sadistic demon-creatures at wellbeing 11. The demon-creatures derive 9 wellbeing each from torturing humans or watching humans being tortured.
B: 10 human beings at wellbeing −10,000 (from being tortured by demons) & 1 million sadistic demon creatures at wellbeing 20 (9 of which they get from torturing the 10 humans).
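The numbers check out the same way as before (a quick sketch using only the stipulated figures):

```python
# Check totals and averages for the demon-creature worlds.
worlds = {
    "A":  [10] * 10,
    "A+": [50] * 10 + [11] * 1_000_000,
    "B":  [-10_000] * 10 + [20] * 1_000_000,
}

for name, people in worlds.items():
    total = sum(people)
    print(f"{name}: total = {total:,}, average = {total / len(people):.4f}")

# A:  total = 100         average = 10.0000
# A+: total = 11,000,500  average = 11.0004
# B:  total = 19,900,000  average = 19.8998
```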
All these moves raise total utility, average utility, and each transition benefits all the persons involved, yet B seems obviously worse than A. The most obvious solutions I could think of were:
1) The “conferring a moral obligation on someone harms them” argument I already elucidated.
2) Not counting any utility derived from sadism towards the total.
I’m interested in what you think.
It is traditionally held in ethics that “ought implies can”—that is, that you don’t have to do any things that you cannot in fact do.
That is true, but I think that the discrepancy arises from me foolishly using a deontologically loaded word like “obligation” in a consequentialist discussion.
I’ll try to recast the language in a more consequentialist style:
Instead of saying that, from a person-affecting perspective:
“Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off.”
We can instead say:
“An action that adds new people to the world, from a person-affecting perspective, makes the world a worse place if, after the action is taken, the world would be made a better place if all the previously existing people did something that harmed them.”
Instead of saying: “It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have one and are capable of fulfilling it.”
We can instead say:
“It seems to me that a world where it is physically impossible for someone to undertake an action that would improve it is worse than one where it is physically possible for someone to undertake that action.”
If you accept these premises then A+ is worse than A, from a person-affecting perspective anyway. I don’t think that the second premise is at all controversial, but the first one might be.
I also invite you to consider a variation of the Invincible Slaver AI variant of the problem I described. Suppose you had a choice between (1) creating the slaves and the Invincible Slaver AI and (2) doing nothing. You do not get the choice to create only the slaves; it’s a package deal: slaves and Slaver AI, or nothing at all. Would you do it? I know I wouldn’t.
Don’t have as much time as I would like, but short and (not particularly) sweet:
I think there is a mix up between evaluative and normative concerns here. We could say that the repugnant conclusion world is evaluated better than the current world, but some fact about how we get there (via benign addition or similar) is normatively unacceptable. But even then that seems a big bullet to bite—most of us think the RC is worse than a smaller population with high happiness (even if lower aggregate), not that it is better but it would be immoral for us to get there.
Another way of parsing your remarks is to say that when the ‘levelling’ option is available to us, benign addition is no longer better than leaving things as they are by person-affecting lights. So B < A, and if we know we can move from A+ --> B, A+ < A as well. This has the unfortunate side-effect of violating irrelevance of independent alternatives (if only A and A+ are on offer, we should say A+ > A, but once we introduce B, A > A+). Maybe that isn’t too big a bullet to bite, but (lexically prior) person affecting restrictions tend to lead to funky problems where we rule out seemingly great deals (e.g. a trillion blissful lives for the cost of a pinprick). That said, everything in population ethics has nasty conclusions...
However, I don’t buy the idea that we can rule out benign addition because the addition of moral obligation harms someone independent of the drop in utility they take for fulfilling it. It seems plausible that a fulfilled moral obligation makes the world a better place. There seem to be weird consequences if you take this to be lexically prior to other concerns for benign addition: on the face of it, this suggests we should say it is wrong for people in the developing world to have children (as they impose further obligations on affluent westerners), or indeed, depending on the redistribution ethic you take, everyone who isn’t the most well-off person. Even if you don’t and say it is outweighed by other concerns, this still seems to be misdiagnosing what should be morally salient here—there isn’t even a pro tanto concern for poor parents not to have children because they’d impose further obligations on richer folks to help.
I think there is a mix up between evaluative and normative concerns here.
That’s right, my new argument doesn’t avoid the RC for questions like “if two populations were to spontaneously appear at exactly the same time which would be better?”
Another way of parsing your remarks is to say that when the ‘levelling’ option is available to us, benign addition is no longer better than leaving things as they are by person-affecting lights. So B < A, and if we know we can move from A+ --> B, A+ < A as well.
What I’m actually arguing is that A+ is [person-affecting] worse than A, even when B is unavailable. This comes from following the axiom of transitivity backwards instead of forwards: if A>B and A+<B, then A+<A. If I were to give a more concrete reason for why A+<A, I would say that the fact that the A people are unaware that the + people exist is irrelevant; they are still harmed. This is not without precedent in ethics: most people think that a person who has an affair harms their spouse, even if the spouse never finds out.
However, after reading this essay by Eliezer (after I wrote my November 8th comment), I am beginning to think the intransitivity that the person-affecting view seems to create in the Benign Addition Paradox is an illusion. “Is B better than A?” and “Is B better than A+?” are not the same question if you adopt a person-affecting view, because the persons being affected are different in each question. If you ask two different questions you shouldn’t expect transitive answers.
There seem to be weird consequences if you take this to be lexically prior to other concerns for benign addition: on the face of it, this suggests we should say it is wrong for people in the developing world to have children (as they impose further obligations on affluent westerners)
I know, the example I gave was that we all might be harming unimaginably affluent aliens by reproducing. I think you are right that even taking the objections to it that I gave into account, it’s a pretty weird conclusion.
there isn’t even a pro tanto concern for poor parents not to have children because they’d impose further obligations on richer folks to help.
I don’t know; I’ve heard people complain about poor people reproducing and increasing the burden on the welfare system before. Most of the time I find these complainers repulsive; I think their complaints are motivated by ugly, mean-spirited snobbery and status signalling rather than genuine ethical concerns. But I suspect that a tiny minority of the complainers might have been complaining out of genuine concern that they were being harmed.
Maybe that isn’t too big a bullet to bite, but (lexically prior) person affecting restrictions tend to lead to funky problems where we rule out seemingly great deals (e.g. a trillion blissful lives for the cost of a pinprick).
Again, I agree. My main point in making this argument was to try to demonstrate that a pure person-affecting viewpoint could be saved from the benign addition paradox. I think that even if I succeeded in that, the other weird conclusions I drew (i.e., we might be hurting super-rich aliens by reproducing) demonstrate that a pure person-affecting view is not morally tenable. I suspect the best solution might be to develop some pluralist synthesis of person-affecting and objective views.
It seems weird to say A+ < A on a person-affecting view even when B is unavailable, in virtue of the fact that A now labours under an (unknown to them, and impossible to fulfil) moral obligation to improve the lives of the additional persons. Why stop there? We seem to suffer infinite harm by failing to bring into existence people we stipulate would have positive lives but necessarily cannot exist. The fact that these (unknown to them, impossible to fulfil) obligations are non-local also leads to alien-y reductios. Further, we generally do not want to say impossible-to-fulfil obligations really obtain, still less that being subject to them harms us—why believe that?
Intransitivity
I didn’t find the Eliezer essay enlightening, but it is orthodox to say that evaluation should have transitive answers (“is A better than A+, is B better than A+?”), and most person affecting views have big problems with transitivity: consider this example.
World 1: A = 2, B = 1
World 2: B = 2, C = 1
World 3: C = 2, A = 1
By a simple person affecting view, W1>W2, W2>W3, W3>W1. So we have an intransitive cycle. (There are attempts to dodge this via comparative harm views etc., but ignore that).
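To make the cycle explicit, here is a toy sketch of one way to formalize this simple person-affecting comparison. The modelling choices (counting absence from the destination world as a total loss, and ignoring newly created people) are just one way of filling in the view, not the only one:

```python
# Toy model: the person-affecting value of moving from world x to world y is
# the summed welfare change for everyone in x, treating absence from y as a
# loss of that person's whole welfare; newly created people are ignored.
def pa_value_of_move(x: dict, y: dict) -> float:
    return sum(y.get(person, 0) - level for person, level in x.items())

w1 = {"A": 2, "B": 1}
w2 = {"B": 2, "C": 1}
w3 = {"C": 2, "A": 1}

print(pa_value_of_move(w1, w2))  # -1, so W1 > W2
print(pa_value_of_move(w2, w3))  # -1, so W2 > W3
print(pa_value_of_move(w3, w1))  # -1, so W3 > W1: an intransitive cycle
```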
One way person-affecting views can avoid normative intransitivity (which seems really bad) is to give normative principles that set how you pick among available worlds. So once you are in a given world (say A), you can say that no option is acceptable that leads to anyone in that world ending up worse off. So once one knows there is a path to B via A+, taking the first step to A+ is unacceptable, but it would be okay if no A+ to B option were available. This violates irrelevance of independent alternatives and leads to path dependency, but that isn’t such a big bullet to bite (you retain within-choice ordering).
Synthesis
I doubt there is going to be any available synthesis between person-affecting and total views that will get out of trouble. One can get the RC so long as the ‘total term’ has some weight relative to (i.e., is not lexically inferior to) person-affecting wellbeing, because we can just offer massive increases in impersonal welfare that outweigh the person-affecting harm. Conversely, we can keep intransitivity and other costly consequences with a mixed (non-lexically prior) view—indeed, we can downwardly dutch book someone by picking our people with care to get pairwise comparisons, e.g.:
W1: A=10, B=5
W2: B=6, C=2
W3: C=3, A=1
W4: A=2, B=1
Even if you almost entirely weight impersonal welfare and put only a tiny weight on person-affecting harm, we can make sure we reduce total welfare only very slightly between each world, so the drop can be made up for by the person-affecting benefit. It seems the worst of both worlds. I find accepting the total view (and the RC) the best way out.
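A concrete version of the dutch book, using the worlds above and a person-affecting rule that counts only people present in both worlds (again, just one of several ways the view can be filled in):

```python
# Toy model: count welfare changes only for people who exist in BOTH worlds.
def overlap_value_of_move(x: dict, y: dict) -> float:
    return sum(y[p] - x[p] for p in x.keys() & y.keys())

w1 = {"A": 10, "B": 5}
w2 = {"B": 6, "C": 2}
w3 = {"C": 3, "A": 1}
w4 = {"A": 2, "B": 1}

chain = [w1, w2, w3, w4]
for x, y in zip(chain, chain[1:]):
    print(overlap_value_of_move(x, y))  # +1 at every step: each move approved

print(overlap_value_of_move(w1, w4))  # -12: A and B end up far worse than in W1
```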
most person affecting views have big problems with transitivity
That is because I don’t think the person affecting view asks the same question each time (that was the point of Eliezer’s essay). The person-affecting view doesn’t ask “Which society is better, in some abstract sense?” It asks “Does transitioning from one society to the other harm the collective self-interest of the people in the original society?” That’s obviously going to result in intransitivity.
I doubt there is going to be any available synthesis between person-affecting and total views that will get out of trouble… Conversely, we can keep intransitivity and other costly consequences with a mixed (non-lexically prior) view—indeed, we can downwardly dutch book someone by picking our people with care to get pairwise comparisons, e.g.
I think I might have been conflating the “person-affecting view” with the “prior existence” view. The prior existence view, from what I understand, takes the interests of future people into account, but reserves for present people the right to veto the creation of new people if it would seriously harm their current interests. So it is immoral for existing people to create someone with low utility and then refuse to help or share with them because it would harm their self-interest, but it is moral [at least in most cases] for them to refuse to create someone whose existence harms their self-interest.
Basically, I find it unacceptable for ethics to conclude something like “It is a net moral good to kill a person destined to live a very worthwhile life and replace them with another person destined to live a slightly more worthwhile life.” This seems obviously immoral to me. It seems obvious that a world where that person is never killed and lives their life is better than one where they were killed and replaced (although one where they were never born and the person with the better life was born instead would obviously be best of all).
On the other hand, as you pointed out before, it seems trivially right to give one existing person a pinprick on the finger in order to create a trillion blissful lives who do not harm existing people in any other way.
I think the best way to reconcile these two intuitions is to develop a pluralist system where prior-existence concerns have much, much, much larger weight than total concerns, but not infinitely large weight. In more concrete terms, it’s wrong to kill someone and replace them with one slightly better off person, but it could be right to kill someone and replace them with a quadrillion people who lead blissful lives.
This doesn’t completely avoid the RC, of course. But I think that I can accept that. The thing I found particularly repugnant about the RC is that an RC-type world is the best practicable world, i.e., the best possible world that can ever be created given the various constraints its inhabitants face. That’s what I want to avoid, and I think the various pluralist ideas I’ve introduced successfully do so.
You are right to point out that my pluralist ideas do not avoid the RC for a sufficiently huge world. However, I can accept that. As long as an RC world is never the one we should be aiming for I think I can accept it.