It seems weird to say A+ < A on a person-affecting view even when B is unavailable, in virtue of the fact that the people in A now labour under an (unknown to them, and impossible to fulfil) moral obligation to improve the lives of the additional persons. Why stop there? We would seem to suffer infinite harm by failing to bring into existence people we stipulate have positive lives but necessarily cannot exist. The fact that these (unknown, impossible-to-fulfil) obligations are non-local also leads to alien-y reductios. Further, we generally do not want to say that impossible-to-fulfil obligations really obtain, still less that being subject to them harms us. Why believe that?
Intransitivity
I didn’t find the Eliezer essay enlightening, but it is orthodox to say that evaluation should give transitive answers (is A better than A+? is B better than A+?), and most person affecting views have big problems with transitivity. Consider this example:
World 1: A = 2, B = 1
World 2: B = 2, C = 1
World 3: C = 2, A = 1
On a simple person-affecting view, W1 > W2, W2 > W3, and W3 > W1, so we have an intransitive cycle. (There are attempts to dodge this via comparative harm views etc., but set those aside.)
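To see the cycle concretely, here is a minimal sketch of one way the comparison might be cashed out; the scoring rule (count only people in the status-quo world, treat ceasing to exist as losing one’s welfare, ignore newly created people) is my assumption for illustration, not something fixed by the view itself.

```python
# A minimal sketch of a simple person-affecting comparison. Assumptions
# (mine, for illustration): only people in the status-quo world count,
# ceasing to exist counts as losing one's welfare, and newly created
# people are ignored.

worlds = {
    "W1": {"A": 2, "B": 1},
    "W2": {"B": 2, "C": 1},
    "W3": {"C": 2, "A": 1},
}

def person_affecting_change(status_quo, alternative):
    """Net welfare change for the people who exist in the status quo."""
    return sum(alternative.get(person, 0) - welfare
               for person, welfare in status_quo.items())

for src, dst in [("W1", "W2"), ("W2", "W3"), ("W3", "W1")]:
    change = person_affecting_change(worlds[src], worlds[dst])
    print(f"{src} -> {dst}: net change {change:+d}")
# Every transition comes out negative, so W1 > W2, W2 > W3, W3 > W1: a cycle.
```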
One way person-affecting views can avoid normative intransitivity (which seems really bad) is to give normative principles that set how you pick among available worlds. So once you are in a given world (say A), you can say that no option is acceptable that leads to anyone in that world ending up worse off. So once one knows there is a path to B via A+, taking the first step to A+ is unacceptable, but it would be okay if no A+ to B option were available. This violates the independence of irrelevant alternatives and leads to path dependency, but that isn’t such a big bullet to bite (you retain within-choice ordering).
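Here is a rough sketch of how such a selection rule might work; the rule’s details and the welfare numbers for A, A+ and B are my own illustrative assumptions.

```python
# A rough sketch of the proposed selection rule. Assumption (mine): an option
# is unacceptable if some world reachable from it leaves a person in the
# current world worse off than they currently are. Welfare numbers are
# illustrative only.

worlds = {
    "A":  {"existing": 10},
    "A+": {"existing": 10, "extra": 1},
    "B":  {"existing": 7, "extra": 7},
}

def acceptable(current, option, transitions):
    """Veto `option` if any reachable continuation makes a current person worse off."""
    frontier, seen = [option], set()
    while frontier:
        w = frontier.pop()
        if w in seen:
            continue
        seen.add(w)
        for person, level in worlds[current].items():
            if worlds[w].get(person, level) < level:
                return False
        frontier.extend(transitions.get(w, []))
    return True

print(acceptable("A", "A+", {"A+": ["B"]}))  # False: an A+ -> B path is known
print(acceptable("A", "A+", {}))             # True: no A+ -> B option available
```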
Synthesis
I doubt there is going to be any available synthesis between person-affecting and total views that will get out of trouble. One can get the RC so long as the ‘total term’ has some weight (i.e. is not lexically inferior to person-affecting wellbeing), because we can just offer massive increases in impersonal welfare that outweigh the person-affecting harm. Conversely, we still get intransitivity and other costly consequences with a mixed (non-lexically prior) view. Indeed, we can downwardly Dutch book someone by picking our people with care to get the pairwise comparisons we want, e.g.:
W1: A = 10, B = 5
W2: B = 6, C = 2
W3: C = 3, A = 1
W4: A = 2, B = 1
Even if you put almost all your weight on impersonal welfare and only a tiny weight on person-affecting harm, we can make sure total welfare falls only very slightly between each world, so the loss can be made up for by the person-affecting benefit. It seems the worst of both worlds. I find accepting the total view (and the RC) the best way out.
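To make the Dutch book above concrete, here is a toy calculation under assumptions of my own (not spelled out in the original): the person-affecting term counts only people who exist in both worlds, the total term is the change in summed welfare, and the person-affecting term gets a large but not lexically prior weight.

```python
# A rough sketch of the "downward Dutch book" against a mixed view.
# Assumptions (mine): the person-affecting term counts only people who
# exist in both worlds, the total term is the change in summed welfare,
# and the weights below are arbitrary illustrative choices.

worlds = {
    "W1": {"A": 10, "B": 5},
    "W2": {"B": 6, "C": 2},
    "W3": {"C": 3, "A": 1},
    "W4": {"A": 2, "B": 1},
}

PA_WEIGHT, TOTAL_WEIGHT = 100, 1  # person-affecting term dominates, but not lexically

def mixed_score(src, dst):
    """Score of moving from world src to world dst on the mixed view."""
    shared = set(src) & set(dst)
    pa_term = sum(dst[p] - src[p] for p in shared)        # only people in both worlds
    total_term = sum(dst.values()) - sum(src.values())    # impersonal total welfare
    return PA_WEIGHT * pa_term + TOTAL_WEIGHT * total_term

for a, b in [("W1", "W2"), ("W2", "W3"), ("W3", "W4")]:
    print(f"{a} -> {b}: score {mixed_score(worlds[a], worlds[b]):+d} (accepted)")
print(f"Total welfare: W1 = {sum(worlds['W1'].values())}, W4 = {sum(worlds['W4'].values())}")
# Each step scores positive and gets accepted, yet W4 is worse than W1 both
# for everyone who exists in both worlds and in total welfare.
```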
most person affecting views have big problems with transitivity
That is because I don’t think the person affecting view asks the same question each time (that was the point of Eliezer’s essay). The person-affecting view doesn’t ask “Which society is better, in some abstract sense?” It asks “Does transitioning from one society to the other harm the collective self-interest of the people in the original society?” That’s obviously going to result in intransitivity.
I doubt there is going to be any available synthesis between person affecting and total views that will get out of trouble.....Conversely, we can keep intransitivity and other costly consequences with a mixed (non-lexically prior) view—indeed, we can downwardly dutch book someone by picking our people with care to get pairwise comparisons, eg.
I think I might have been conflating the “person affecting view” with the “prior existence” view. The prior existence view, as I understand it, takes the interests of future people into account, but gives present people the right to veto the creation of future people whose existence would seriously harm their current interests. So it is immoral for existing people to create someone with low utility and then refuse to help or share with them on the grounds that doing so would harm their self-interest, but it is moral [at least in most cases] for them to refuse to create someone whose existence would harm their self-interest.
Basically, I find it unacceptable for ethics to conclude something like “It is a net moral good to kill a person destined to live a very worthwhile life and replace them with another person destined to live a slightly more worthwhile life.” This seems obviously immoral to me. It seems obvious that a world where that person is never killed and lives their life is better than one where they were killed and replaced (although one where they were never born and the person with the better life was born instead would obviously be best of all).
On the other hand, as you pointed out before, it seems trivially right to give one existing person a pinprick on the finger in order to create a trillion people with blissful lives who do not harm existing people in any other way.
I think the best way to reconcile these two intuitions is to develop a pluralist system where prior-existence concerns have much, much, much larger weight than total concerns, but not infinitely large weight. In more concrete terms, it’s wrong to kill someone and replace them with one slightly better off person, but it could be right to kill someone and replace them with a quadrillion people who lead blissful lives.
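As a toy illustration of that kind of weighting (the weighting scheme and all numbers below are hypothetical, purely for illustration): suppose harm to a prior-existing person counts a million times as much as impersonal gains from creating new people.

```python
# A toy illustration of a pluralist weighting where prior-existence harms
# count K times as much as impersonal gains from new lives. All numbers
# are hypothetical, chosen only for illustration.

K = 1_000_000              # weight on harm to prior-existing people
victim_welfare = 50        # welfare the killed person would have had
slightly_better_life = 51
blissful_life = 100

def net_value(total_new_welfare):
    """Net value of killing the existing person to create the new people."""
    return total_new_welfare - K * victim_welfare

print(net_value(slightly_better_life))    # hugely negative: impermissible
print(net_value(10**15 * blissful_life))  # positive: could be permissible
```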
This doesn’t completely avoid the RC, of course, but I think I can accept that. The thing I found particularly repugnant about the RC is the claim that an RC-type world is the best practicable world, i.e. the best possible world that can ever be created given the various constraints its inhabitants face. That’s what I want to avoid, and I think the various pluralist ideas I’ve introduced successfully do so.
You are right to point out that my pluralist ideas do not avoid the RC for a sufficiently huge world. However, as long as an RC world is never the one we should be aiming for, I think I can accept that.