One definite exception is “should I say that many-worlds is true?”
Eliezer’s claimed exception is “Average utilitarianism suddenly looks a lot more attractive—you don’t need to worry about creating as many people as possible, because there are already plenty of people exploring person-space. You just want the average quality of life to be as high as possible, in the future worlds that are your responsibility.”
This argument doesn’t seem very strong to me. I could just as well say, “I don’t need to worry about a high average quality of life, because the average is fixed, and is as high as it can be in any case. I just want to see as many people in my world as I can, in the worlds that are my responsibility.”
It looks to me like Eliezer already preferred average utilitarianism even before knowing about many-worlds, or at least independently of this fact, and is using many-worlds to justify his preference.
Eliezer has argued in the past against discount rates, and with some reasonableness, whether or not this is ultimately correct (I don’t know). But the principles of that argument would imply that we should also discount the value of people in the worlds we are not in; and so, given that the average utility over all worlds is constant, average utilitarianism implies that our choice of worlds does not matter, which implies that none of our choices matter.
Besides (in the usual single world): is Eliezer willing to kill off everyone except the happiest person, thereby raising the average?
If you’re averaging over time as well as space, that isn’t an option. All the people you kill will just drag down your average, and the one person who is really happy at the end of it all will barely register in the grand scheme of lives across time. In practice, average utilitarianism reduces to regular old utilitarianism, just with the zero point set at the average utility of a life across a history much vaster than you can affect, instead of at nonexistence or whatever.
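A toy calculation (with entirely made-up utility numbers) illustrates why the kill-everyone move fails once you average over all lives across history: the curtailed lives enter the average at low values, and one very happy survivor barely moves a denominator that includes every life ever lived.

```python
# Toy sketch, hypothetical numbers: average utility over ALL lives
# across history, not just the people alive at the end.

# Lives already lived, far beyond anyone's ability to affect.
past_lives = [5.0] * 1_000_000

# Option A: an ordinary future with many lives of ordinary quality.
future_a = [5.0] * 1_000

# Option B: kill everyone except one very happy person.
# The cut-short lives enter the average at low utility.
future_b = [1.0] * 999 + [100.0]

avg_a = sum(past_lives + future_a) / (len(past_lives) + len(future_a))
avg_b = sum(past_lives + future_b) / (len(past_lives) + len(future_b))

print(avg_a, avg_b)  # option B drags the all-history average down
```

With these numbers, option A leaves the average at 5.0 while option B lowers it, since the 999 curtailed lives outweigh the single happy survivor across a million-life history.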
Perhaps average utilitarianism would consider a world which only ever had one super happy person in it as better than our world. But that seems less obviously false to me than the idea that we should kill everyone to achieve it, which average utilitarianism, properly considered, wouldn’t recommend.
I agree that many-worlds has little bearing on this question, though, unless it’s to claim that you should expect the effective zero point to be different because, for whatever reason, you think our branch is particularly good or particularly bad.