Do you generally endorse average utilitarianism? E.g., if you can press a button to create a new world, completely isolated from all others, containing 10^10 people 10x happier than typical present-day Americans, do you press it if what currently exists is a world with 10^10 people only 9x happier than typical present-day Americans and refrain from pressing it if it’s 11x instead?
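The arithmetic behind the question can be made explicit. A minimal sketch (populations and happiness multipliers are taken from the scenario above; the function name is just for illustration):

```python
# Each world is a (population, happiness) pair, with happiness measured as
# a multiple of a typical present-day American's. Average utilitarianism
# evaluates the population-weighted mean across all worlds that exist.
def average_utility(worlds):
    total = sum(pop * h for pop, h in worlds)
    people = sum(pop for pop, _ in worlds)
    return total / people

new_world = (10**10, 10)

# Existing world at 9x: pressing the button raises the average (9 -> 9.5).
print(average_utility([(10**10, 9), new_world]))   # 9.5

# Existing world at 11x: pressing the button lowers it (11 -> 10.5).
print(average_utility([(10**10, 11), new_world]))  # 10.5
```

A total utilitarian, by contrast, presses the button in both cases, since total utility rises either way.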
First of all, the creation of people is a complex moral decision. Whether you espouse average utilitarianism, total utilitarianism, or any other moral theory, if you ask someone “Would you press a button that would create a person?”, they’d normally be HESITANT, whether you said it would be a very happy person or a moderately happy one. We tend to think of creating people as a big deal, one that brings great responsibility.
Secondly, my average utilitarianism is about the satisfaction of preferences, not happiness, though this may seem a nitpick.
Thirdly, I can’t help but notice that your example involves creating a world that in reality would increase average utility, while the hypothetical stipulates that in this particular case it would decrease it. This feels like a scenario designed to confuse moral intuition into giving the wrong answer.
So, using the current reality instead (rather than the one where people are 9x happier): would I choose to create another universe happier than this one? In general, yes. Would I choose to create another universe half as happy as this one? In general, no, not unless the presence of that universe would provide us with some additional value, enough to make up for the loss in average utility.
the creation of people is a complex moral decision
True enough. But it seems to me that hesitation in such cases usually stems from uncertainty either about whether the new people would really have good lives or about their effect on those around them. In the scenarios I described, everyone involved gets a good life even once their interactions with others are taken into account. So yes, creating lives is complex, but I don’t see that that invalidates my question at all.
preferences, not happiness
That happens to be my, er, preference too. I do think it’s a nitpick; we can just take “10x happier” as shorthand for some corresponding statement about preferences.
designed to confuse the moral intuition
I promise I had absolutely no such intention. I took the levels higher than typical ones in our world to avoid distracting digressions about whether the typical life in our world is in fact better than nothing. (Note that this isn’t the same question as whether it’s worth continuing such a life once it’s already in progress.)
Your example of a world half as happy as this one seems to have a similar but opposite problem: depending on what “half as happy” actually means, you might be describing a change that would be rejected by total utilitarianism as well as by average utilitarianism. That’s the problem I was trying to avoid.
Would I choose to create another universe happier than this one? In general, yes.
Okay. Now I reveal that just yesterday we discovered yet another universe, which already exists and is a lot happier than the one you would choose to create. In fact it’s so much happier that creating your universe would now drive the average down instead of up.
If you’re using average utility, then whether this discovery has been made affects whether you want to create that other universe. Is that correct?
If you’re using average utility, then whether this discovery has been made affects whether you want to create that other universe. Is that correct?
With the standard caveats, yes, that seems reasonable. Given the existence of that ultrahappy universe, an average human life is more likely to be lived in happy circumstances than it would be in the multiversal reality that would result if I chose to add that less-than-averagely-happy universe.
Same way as I’d not take 20% of actual existing happy people and force them to live less happy lives.
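The sign flip being probed here can be checked with toy numbers (the happiness values and the equal-population assumption are mine, purely for illustration):

```python
def avg(levels):
    # Mean happiness across universes, assuming equal populations.
    return sum(levels) / len(levels)

ours, candidate, ultrahappy = 1.0, 2.0, 10.0  # assumed happiness levels

# Before the discovery: adding the candidate universe raises the average.
print(avg([ours, candidate]) > avg([ours]))                          # True: 1.5 > 1.0

# After the discovery: the very same candidate drags the average down.
print(avg([ours, ultrahappy, candidate]) < avg([ours, ultrahappy]))  # True: ~4.33 < 5.5
```

So under averagism the decision about the candidate universe really does depend on what else happens to exist, which is exactly the point the question is pressing on.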
Think of all sentient lives as if they were part of a single mind, called “Sentience”. We design portions of Sentience’s life. We want as large a proportion of Sentience’s existence as possible to be as happy as possible, satisfying Sentience’s preferences.
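One toy way to formalize that picture (the structure here is my own sketch, not anything established): treat each life as a span of experienced time with a satisfaction level, and take Sentience’s welfare to be satisfaction averaged over all experienced time.

```python
# Each life is a (duration, satisfaction) pair; Sentience's welfare is the
# experience-weighted mean satisfaction. On this picture, averagism says:
# adding a life helps only if its satisfaction beats the current mean.
def sentience_welfare(lives):
    experienced = sum(d for d, _ in lives)
    return sum(d * s for d, s in lives) / experienced

print(sentience_welfare([(80, 0.9), (80, 0.9)]))             # 0.9

# Adding a life below the current mean lowers Sentience's welfare.
print(sentience_welfare([(80, 0.9), (80, 0.9), (80, 0.4)]))  # ~0.733
```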