I have a couple of intuitions about the structure of human preferences over a large universe.
The first intuition is that your preferences over one part of the universe (or universe-history) should be independent of what happens in another part of the universe, if the “distance” between the two parts is great enough. In other words, if you prefer A happening to B happening in one part of the universe, this preference shouldn’t be reversed no matter what you learn about a distant part of the universe. (“Distance” might be spatial, temporal, or possibly involving other notions like distance between Everett branches.)
The above implies that your preferences can be approximated by partitioning the universe into regions, valuing the regions independently and then summing up the individual values. Errors may occur at borders of such regions, but they can be made arbitrarily small by making the regions large enough.
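The additive approximation above can be sketched in a few lines of code. This is only a toy illustration under made-up assumptions: the "universe" is a list of regions, and the per-region value function is a hypothetical stand-in, not anything the post specifies.

```python
# Toy illustration of the additive approximation: value each region
# independently, then sum. The regions and the local value function
# are hypothetical stand-ins for illustration only.

def region_value(region):
    # Hypothetical local valuation: count "good" events minus "bad"
    # events recorded in the region.
    return region.count("good") - region.count("bad")

def total_value(universe_regions):
    # Approximate the overall preference by summing independent
    # per-region values, as the first intuition suggests.
    return sum(region_value(r) for r in universe_regions)

regions = [["good", "good", "bad"], ["bad"], ["good"]]
print(total_value(regions))  # 1 + (-1) + 1 = 1
```

The key property here is that changing one region's contents never changes another region's contribution to the sum, which is exactly the independence the first intuition demands.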
The second intuition is that your preferences are such that, when approximated by such a valuing method, each independent region is valued the same way. In other words, if U(A)>U(B) holds in one region, then U(A)>U(B) holds in every other region as well.
There may be better ways of stating or formalizing what I mean (improvements are welcome), but I hope it gets the point across. I’m interested in what others think. If these are reasonable assumptions, perhaps some interesting implications can be derived from them.
P.S. The first intuition directly rules out average utilitarianism. Under average utilitarianism, my life might be judged not worth living simply because a bunch of people on the opposite side of the universe happen to be having more fun.
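The arithmetic behind the P.S. is easy to check with toy numbers. These figures (welfare 10 for each distant person, welfare 5 for my life) are invented for illustration; any positive welfare below the distant average gives the same result.

```python
# Toy numbers illustrating the P.S.: under average utilitarianism,
# whether adding a life of positive welfare counts as an improvement
# depends on how well-off distant people are. All figures are
# hypothetical, chosen only to make the arithmetic concrete.

def average(utilities):
    return sum(utilities) / len(utilities)

distant_people = [10.0, 10.0, 10.0]  # hypothetical far-away population
my_life = 5.0                        # positive welfare on its own

without_me = average(distant_people)           # (10+10+10)/3 = 10.0
with_me = average(distant_people + [my_life])  # (10+10+10+5)/4 = 8.75

print(with_me < without_me)  # True: adding my life lowers the average
```

So a life that is clearly worth living in isolation lowers the universe-wide average, and average utilitarianism counts its addition as bad, violating the independence intuition.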