Without some perceptible difference, your utility cannot be different.
This definition of “utility” (and your definition of “preference”) is different from the one that most LWers use, different from the one that economists use, and different from the one that (at least some) professional philosophers use.
Economists use it to refer to any preference ordering over worlds, and don’t require it to be defined only over your own experiences. Some ethical theories in philosophy (e.g. hedonistic utilitarianism) define it as a direct function of your experiences, but others (e.g. preference utilitarianism) define it as something that can be affected by things you don’t know about. As evidence for the latter, this SEP page states:
If a person desires or prefers to have true friends and true accomplishments and not to be deluded, then hooking this person up to the experience machine need not maximize desire satisfaction. Utilitarians who adopt this theory of value can then claim that an agent morally ought to do an act if and only if that act maximizes desire satisfaction or preference fulfillment (that is, the degree to which the act achieves whatever is desired or preferred). What maximizes desire satisfaction or preference fulfillment need not maximize sensations of pleasure when what is desired or preferred is not a sensation of pleasure. This position is usually described as preference utilitarianism.
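To make that contrast concrete, here is a minimal sketch (mine, not from the SEP entry or anyone in this thread; the World fields and the scoring rules are purely illustrative) of a utility function defined only over experiences versus one defined over the whole world state:

```python
# Illustrative only: contrast a hedonistic utility function (defined solely over
# the agent's experiences) with a preference-satisfaction utility function
# (defined over the world state, including facts the agent may never observe).

from dataclasses import dataclass

@dataclass(frozen=True)
class World:
    experiences: tuple      # everything the agent ever perceives
    friends_are_real: bool  # a fact the agent might never be able to check

def hedonistic_utility(world: World) -> float:
    # Depends only on the experience stream.
    return float(len(world.experiences))  # stand-in for "total pleasure"

def preference_utility(world: World) -> float:
    # Depends on facts about the world, observed or not.
    return float(len(world.experiences)) + (10.0 if world.friends_are_real else 0.0)

# Two worlds that are indistinguishable from the inside:
deluded = World(experiences=("good day", "good day"), friends_are_real=False)
genuine = World(experiences=("good day", "good day"), friends_are_real=True)

assert hedonistic_utility(deluded) == hedonistic_utility(genuine)  # no perceptible difference
assert preference_utility(deluded) < preference_utility(genuine)   # utility still differs
```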
If you’re a hedonistic utilitarian, feel free to argue for hedonistic utilitarianism, but do that directly instead of making claims about what other people are or aren’t allowed to have preferences about.
I will admit that I find the concept of preferences over indistinguishable or imaginary universes, or over differences between hypothetical universes, incoherent. One can have a preference for invisible pink unicorns, but that preference is neither more nor less satisfied by any actual-world time segment.
If you have a pointer to any literature about utility impact of irrelevant preferences, I’d like to take a look. All I’ve seen in the past is about how preferences irrelevant to a decision should not impact an aggregation result.
Does it help if you don’t think of a ‘preference’ as something ontologically fundamental, but just as a convenient shorthand for something that an agent is optimising for? It’s certainly possible for an agent to optimise for something even if they’ll never receive any evidence of whether they succeeded. gjm gives a few examples in the sibling-comment to mine.
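A minimal sketch of that framing (entirely my own; the actions and probabilities are made up): the agent just ranks actions by how well they’re expected to serve the preference, even though nothing in the ranking corresponds to anything the agent will ever perceive.

```python
# Illustrative only: "preference" as a shorthand for what the agent optimises.
# The agent ranks actions by expected fulfilment of a preference whose outcome
# it will never observe, and still has a well-defined best choice.

ACTIONS = {
    # action: probability that the never-observed outcome the agent cares about obtains
    "launch_time_capsule": 0.9,
    "do_nothing": 0.1,
}

def expected_fulfilment(action: str) -> float:
    # Nothing here refers to anything the agent will ever perceive;
    # it is just the quantity being optimised.
    return ACTIONS[action]

best_action = max(ACTIONS, key=expected_fulfilment)
print(best_action)  # -> "launch_time_capsule", chosen despite zero possible feedback
```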
That’s roughly how I think of preferences. It’s absolutely possible (and, in fact, common) for humans to make choices based on things that have no perceptible existence. It’s harmless (but silly (note: I _LIKE_ silly, in part because it’s silly to do so)) to have such preferences, and usually harmless to act on them.
In the context of the OP, and of world-value comparisons across distinguishable segments of universes, unrealized/undetectable preferences simply have no impact on comparisons between universe-segments that don’t vary with respect to that preference.
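If it helps, here’s the arithmetic version of that claim as I read it (my own sketch; the weights are arbitrary): a preference term that takes the same value in every segment being compared drops out of the comparison.

```python
# Illustrative only: a preference about something undetectable contributes a
# constant to every segment that doesn't vary on it, so it cancels out of any
# comparison between those segments.

def segment_value(observable_score: float, unicorn_exists: bool,
                  unicorn_weight: float = 100.0) -> float:
    return observable_score + (unicorn_weight if unicorn_exists else 0.0)

# Two segments that differ observably but not on the invisible-unicorn fact:
a = segment_value(observable_score=3.0, unicorn_exists=False)
b = segment_value(observable_score=5.0, unicorn_exists=False)

# Same comparison with the unicorn fact flipped in both segments:
a2 = segment_value(observable_score=3.0, unicorn_exists=True)
b2 = segment_value(observable_score=5.0, unicorn_exists=True)

assert b - a == b2 - a2 == 2.0  # the shared, undetectable term cancels
```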
I don’t really understand why preferences about things that you can’t observe are sillier than other preferences, but that’s ok. I mostly wanted to clear up the terminology, and to note that it seems closer to common usage of ‘preference’ and ‘utility’ to say “That’s a silly preference to have, because X, Y, Z” and “I think we should only care about things that can affect us” than to say “Your satisfaction of that preference has nothing to do with their confidence, it’s all about whether you actually find out” and “Without some perceptible difference, your utility cannot be different”.