I will admit that I find the concept of preferences over indistinguishable / imaginary universes or differences in hypothetical universes to be incoherent. One can have a preference for invisible pink unicorns, but that preference is neither more nor less satisfied by any actual-world time segment.
If you have a pointer to any literature about utility impact of irrelevant preferences, I’d like to take a look. All I’ve seen in the past is about how preferences irrelevant to a decision should not impact an aggregation result.
Does it help if you don’t think of a ‘preference’ as something ontologically fundamental, but just as a convenient shorthand for something an agent is optimising for? It’s certainly possible for an agent to optimise for something even if they’ll never receive any evidence of whether they succeeded. gjm gives a few examples in the sibling comment to mine.
That’s roughly how I think of preferences. It’s absolutely possible (and, in fact, common) for humans to make choices based on things that have no perceptible existence. It’s harmless (but silly (note: I _LIKE_ silly, in part because it’s silly to do so)) to have such preferences, and usually harmless to act on them.
In the context of the OP, and of world-value comparisons across distinguishable segments of universes, unrealized/undetectable preferences simply have no impact on comparisons between universe-segments that don’t vary with respect to that preference.
I don’t really understand why preferences about things you can’t observe are any sillier than other preferences, but that’s ok. I mostly wanted to clear up the terminology, and to note that common usage of ‘preference’ and ‘utility’ fits better with saying “That’s a silly preference to have, because X, Y, Z” and “I think we should only care about things that can affect us” than with saying “Your satisfaction of that preference has nothing to do with their confidence; it’s all about whether you actually find out” and “Without some perceptible difference, your utility cannot be different”.