(That is, if you cared about something closer to the reality of what happens to your sister, rather than your experience of it, you’d have hesitated in that choice long enough to ask Omega whether she would prefer death to being imprisoned on Mars.)
Be charitable in your interpretation, and remember the Least Convenient Possible World principle. I was presuming that the setup was such that being alive on Mars wouldn’t be a ‘fate worse than death’ for her; if it were, I’d choose differently. If you prefer, take the same hypothetical but with me on Mars, choosing whether she stayed alive on Earth; or let choice B include subjecting her to an awful fate rather than death.
That is, the model you make of the future may refer to a hypothetical reality, but the thing you actually evaluate is not that reality, but your own reaction to that reality—your present-tense experience in response to a constructed fiction made of previous experiences.
I would say rather that my reaction is my evaluation of an imagined future world. The essence of many decision algorithms is to model possible futures and compare them to some criteria. In this case, I have complicated unconscious affective criteria for imagined futures (which dovetail well with my affective criteria for states of affairs I directly experience), and my affective reaction generally determines my actions.
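The decision pattern described here can be sketched in a few lines. This is a toy illustration with hypothetical names (`choose_action`, `predict_future`, `affective_score` are all invented for the example), showing the key point: the evaluator never touches the future itself, only a constructed model of it.

```python
def choose_action(actions, predict_future, affective_score):
    """Pick the action whose *imagined* outcome scores highest.

    predict_future: action -> a model of the resulting world
    affective_score: world-model -> a number (the 'reaction' to that model)
    """
    return max(actions, key=lambda a: affective_score(predict_future(a)))

# Toy usage: the criteria apply to imagined futures, not to reality directly.
futures = {"save": {"sister_alive": True}, "ignore": {"sister_alive": False}}
score = lambda world: 1.0 if world["sister_alive"] else 0.0
print(choose_action(["save", "ignore"], futures.get, score))  # -> save
```

Note that nothing in this sketch requires the criteria to be *about* the agent's own experience; they are simply functions over world-models, which is the point at issue.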
We do not have preferences that are not about experience or our emotional labeling thereof; to the extent that we have “rational” preferences it is because they will ultimately lead to some desired emotion or sensation.
To the extent this is true (in the sense of my previous sentence), it is a tautology. I understand what you’re arguing against: the notion that what we actually execute matches a rational consequentialist calculus of our conscious ideals. I am not asserting this; I believe that our affective algorithms often operate under more selfish and basic criteria, and that they fixate on the most salient possibilities instead of weighing probabilities properly, among other things.
However, these affective algorithms do appear to respond more strongly to certain facets of “how I expect the world to be” than to facets of “how I expect to think the world is” when the two conflict (with an added penalty for the expectation of being deceived), and I don’t find that problematic on any level.
If you prefer, take the same hypothetical but with me on Mars, choosing whether she stayed alive on Earth; or let choice B include subjecting her to an awful fate rather than death.
As I said, it’s still going to be about your experience during the moments until your memory is erased.
I understand what you’re arguing against: the notion that what we actually execute matches a rational consequentialist calculus of our conscious ideals.
I took that as a given, actually. ;-) What I’m really arguing against is the naive self-applied mind projection fallacy that causes people to see themselves as decision-making agents—i.e., beings with “souls”, if you will. Asserting that your preferences are “about” the territory is the same sort of error as saying that the thermostat “wants” it to be a certain temperature. The “wanting” is not in the thermostat, it’s in the thermostat’s maker.
Of course, it makes for convenient language to say it wants, but we should not confuse this with thinking the thermostat can really “want” anything but for its input and setting to match. And the same goes for humans.
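The thermostat analogy can be made concrete with a toy sketch (illustrative only; the class and names are invented for the example). The entire "preference" is a comparison of input to setting; the goal of keeping the room warm lives in whoever chose the setpoint.

```python
class Thermostat:
    """A toy controller: all it 'wants' is for reading to match setpoint."""

    def __init__(self, setpoint):
        self.setpoint = setpoint  # chosen by the thermostat's maker or user

    def act(self, reading):
        # The whole 'preference' is this comparison of input to setting.
        if reading < self.setpoint:
            return "heat on"
        return "heat off"

t = Thermostat(setpoint=20.0)
print(t.act(18.5))  # -> heat on
print(t.act(21.0))  # -> heat off
```

The "wanting the room to be warm" is nowhere in this code; there is only the error between input and setting, which is the analogy being drawn.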
(This is not a mere fine point of tautological philosophy; human preferences in general suffer from high degrees of subgoal stomp, chaotic loops, and other undesirable consequences arising as a direct result of this erroneous projection. Understanding the actual nature of preferences makes it easier to dissolve these confusions.)