A consequentialist agent makes decisions based on the effect they have, as depicted in its map. Different agents may use different maps that describe different worlds, or rather more abstract considerations about worlds that don’t pin down any particular world. Which worlds appear on an agent’s map determines which worlds matter to it, so it seems natural to consider the relevance of such worlds an aspect of the agent’s preference.
The role played by these worlds in an idealized agent’s decision-making doesn’t require them to be “real”, simulated in a “real” world, or even logically consistent. Anything would do for an agent with the appropriate preference; the properties of an impossible world may well matter more than what happens in the real world.
You called attention to the idea that a choice apparently between an effect on the real world and an effect on a simulated world may instead be a choice between effects in two simulated worlds. Why is it relevant whether a certain world is “real” or simulated? In many situations that come up in thought experiments, simulated worlds matter less because they have less measure, in the same way that an outcome predicated on a thousand coins all falling the same way matters less than what happens in all the other cases combined. Following reasoning similar to expected-utility considerations, you would be primarily concerned with the outcomes other than the thousand-tails one; and for the choice between influence in a world that might be simulated as a result of an unlikely collection of events and influence in the real world, you would be primarily concerned with influence in the real world. So finding out that the choice is instead between two simulated worlds may matter a great deal, shifting the focus of attention from the real world (now unavailable, not influenced by your decisions) to both of the simulated worlds, a priori expected to be similarly valuable.
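The measure comparison above can be made concrete with a little arithmetic. A minimal sketch, using exact rational arithmetic so the tiny probability doesn’t get lost to rounding (the thousand-coin setup is just the example from the paragraph above):

```python
from fractions import Fraction

# Probability that a thousand fair coins all fall the same given way.
p_thousand_tails = Fraction(1, 2) ** 1000

# Measure of everything else combined.
p_rest = 1 - p_thousand_tails

# The thousand-tails outcome carries a vanishingly small share of the
# total measure (about 9.3e-302), so expected-utility reasoning all but
# ignores it in favor of the other cases combined.
print(float(p_thousand_tails))
print(p_rest > Fraction(999, 1000))  # True
```

The same shape of argument is what discounts a world that is only reached via an unlikely chain of events leading to its simulation.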
My point was that the next step in this direction is to note that being simulated in an unlikely manner, as opposed to not being simulated at all, is not obviously an important distinction. At some point the estimate of moral relevance may fail to remain completely determined by how a world (as a theoretical construct giving semantics to the agent’s map, or to its preference) relates to some “real” world. At that point, discussing contrived mechanisms that give rise to the simulation may become useless as an argument about which worlds have how much moral relevance, even if we grant that the worlds closer to the real world in their origin are much more important in human preference.
Here is my attempt to rephrase Vladimir’s comment:
Consider a possible world W that someone could simulate, but which, in fact, no one ever will simulate. An agent A can still care about what happens in W. The agent could even try to influence what happens in W acausally.
A natural rejoinder is: how is A going to influence W unless A itself simulates W? How else can A play out the acausal consequences of its choices?
The reply is: A can have some idea about what happens in W without reasoning about W in so fine-grained a way as to deserve the word “simulation”. Coarse-grained reasoning could still suffice for A to influence W.
For example, recall Vladimir’s counterfactual mugging:
Imagine that one day, Omega comes to you and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don’t want to give up your $100. But see, Omega tells you that if the coin came up heads instead of tails, it’d give you $10,000, but only if you’d agree to give it $100 if the coin came up tails.
Now consider a variant in which, in the counterfactual heads world, instead of giving you $10,000, Omega would have given you an all-expenses-paid month-long vacation to the destination of your choice.
You don’t need to simulate all the details of how that vacation would have played out. You don’t even need to simulate where you would have chosen to go. (And let us assume that Omega also never simulates any of these things.) Even if no such simulations ever run, you might still find the prospect of counterfactual-you getting that vacation so enticing that you give Omega the $100 in the actual tails world.
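The coarse-grained point can be put in expected-value terms. A minimal sketch, evaluating the two policies before the coin is tossed; the $5,000 figure for the vacation is a made-up coarse estimate, and the point is precisely that any rough figure well above the $100 cost suffices, with no detailed simulation of the vacation needed:

```python
# Policy comparison for the counterfactual mugging, evaluated ex ante.
P_HEADS = 0.5
COST_IF_TAILS = 100          # what you hand over in the tails world
VACATION_ESTIMATE = 5_000    # coarse, assumed guess at the vacation's worth

def expected_value(pays_when_tails: bool) -> float:
    """Expected value of a policy, fixed before the coin is tossed."""
    if pays_when_tails:
        # Heads: Omega rewards you; tails: you pay $100.
        return P_HEADS * VACATION_ESTIMATE + (1 - P_HEADS) * (-COST_IF_TAILS)
    # A refuser gets nothing in either branch.
    return 0.0

print(expected_value(True))   # 2450.0
print(expected_value(False))  # 0.0
```

The paying policy dominates under any coarse estimate of the vacation’s value above $100, which is why no fine-grained simulation of the counterfactual branch is required.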
I’ve read this twice and failed to parse… Do you mind rephrasing in a clearer way, maybe with examples or something?