A consequentialist agent makes decisions based on the effects they have, as depicted in its map. Different agents may use different maps that describe different worlds, or rather more abstract considerations about worlds that don’t pin down any particular world. Which worlds appear on an agent’s map determines which worlds matter to it, so it seems natural to consider the relevance of such worlds an aspect of the agent’s preference.
The role played by these worlds in an idealized agent’s decision-making doesn’t require them to be “real”, simulated in a “real” world, or even logically consistent. Anything would do for an agent with the appropriate preference; the properties of an impossible world may well matter more than what happens in the real world.
You called attention to the idea that a choice apparently between an effect on the real world and an effect on a simulated world may instead be a choice between effects in two simulated worlds. Why is it relevant whether a certain world is “real” or simulated? In many situations that come up in thought experiments, simulated worlds matter less because they have less measure, in the same way as an outcome predicated on a thousand coins all falling the same way matters less than what happens in all the other cases combined. Following reasoning similar to expected-utility considerations, you would be primarily concerned with the outcomes other than the thousand-tails one; and for the choice between influence in a world that might be simulated as a result of an unlikely collection of events, and influence in the real world, you would be primarily concerned with influence in the real world. So finding out that the choice is instead between two simulated worlds may matter a great deal, shifting the focus of attention from the real world (now unavailable, not influenced by your decisions) to both of the simulated worlds, a priori expected to be similarly valuable.
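The measure argument above can be made concrete with a toy expected-utility calculation. This is my own illustrative sketch, not part of the original argument; the utility numbers are arbitrary placeholders chosen only to show that even a large payoff in the thousand-tails world contributes a negligible term.

```python
# Toy illustration of the measure argument (assumed numbers, not from the text):
# an outcome conditioned on 1000 fair coins all landing tails has measure
# 2**-1000; all other outcomes share the remaining measure.

from fractions import Fraction

p_all_tails = Fraction(1, 2) ** 1000  # measure of the thousand-tails outcome
p_rest = 1 - p_all_tails              # measure of everything else combined

# Hypothetical utilities: a huge payoff in the low-measure world versus a
# modest payoff everywhere else.
u_all_tails = 10**6
u_rest = 1

expected_utility = p_all_tails * u_all_tails + p_rest * u_rest
print(float(expected_utility))  # ~1.0: the thousand-tails term is negligible
```

The point is only that under expected-utility weighting, a world reachable through an unlikely chain of events gets its contribution scaled by that tiny measure, which is why such worlds ordinarily matter less.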
My point was that the next step in this direction is to note that being simulated in an unlikely manner, as opposed to not being simulated at all, is not obviously an important distinction. At some point the estimate of moral relevance may fail to remain completely determined by how a world (as a theoretical construct giving semantics to the agent’s map, or the agent’s preference) relates to some “real” world. At that point, discussing contrived mechanisms that give rise to the simulation may become useless as an argument about which worlds have how much moral relevance, even if we grant that worlds closer to the real world in their origin are much more important in human preference.