As remarked many times on this site and elsewhere, if you are given evidence that Omega is capable of simulating an environment as rich as our observed Universe, you should apply the Copernican principle and assign high probability that our world is not special and is already a simulation. The Matrix-like dualism (real/simulated) is a very low-probability alternative, which only seems likely because we are used to anthropocentrically thinking of our world as “real”.
Once you realize that, Option 1 becomes “pick a different simulation” and Option 2 becomes “improve the current simulation”.
This is a very succinct and clear phrasing. In this form it seems clear to me that the choice depends on individual preferences and character.
A proponent might argue: ‘the current simulation is a hopeless case, so why stay?’ An opponent might counter: ‘you are running away from your responsibilities.’
Note that this is nearly isomorphic to the standard moral question of emigration, once you drop the no-longer-useful qualifier “simulation”. Is it immoral and unpatriotic to leave your home country and try your luck elsewhere? (Provided you cannot influence your former reality once you leave.)
That’s not quite the question I am trying to convey with my conundrum. What I wanted Option 1 and Option 2 to represent is a hypothetical conflict in which you must choose between maximizing your utility potential at the cost of living in a simulation and maximizing your knowledge of the truth in this reality. My point in sharing this scenario did not have anything to do with the probability of such a scenario occurring. Everybody is free to interpret my scenario any way they like, but I just wanted to explain what I had in mind. Thank you for your criticism and ideas, by the way.
Which simulations (or “real worlds”) matter (and how much) depends on one’s preference. A hypothetical world that’s not even being simulated may theoretically matter more than any real or simulated world, in the sense that an idealized agent with that preference would make decisions that are primarily concerned with optimizing properties of that hypothetical world (and won’t care what happens in other real or simulated worlds). Such an agent would need to estimate the consequences of its decisions in the hypothetical world, but this estimate doesn’t need to be particularly detailed, just as thinking with a human brain doesn’t constitute a simulation of the real world. (Also, the agent itself doesn’t need to exist in a “real” or simulated world for the point about its preference being concerned primarily with hypotheticals to hold.)
I’ve read this twice and failed to parse it… Would you mind rephrasing it more clearly, maybe with examples or something?
A consequentialist agent makes decisions based on the effects they have, as depicted in its map. Different agents may use different maps that describe different worlds, or rather more abstract considerations about worlds that don’t pin down any particular world. Which worlds appear on an agent’s map determines which worlds matter to it, so it seems natural to consider the relevance of such worlds an aspect of the agent’s preference.
The role played by these worlds in an idealized agent’s decision-making doesn’t require them to be “real”, simulated in a “real” world, or even logically consistent. Anything would do for an agent with the appropriate preference; the properties of an impossible world may well matter more than what happens in the real world.
You called attention to the idea that a choice apparently between an effect on the real world and an effect on a simulated world may instead be a choice between effects in two simulated worlds. Why is it relevant whether a certain world is “real” or simulated? In many situations that come up in thought experiments, simulated worlds matter less because they have less measure, in the same way that an outcome predicated on a thousand coins all falling the same way matters less than what happens in all the other cases combined. Following reasoning similar to expected-utility considerations, you would be primarily concerned with the outcomes other than the thousand-tails one; and for the choice between influence in a world that might be simulated as a result of an unlikely collection of events and influence in the real world, you would be primarily concerned with influence in the real world. So finding out that the choice is instead between two simulated worlds may matter a great deal, shifting the focus of attention from the real world (now unavailable, not influenced by your decisions) to both of the simulated worlds, a priori expected to be similarly valuable.
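To make the measure argument concrete, here is a minimal sketch of the expected-utility bookkeeping, with invented utility numbers (the 0.5^1000 factor stands in for the measure of the thousand-tails world):

```python
# Toy bookkeeping for the measure argument above. The utilities are
# invented; only the relative magnitudes matter for the illustration.

p_sim = 0.5 ** 1000     # measure of the thousand-tails (simulated) world
p_real = 1 - p_sim      # measure of all the other outcomes combined

u_influence_sim = 1e6   # utility of influence in the low-measure world
u_influence_real = 1.0  # utility of influence in the real world

# Weighted by measure, even a huge payoff in the low-measure world is
# dwarfed by a modest payoff everywhere else:
print(p_sim * u_influence_sim)    # ~9.3e-296, negligible
print(p_real * u_influence_real)  # ~1.0, dominates the comparison

# But if you learn that both options are simulated worlds of similar
# measure, neither term dominates, and the comparison starts over.
```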
My point was that the next step in this direction is to note that being simulated in an unlikely manner, as opposed to not even being simulated, is not obviously an important distinction. At some point, the estimate of moral relevance may fail to remain completely determined by how a world (as a theoretical construct giving semantics to the agent’s map, or the agent’s preference) relates to some “real” world. At that point, discussing contrived mechanisms that give rise to the simulation may become useless as an argument about which worlds have how much moral relevance, even if we grant that worlds closer to the real world in their origin are much more important in human preference.
Here is my attempt to rephrase Vladimir’s comment:
Consider a possible world W that someone could simulate, but which, in fact, no one ever will simulate. An agent A can still care about what happens in W. The agent could even try to influence what happens in W acausally.
A natural rejoinder is: how is A going to influence W unless A itself simulates W? How else can A play out the acausal consequences of its choices?
The reply is that A can have some idea about what happens in W without reasoning about W in so fine-grained a way as to deserve the word “simulation”. Coarse-grained reasoning could still suffice for A to influence W.
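A minimal sketch of that distinction, with invented numbers: the coarse model below summarizes W in a single statistic per choice, rather than simulating anything in W.

```python
# Hypothetical illustration: agent A reasons about world W at a coarse
# grain. Nothing here steps W forward or models its inhabitants; the
# model is just a crude per-choice summary (numbers invented).

def coarse_value_of_W(action: str) -> float:
    """Crude one-number estimate of how well things go in W if A takes
    `action`. A two-entry lookup table, not a simulation of W."""
    return {"keep_promise": 0.9, "break_promise": 0.4}[action]

# A picks whichever action looks better under the coarse model.
best = max(["keep_promise", "break_promise"], key=coarse_value_of_W)
print(best)  # keep_promise: enough to guide A's choice about W
```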
For example, recall Vladimir’s counterfactual mugging:
Imagine that one day, Omega comes to you and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don’t want to give up your $100. But see, Omega tells you that if the coin came up heads instead of tails, it’d give you $10,000, but only if you’d agree to give it $100 if the coin came up tails.
Now consider a variant in which, in the counterfactual heads world, instead of giving you $10,000, Omega would have given you an all-expenses-paid month-long vacation to the destination of your choice.
You don’t need to simulate all the details of how that vacation would have played out. You don’t even need to simulate where you would have chosen to go. (And let us assume that Omega also never simulates any of these things.) Even if no such simulations ever run, you might still find the prospect of counterfactual-you getting that vacation so enticing that you give Omega the $100 in the actual tails world.
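For concreteness, here is a sketch of the ex-ante arithmetic that makes paying attractive in the quoted version; the vacation variant just replaces the $10,000 with a coarse estimate of the vacation’s value.

```python
# Expected value of committing to pay Omega, evaluated before the flip.
p_heads = 0.5
payoff_heads = 10_000   # what counterfactual-you would receive
payoff_tails = -100     # what actual-you pays in the tails world

ev_commit = p_heads * payoff_heads + (1 - p_heads) * payoff_tails
print(ev_commit)        # 4950.0: committing beats refusing (EV 0)

# Vacation variant: swap payoff_heads for a coarse estimate of the
# vacation's value to you. No simulation of the trip is required to
# see whether the commitment is worth $100.
```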