On the Impossibility of Interaction with Simulated Agents

Is it possible to interact with simulated agents?

It is certainly possible from the perspective of the simulator. It could make changes to the state of the simulated environment, and run the simulation rules forward to see what would happen.

It can’t be done

But consider the perspective of the simulated. Each simulated agent’s experience is the sum of all possible ways that that agent’s subjective experience could happen, including all possible ways it could exist in fundamental base reality, and all possible ways it could be simulated.

The total measure of the cases in which any particular simulator chooses to intervene in the simulation in any particular way is almost certainly so small as to be discountable for any agent with legitimate concerns of its own. It is far more likely, from the agent’s perspective, that it will die of a sudden failure of a critical body component, in a way completely consistent with the physical laws it has observed up to that point, than that God will appear to it.
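
To make that slightly more concrete (a rough sketch in my own ad hoc notation, which nothing in the argument depends on): write $E$ for a fixed subjective experience, $W(E)$ for the set of ways that experience could be had, and $\mu$ for a measure over those ways. The weight the agent should give to any class of ways $C \subseteq W(E)$ is then

\[ P(C \mid E) = \frac{\mu(C)}{\mu(W(E))}. \]

The claim is that when $C$ is “ways in which some particular simulator applies some particular intervention,” $\mu(C)$ is a vanishingly small fraction of $\mu(W(E))$.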

But what if I do it anyway?

Suppose, however, that it is God’s intention to appear, and He does so by altering the simulation state appropriately. Consider the subsequent situation from the perspective of the simulated agent.

The agent perceives something in the world that it thinks is a communication from outside the world. It is, once again, a member of the set of all possible ways the agent could be having the same subjective experience. What measure-fraction of agents who think this are actually in simulations subject to that particular intervention? Given that the simulated agents are, like ourselves, sufficiently shoddily constructed, there are a large number of ways an agent could become convinced that it had observed divine intervention that completely accord with the physical laws that have governed the simulation up to that point.

Even supposing the agent is particularly well-built, smart, and rational, and very little of its subjective experience is out of correspondence with its environment, the ways in which verifiably real, highly unlikely phenomena could be genuine signs from God the Simulator need to compete against the ways in which those phenomena could be merely highly unlikely mundane occurrences. Is it more likely that the agent is living in a simulation where someone outside it has elected to perform a miracle, or that something within the agent’s universe (perhaps another conspecific agent, a space alien, a quantum fluctuation, or a previously undiscovered physical effect) has caused the phenomenon? Even if one regards simulated universes as quite common, only the ways to have this subjective experience in simulated universes that have had this particular intervention applied count here, whereas ways to have the same subjective experience in simulations without the intervention contribute to the other side of the question.

(Of course, all of these have caused the phenomenon, to some degree, because we are indexing on the composite of all agents undergoing this particular subjective experience.)
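
In the same sketch notation as above, the agent observing an apparent miracle is comparing two subclasses of $W(E)$, where $E$ now includes the observation of the miracle: the ways that involve a genuine outside intervention, and the ways in which something inside the universe (or inside the agent) produced the appearance of one. The relevant odds are roughly

\[ \frac{P(\text{intervention} \mid E)}{P(\text{mundane} \mid E)} = \frac{\mu(\text{ways involving that intervention})}{\mu(\text{ways without it})}, \]

and the preceding paragraphs amount to the claim that the denominator dominates: shoddy perception, other agents, aliens, rare physics, and simulations that were never intervened in all pile into it, while only simulations whose operators applied that exact intervention contribute to the numerator.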

But what if I do it a lot?

Some thinkers have attempted to get around the problem of a particular simulation being unlikely from a simulated agent’s perspective by multiplying the count of the simulated agents, as in Simulation Capture or The Dr. Evil Problem. Variously, either you or Dr. Evil is simulated repeatedly, and those simulated copies are threatened with torture, to try to unify your subjective experience with one in which you ought to believe you will experience that torture with nontrivial probability, depending on how you act given that experience. This does not in fact work.

Thus far I’ve dealt informally with the mathematics of the relative measures of distinct ways to have a fixed subjective experience, as a way to model what additional elements might come into that subjective experience when it extends itself into its subjective future. In Bayesian terms, there is a missing prior over ways to have a subjective experience. Independent of the count of distinct ways matching a certain criterion, some ways to have a subjective experience might have more “realness” than others.
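
One speculative way to state the missing piece, continuing the sketch notation from earlier: attach a “realness” weight $r(w)$ to each way $w$ of having the experience and replace the bare measure with a weighted one,

\[ \mu_r(C) = \sum_{w \in C} r(w), \qquad P(C \mid E) = \frac{\mu_r(C)}{\mu_r(W(E))}, \]

so that the earlier ratios are just the special case where $r$ is constant. Nothing in this formalism says where $r$ comes from.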

But any method of attributing a realness measure to ways to have subjective experiences would have to work from the perspective of the agent having the experiences in order to matter to it. Anything based on simulation count might be well-formed from the perspective of the simulator, but is uninterpretable from the perspective of the simulated, who is, remember, unified over all ways in which it could be having its given subjective experience.

So what can I do?

A propensity for simulators to create many simulations might matter to a potentially-simulated agent, though it leaves open the problem of how realness is to be distributed over the simulators. An operationalized model of the Mathematical Universe theory might allow a subjective experience to acquire reference to how realness is distributed among the types of ways it might be being had. Acausal trade with the ensemble of potential simulators, weighted according to their contributions of realness to a subjective experience, might be possible.

Overall, though, simulated agents are essentially subjectively immune to interaction with their simulators.