Does the simulation argument even need simulations?

The simulation argument, as I understand it:

  1. Subjectively, existing as a human in the real, physical universe is indistinguishable from existing as a simulated human in a simulated universe

  2. Anthropically, there is no reason to privilege one over the other: if there exist k real humans and l simulated humans undergoing one’s subjective experience, one’s probability of being a real human is k/(k+l)

  3. Any civilization capable of simulating a universe is quite likely to simulate an enormous number of them

    1. Even if most capable civilizations simulate only a few universes for e.g. ethical reasons, civilizations that have no such concerns could simulate such enormous numbers of universes that the expected number of universes simulated by any simulation-capable civilization is still huge

  4. Our present civilization is likely to reach the point where it can simulate a universe reasonably soon

  5. By 3. and 4., there exist (at some point in history) huge numbers of simulated universes, and therefore huge numbers of simulated humans living in simulated universes

  6. By 2. and 5., our odds of being real humans are tiny (unless we reject 4, by assuming that humanity will never reach the stage of running such simulations); see the worked example below
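
    To make the quantitative shape of 6. concrete, here is a worked example with purely illustrative numbers (the values of k and l below are assumptions picked for the arithmetic, not estimates):

    ```latex
    % Illustrative only: suppose k = 10^{10} real humans and l = 10^{15}
    % simulated humans are undergoing the same subjective experience.
    P(\mathrm{real}) = \frac{k}{k+l} = \frac{10^{10}}{10^{10}+10^{15}} \approx 10^{-5}
    ```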

    When we talk about a simulation we’re usually thinking of a computer; crudely, we’d represent the universe as a giant array of bytes in RAM, and have some enormously complicated program that could compute the next state of the simulated universe from the previous one[1]. Fundamentally, we’re just storing one big number, then performing a calculation and storing another number, and so on. In fact our program is itself just another number (witness the DeCSS “illegal prime”). This is effectively the GLUT concept applied to the whole universe.
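
    As a minimal sketch of this picture (the names UniverseState, step and history are placeholders invented here, not a proposal for how a real simulator would be structured), the entire history of the simulated universe is nothing more than repeated application of one pure function to one value:

    ```haskell
    -- The universe as "one big number", and the laws of physics as a pure
    -- function from each state to the next.
    type UniverseState = Integer

    -- Stand-in for the enormously complicated real transition function.
    step :: UniverseState -> UniverseState
    step s = s + 1

    -- The whole history is just the step function applied over and over.
    history :: UniverseState -> [UniverseState]
    history initialState = iterate step initialState
    ```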

    But numbers are just… numbers. If we have a computer calculating the Fibonacci sequence, it’s hard to see that running the calculating program makes the sequence any more real than if we had just conceptualized the rule[2] - or even, to a mathematical Platonist, if we’d never thought of it at all. And we do know the rule for our own universe (modulo having a theory of quantum gravity), and the initial state of the universe is (to the best of our knowledge) small and simple enough that we could write it down, or at least write down the initial state of a similar but subtly different universe. At that point, what we have seems in some sense to be a simulated universe, just as real as if we’d run a computer to calculate it all.
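
    For concreteness, the Fibonacci “rule” can be written down in a single line of Haskell; it is hard to point at anything that evaluating it would add to the sequence’s existence:

    ```haskell
    -- The rule alone, using only the Prelude: each element is the sum of the
    -- two before it. The definition picks out the same sequence whether or not
    -- any element is ever actually computed.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
    ```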

    Possible ways out that I can see:

    1. Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]

    2. Accept the other conclusion: either simulations are impractical even for posthuman civilizations, or posthuman civilization is unlikely. But if all that’s required for a simulation is a mathematical form for the true laws of physics, and knowledge of some early state of the universe, this means humanity is unlikely to ever learn these two things, which is… disturbing, to say the least. This stance also seems to require rejecting mathematical Platonism and adopting some form of finitist/constructivist position, in which a mathematical notion does not exist until we have constructed it

    3. Argue that something important to the anthropic argument is lost in the move from a computer calculation to a mathematical expression. This seems to require rejecting the Church-Turing thesis, and would mean that most established programming theory is of little use for actually programming a simulation[4]

    4. Some other counter to the simulation argument. To me the anthropic part (i.e. step 2) seems the least certain; it appears to be false under e.g. UDASSA, though I don’t know enough about anthropics to say more

    Thoughts?

    [1] As I understand it there is no contradiction with relativity; we perform the simulation in some particular frame, but obtain the same events whichever frame we choose

    [2] This equivalence is not just speculative. Going back to thinking about computer programs, Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) uses lazy evaluation: a value is not calculated unless it is used. Thus if our simulation contained some regions that had no causal effect on subsequent steps (e.g. some people on a spaceship falling into a black hole), the simulation wouldn’t bother to evaluate them[5]
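
    A small illustration (the two-region toy “universe” and its names are inventions for this sketch, not part of any real simulator): the region with no causal effect on the output is only ever held as an unevaluated thunk, so even an error planted there never fires:

    ```haskell
    -- A toy universe split into a region we keep observing and one that has
    -- fallen behind an event horizon.
    type Universe = ([Integer], [Integer])  -- (observable region, infalling region)

    universe :: Universe
    universe = ([1, 2, 3], error "never needed, so never computed")

    -- Only the observable region has any causal effect on the output.
    observableTotal :: Universe -> Integer
    observableTotal (obs, _) = sum obs

    main :: IO ()
    main = print (observableTotal universe)  -- prints 6; the error is never triggered
    ```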

    If we upload people who then make phone calls to their relatives to convince them to upload, clearly those people must have been calculated, or at least enough of them to talk on the phone. But what about a loner who chooses to talk to no-one? Such a person could be stored more efficiently as their initial state plus a count of how many times the step function would need to be applied to bring them up to date, should anyone ever contact them. If no-one has their contact details any more, we wouldn’t even need to store that much. What about when all humans have uploaded? Sure, you could calculate the world-state for each step explicitly, but that would be wasteful. Our simulated world would still produce the correct outputs if all it did was increment a tick counter.
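
    A sketch of that storage trick (PersonState and stepPerson are invented placeholders for whatever would really represent and advance a simulated mind): the loner is held as a thunk built from an initial state and a tick count, and laziness means none of those ticks are computed unless something forces the result:

    ```haskell
    type PersonState = Integer   -- placeholder representation of one simulated mind

    -- Stand-in for whatever advances a simulated person by one tick.
    stepPerson :: PersonState -> PersonState
    stepPerson = (+ 1)           -- placeholder

    -- The loner, stored as nothing more than an initial state plus a count of
    -- ticks. The result is a thunk: the ticks are only actually computed if some
    -- other part of the simulation inspects it, e.g. because someone finally
    -- phones them.
    lonerAt :: Int -> PersonState -> PersonState
    lonerAt ticks initial = iterate stepPerson initial !! ticks
    ```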

    Practically every compiler and runtime performs some (more limited) form of this, using dataflow analysis, instruction reordering and dead code elimination, usually without the programmer having to explicitly request it. Thus if your theory of anthropics counts an “optimized” simulation differently from a “full” one, then there is little hope of constructing a “full” simulation without developing a significant amount of new tools and programming techniques[4]

    [3] Indeed, with an appropriate anthropic argument this might explain why the rules of physics are mathematically simple. I am planning another post on this line of thought

    [4] This is worrying if one is in favour of uploading, particularly forced uploading: it would be extremely problematic morally if uploads were in some sense “less real” than biological people

    [5] One possible way out is that the laws of physics appear to be information-preserving; to simulate the state of the universe at time t=100 you can’t discard any part of the state of the universe at time t=50, and must in some sense have calculated all the intermediate steps (though not necessarily explicitly—the state at t=20 could be spread out between several calculations, never appearing in memory as a single number). I don’t think this affects the wider argument though