Does the simulation argument even need simulations?

The simulation argument, as I understand it:

  1. Subjectively, existing as a human in the real, physical universe is indistinguishable from existing as a simulated human in a simulated universe

  2. Anthropically, there is no reason to privilege one over the other: if there exist k real humans and l simulated humans undergoing one's subjective experience, one's odds of being a real human are k/(k+l)

  3. Any civilization capable of simulating a universe is quite likely to simulate an enormous number of them

    1. Even if most capable civilizations simulate only a few universes for e.g. ethical reasons, civilizations that have no such concerns could simulate such enormous numbers of universes that the expected number of universes simulated by any simulation-capable civilization is still huge

  4. Our present civilization is likely to reach the point where it can simulate a universe reasonably soon

  5. By 3. and 4., there exist (at some point in history) huge numbers of simulated universes, and therefore huge numbers of simulated humans living in simulated universes

  6. By 2. and 5., our odds of being real humans are tiny (unless we reject 4, by assuming that humanity will never reach the stage of running such simulations)
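The anthropic step (2) is just counting observers; a toy Python sketch, with entirely illustrative numbers, shows how quickly the odds collapse once simulations outnumber reality:

```python
# Toy illustration of step 2: if k real and l simulated observers share
# one's subjective experience, the chance of being real is k / (k + l).
def odds_real(k: int, l: int) -> float:
    """Probability of being a real human under the counting assumption."""
    return k / (k + l)

# One real civilization of 10^10 people vs. a million simulations of the
# same size (numbers purely illustrative):
print(odds_real(10**10, 10**6 * 10**10))  # roughly 1e-6: almost certainly simulated
```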

When we talk about a simulation we're usually thinking of a computer; crudely, we'd represent the universe as a giant array of bytes in RAM, and have some enormously complicated program that could compute the next state of the simulated universe from the previous one[1]. Fundamentally, we're just storing one big number, then performing a calculation and storing another number, and so on. In fact our program is simply another number (witness the DeCSS "illegal prime"). This is effectively the GLUT concept applied to the whole universe.
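To make the "one big number" picture concrete, here's a minimal Python sketch; the transition rule is of course an arbitrary stand-in for real physics:

```python
# Minimal sketch of "the universe is one big number": the state is an
# integer, and physics is a pure function from one state to the next.
# The transition rule here is made up - any deterministic function
# makes the point equally well.
def step(state: int) -> int:
    # Toy stand-in for the laws of physics, acting on a 64-bit state.
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def simulate(initial_state: int, ticks: int) -> int:
    state = initial_state
    for _ in range(ticks):
        state = step(state)
    return state

# Running this program, or merely writing down step() and the initial
# state, pick out exactly the same sequence of numbers.
```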

But numbers are just… numbers. If we have a computer calculating the Fibonacci sequence, it's hard to see that running the calculating program makes this sequence any more real than if we had just conceptualized the rule[2] - or even, to a mathematical Platonist, if we'd never thought of it at all. And we do know the rule (modulo having a theory of quantum gravity), and the initial state of the universe is (to the best of our knowledge) small and simple enough that we could describe it, or another similar but subtly different universe, in terms small enough to write down. At that point, what we have seems in some sense to be a simulated universe, just as real as if we'd run a computer to calculate it all.
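The Fibonacci point can be spelled out in a few lines of Python; the rule and the computation determine exactly the same sequence:

```python
def fib_rule(n: int) -> int:
    """The rule itself: F(0)=0, F(1)=1, F(n)=F(n-1)+F(n-2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# "Running" the rule adds nothing to what the rule already determines:
# the tenth Fibonacci number was 55 before any computer evaluated it.
print([fib_rule(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```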

Possible ways out that I can see:

  1. Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]

  2. Accept the other conclusion: either simulations are impractical even for posthuman civilizations, or posthuman civilization is unlikely. But if all that's required for a simulation is a mathematical form for the true laws of physics, and knowledge of some early state of the universe, this means humanity is unlikely to ever learn these two things, which is… disturbing, to say the least. This stance also seems to require rejecting mathematical Platonism and adopting some form of finitist/constructivist position, in which a mathematical notion does not exist until we have constructed it

  3. Argue that something important to the anthropic argument is lost in the move from a computer calculation to a mathematical expression. This seems to require rejecting the Church-Turing thesis, and means most established programming theory would be useless in the programming of a simulation[4]

  4. Some other counter to the simulation argument. To me the anthropic part (i.e. step 2) seems the least certain; it appears to be false under e.g. UDASSA, though I don't know enough about anthropics to say more


[1] As I understand it there is no contradiction with relativity; we perform the simulation in some particular frame, but obtain the same events whichever frame we choose

[2] This equivalence is not just speculative. Going back to thinking about computer programs, Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) uses lazy evaluation: a value is not calculated unless it is used. Thus if our simulation contained some regions that had no causal effect on subsequent steps (e.g. some people on a spaceship falling into a black hole), the simulation wouldn't bother to evaluate them[5]
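The same flavour can be sketched in Python using explicit thunks in place of Haskell's pervasive laziness; the region names here are purely illustrative:

```python
# Python analogue of the lazy-evaluation point: a region of the
# "universe" is computed only if something downstream demands it.
def simulate_region(name: str, log: list) -> str:
    log.append(f"evaluated {name}")  # record that this region actually ran
    return f"state of {name}"

def universe_step(log: list) -> str:
    # Each region is a thunk (a zero-argument function), not a value.
    regions = {
        "earth": lambda: simulate_region("earth", log),
        "doomed spaceship": lambda: simulate_region("doomed spaceship", log),
    }
    # Only Earth has a causal effect on the next step; the spaceship's
    # thunk is never forced, so it is never evaluated.
    return regions["earth"]()

log = []
universe_step(log)
print(log)  # ['evaluated earth'] - the spaceship was never computed
```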

If we upload people who then make phone calls to their relatives to convince them to upload, clearly those people must have been calculated - or at least, enough of them to talk on the phone. But what about a loner who chooses to talk to no-one? Such a person could be more efficiently stored as their initial state plus a counter of how many times the function needs to be run to evaluate them, if anyone were to talk to them. If no-one has their contact details any more, we wouldn't even need to store that much. What about when all humans have uploaded? Sure, you could calculate the world-state for each step explicitly, but that would be wasteful. Our simulated world would still produce the correct outputs if all it did was increment a tick counter

Practically every programming language implementation performs some (more limited) form of this, using dataflow analysis, instruction reordering and dead code elimination - usually without the programmer having to explicitly request it. Thus if your theory of anthropics says that an "optimized" simulation is counted differently from a "full" one, then there is little hope of constructing a full one without developing significant new tools and programming techniques[4]

[3] Indeed, with an appropriate anthropic argument this might explain why the rules of physics are mathematically simple. I am planning another post on this line of thought

[4] This is worrying if one is in favour of uploading, particularly forcible uploading: it would be extremely problematic morally if uploads were in some sense "less real" than biological people

[5] One possible way out is that the laws of physics appear to be information-preserving; to simulate the state of the universe at time t=100 you can't discard any part of the state of the universe at time t=50, and must in some sense have calculated all the intermediate steps (though not necessarily explicitly - the state at t=20 could be spread out between several calculations, never appearing in memory as a single number). I don't think this affects the wider argument though
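A toy illustration of "information-preserving": if each step is a bijection on states, nothing is discarded and every earlier state is recoverable from a later one. The constants below are arbitrary (a linear map with an odd multiplier, which is invertible mod 2^64):

```python
# Toy information-preserving dynamics: the step is a bijection on
# 64-bit states, so no part of an earlier state can be thrown away.
A = 6364136223846793005          # odd, hence invertible mod 2**64
C = 1442695040888963407
A_INV = pow(A, -1, 2**64)        # modular inverse of A (Python 3.8+)

def step(state: int) -> int:
    return (state * A + C) % 2**64

def unstep(state: int) -> int:
    return ((state - C) * A_INV) % 2**64

s = 123456789
assert unstep(step(s)) == s      # nothing about s was discarded
```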