# Does the simulation argument even need simulations?

The simulation argument, as I understand it:

1. Subjectively, existing as a human in the real, physical universe is indistinguishable from existing as a simulated human in a simulated universe

2. Anthropically, there is no reason to privilege one over the other: if there exist k real humans and l simulated humans undergoing one's subjective experience, one's odds of being a real human are k/(k+l)

3. Any civilization capable of simulating a universe is quite likely to simulate an enormous number of them

    1. Even if most capable civilizations simulate only a few universes for e.g. ethical reasons, civilizations that have no such concerns could simulate such enormous numbers of universes that the expected number of universes simulated by any simulation-capable civilization is still huge

4. Our present civilization is likely to reach the point where it can simulate a universe reasonably soon

5. By 3. and 4., there exist (at some point in history) huge numbers of simulated universes, and therefore huge numbers of simulated humans living in simulated universes

6. By 2. and 5., our odds of being real humans are tiny (unless we reject 4, by assuming that humanity will never reach the stage of running such simulations)

When we talk about a simulation we're usually thinking of a computer; crudely, we'd represent the universe as a giant array of bytes in RAM, and have some enormously complicated program that could compute the next state of the simulated universe from the previous one[1]. Fundamentally, we're just storing one big number, then performing a calculation and storing another number, and so on. In fact our program is simply another number (witness the DeCSS "illegal prime"). This is effectively the GLUT concept applied to the whole universe.
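To make the "universe as one big number" picture concrete, here is a toy sketch (in Python, and of course nothing like real physics): the whole state is a single integer whose bits are the cells of a one-dimensional cellular automaton, and the "laws of physics" are one function from number to number. Rule 110 stands in for the enormously complicated real program.

```python
# Toy illustration only: the "universe" is one big integer whose bits are
# the cells of a 1-D cellular automaton (Rule 110, with wraparound), and
# each time step is just a function from one number to the next.

WIDTH = 64  # number of cells in our toy universe

def step(state: int) -> int:
    """Compute the next universe-state number from the previous one."""
    nxt = 0
    for i in range(WIDTH):
        left = (state >> ((i + 1) % WIDTH)) & 1
        mid = (state >> i) & 1
        right = (state >> ((i - 1) % WIDTH)) & 1
        pattern = (left << 2) | (mid << 1) | right
        # Rule 110: the new cell is bit number `pattern` of the number 110
        nxt |= ((110 >> pattern) & 1) << i
    return nxt

state = 1  # a small, simple initial condition
for _ in range(10):
    state = step(state)
```

The entire history is determined by the initial number plus the rule, which is the point the next paragraph leans on.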

But numbers are just… numbers. If we have a computer calculating the Fibonacci sequence, it's hard to see that running the calculating program makes this sequence any more real than if we had just conceptualized the rule[2] - or even, to a mathematical Platonist, if we'd never thought of it at all. And we do know the rule (modulo having a theory of quantum gravity), and the initial state of the universe is (to the best of our knowledge) small and simple enough that we could describe it, or another similar but subtly different universe, in terms small enough to write down. At that point, what we have seems in some sense to be a simulated universe, just as real as if we'd run a computer to calculate it all.
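For the Fibonacci case the contrast is easy to exhibit (a trivial Python sketch of my own): the explicit computation stores every term, while the "rule" is just a couple of sentences, yet both pick out exactly the same numbers.

```python
# The Fibonacci "universe" can be given either as an explicit computation
# or as a mere description of the rule; both determine the same sequence.

def fib_computed(n: int) -> list[int]:
    """Explicitly run the process, storing every intermediate term."""
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

# The "description" is just an initial state plus a transition rule:
RULE = ("start with (0, 1)", "each term is the sum of the previous two")

print(fib_computed(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```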

Possible ways out that I can see:

1. Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]

2. Accept the other conclusion: either simulations are impractical even for posthuman civilizations, or posthuman civilization is unlikely. But if all that's required for a simulation is a mathematical form for the true laws of physics, and knowledge of some early state of the universe, this means humanity is unlikely to ever learn these two things, which is… disturbing, to say the least. This stance also seems to require rejecting mathematical Platonism and adopting some form of finitist/constructivist position, in which a mathematical notion does not exist until we have constructed it

3. Argue that something important to the anthropic argument is lost in the move from a computer calculation to a mathematical expression. This seems to require rejecting the Church-Turing thesis and means most established programming theory would be useless in the programming of a simulation[4]

4. Some other counter to the simulation argument. To me the anthropic part (i.e. step 2) seems the least certain; it appears to be false under e.g. UDASSA, though I don't know enough about anthropics to say more

Thoughts?

[1] As I understand it there is no contradiction with relativity; we perform the simulation in some particular frame, but obtain the same events whichever frame we choose

[2] This equivalence is not just speculative. Going back to thinking about computer programs, Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used. Thus if our simulation contained some regions that had no causal effect on subsequent steps (e.g. some people on a spaceship falling into a black hole), the simulation wouldn't bother to evaluate them[5]

If we upload people who then make phone calls to their relatives to convince them to upload, clearly those people must have been calculated—or at least, enough of them to talk on the phone. But what about a loner who chooses to talk to no-one? Such a person could be more efficiently stored as their initial state plus a counter of how many times the function needs to be run to evaluate them, if anyone were to talk to them. If no-one has their contact details any more, we wouldn't even need to store that much. What about when all humans have uploaded? Sure, you could calculate the world-state for each step explicitly, but that would be wasteful. Our simulated world would still produce the correct outputs if all it did was increment a tick counter

Practically every programming runtime performs some (more limited) form of this, using dataflow analysis, instruction reordering and dead code elimination—usually without the programmer having to explicitly request it. Thus if your theory of anthropics says that an "optimized" simulation is counted differently from a "full" one, then there is little hope of constructing such a thing without developing a significant amount of new tools and programming techniques[4]
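Python isn't lazy by default the way Haskell is, but the behaviour described above is easy to sketch with thunks; in this toy example of mine, a region of the universe that nothing ever demands is simply never computed:

```python
# Sketch of lazy evaluation: a region of the "universe" whose value is
# never demanded is never computed at all.

evaluated = []  # records which regions actually got computed

def region(name: str):
    """Wrap a region in a thunk: the body runs only if someone looks."""
    def compute():
        evaluated.append(name)
        return f"state of {name}"
    return compute

universe = {
    "earth": region("earth"),
    "ship_falling_into_black_hole": region("ship_falling_into_black_hole"),
}

# Only Earth causally matters to the next step, so only Earth is forced:
_ = universe["earth"]()

print(evaluated)  # ['earth'] -- the doomed ship was never evaluated
```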

[3] Indeed, with an appropriate anthropic argument this might explain why the rules of physics are mathematically simple. I am planning another post on this line of thought

[4] This is worrying if one is in favour of uploading, particularly forcibly—it would be extremely problematic morally if uploads were in some sense "less real" than biological people

[5] One possible way out is that the laws of physics appear to be information-preserving; to simulate the state of the universe at time t=100 you can't discard any part of the state of the universe at time t=50, and must in some sense have calculated all the intermediate steps (though not necessarily explicitly—the state at t=20 could be spread out between several calculations, never appearing in memory as a single number). I don't think this affects the wider argument though
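The information-preserving point can be shown with a toy reversible law (again an illustrative Python sketch of my own, not real physics): if the step function is a bijection on states, any earlier state remains recoverable from a later one, so nothing is ever truly discarded.

```python
# A reversible toy law on 16-bit states: rotate the bits by one position.
# Because the step is a bijection, the simulation can be run backwards.

WIDTH = 16
MASK = (1 << WIDTH) - 1

def step(state: int) -> int:
    """One forward time step: rotate bits left by one."""
    return ((state << 1) | (state >> (WIDTH - 1))) & MASK

def unstep(state: int) -> int:
    """The inverse law: rotate bits right by one."""
    return ((state >> 1) | ((state & 1) << (WIDTH - 1))) & MASK

s0 = 0b1011000011110001
s50 = s0
for _ in range(50):
    s50 = step(s50)

# Recover the state at t = 20 from the state at t = 50:
s20 = s50
for _ in range(30):
    s20 = unstep(s20)
```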

1. Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]

Biting the bullet here is roughly equivalent to accepting Tegmark's Ultimate Ensemble. This was discussed on LW in ata's post from 2010, The mathematical universe: the map that is the territory.

See Tegmark (2008). In particular, Section 6, "Implications for the simulation argument". A relevant extract:

For example, since every universe simulation corresponds to a mathematical structure, and therefore already exists in the Level IV multiverse [the multiverse of all mathematical structures], does it in some meaningful sense exist "more" if it is in addition run on a computer? This question is further complicated by the fact that eternal inflation predicts an infinite space with infinitely many planets, civilizations, and computers, and that the Level IV multiverse includes an infinite number of possible simulations. The above-mentioned fact that our universe (together with the entire Level III multiverse) may be simulatable by quite a short computer program (Sect. 6.2) calls into question whether it makes any ontological difference whether simulations are "run" or not. If, as argued above, the computer need only describe and not compute the history, then the complete description would probably fit on a single memory stick, and no CPU power would be required. It would appear absurd that the existence of this memory stick would have any impact whatsoever on whether the multiverse it describes exists "for real". Even if the existence of the memory stick mattered, some elements of this multiverse will contain an identical memory stick that would "recursively" support its own physical existence. This would not involve any Catch-22 "chicken-and-egg" problem regarding whether the stick or the multiverse existed first, since the multiverse elements are 4-dimensional spacetimes, whereas "creation" is of course only a meaningful notion within a spacetime.

A while ago, I posted a LW discussion link to John Regehr's blog post about similar ideas: Does a simulation really need to be run?.

• My thought is that your hypothesis is pretty similar to the Dust Theory.

http://sciencefiction.com/2011/05/23/science-feature-dust-theory/

And Greg Egan's counter-argument to the Dust Theory is pretty decent:

However, I think the universe we live in provides strong empirical evidence against the "pure" Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.

I think the same counter-argument applies to your hypothesis.

• A steelmanned version of Egan's counterargument can be found in what Tegmark calls the (cosmological) measure problem. Egan's original counterargument is too weak because we can simply postulate that there is an appropriate measure over the worlds of interest; we already do that for the many-worlds interpretation!

In Tegmark (2008) (see my other comment):

One such issue is the above-mentioned measure problem, which is in essence the problem of how to deal with annoying infinities and predict conditional probabilities for what an observer should perceive given past observations.

[...]

A second testable prediction of the MUH [Mathematical Universe Hypothesis] is that the Level IV multiverse [the multiverse of all mathematical structures] exists, so that out of all universes containing observers like us, we should expect to find ourselves in a rather typical one. Rigorously carrying out this test requires solving the measure problem, i.e., computing conditional probabilities for observable quantities given other observations (such as our existence) and an assumed theory (such as the MUH, or the hypothesis that only some specific mathematical structure like string theory or the Lie superalgebra mb(3|8) [142] exists). Further work on all aspects of the measure problem is urgently needed regardless of whether the MUH is correct, as this is necessary for observationally testing any theory that involves parallel universes at any level, including cosmological inflation and the string theory landscape [67–71]. Although we are still far from understanding selection effects linked to the requirements for life, we can start testing multiverse predictions by assessing how typical our universe is as regards dark matter, dark energy and neutrinos, because these substances affect only better understood processes like galaxy formation. Early such tests have suggested (albeit using questionable assumptions) that the observed abundance of these three substances is indeed rather typical of what you might measure from a random stable solar system in a multiverse where these abundances vary from universe to universe [42, 134–139].

Tegmark makes a few remarks on using algorithmic complexity as the measure:

It is unclear whether some sort of measure over the Level IV multiverse is required to fully resolve the measure problem, but if this is the case and the CUH [Computable Universe Hypothesis] is correct, then the measure could depend on the algorithmic complexity of the mathematical structures, which would be finite. Labeling them all by finite bit strings s interpreted as real numbers on the unit interval [0, 1) (with the bits giving the binary decimals), the most obvious measure for a given structure S would be the fraction of the unit interval covered by real numbers whose bit strings begin with strings s defining S. A string of length n bits thus gets weight 2^(−n), which means that the measure rewards simpler structures. The analogous measure for computer programs is advocated in [16]. A major concern about such measures is of course that they depend on the choice of representation of structures or computations as bit strings, and no obvious candidate currently exists for which representation to use.
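Tegmark's 2^(−n) weighting is simple enough to spell out in a few lines of Python (my own illustration of the quoted passage, not code from the paper):

```python
# A structure described by an n-bit string gets measure 2**(-n): the
# fraction of [0, 1) covered by reals whose binary expansion begins with
# that string. Shorter (simpler) descriptions weigh more.

from fractions import Fraction

def weight(bits: str) -> Fraction:
    """Measure assigned to a structure described by the bit string `bits`."""
    return Fraction(1, 2 ** len(bits))

simple, complicated = "101", "1011001110"
print(weight(simple))       # 1/8
print(weight(complicated))  # 1/1024

# The four 2-bit strings together exactly tile the unit interval:
print(sum(weight(s) for s in ["00", "01", "10", "11"]))  # 1
```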

Each of the analogous problems in eternal inflation and the string theory landscape is also called the measure problem (in eternal inflation: how to assign measure over the potentially infinite number of inflationary bubbles; in the string theory landscape: how to assign measure over the astronomical number of false vacua).

In the many-worlds interpretation, the analogous measure problem is resolved by the Born probabilities.

• Egan's original counterargument is too weak because we can simply postulate that there is an appropriate measure over the worlds of interest;

I don't understand this at all. Can you give an example of such an appropriate measure?

• An example of a measure in this context would be the complexity measure that Tegmark mentioned, as long as we agree on a way to encode mathematical structures (the nonuniqueness of representation is one of the issues that Tegmark brought up).

Whether this is an appropriate measure (i.e., whether it correctly "predicts conditional probabilities for what an observer should perceive given past observations") is unknown; if we knew how to find out, then we could directly resolve the measure problem!

An example of a context where we can give the explicit measure is the many-worlds interpretation, where as I mentioned, the Born probabilities resolve the analogous measure problem.

• An example of a context where we can give the explicit measure is the many-worlds interpretation, where as I mentioned, the Born probabilities resolve the analogous measure problem.

So you are saying that the "Born probabilities" are an example of an "appropriate measure" which, if "postulated," rebuts Egan's argument?

Is that correct?

• The Born probabilities apply to a different context—the multiple Everett branches of MWI, rather than the interpretative universes available under dust theory. If we had an equivalent of the Born probabilities—a measure—for dust theory, then we'd be able to resolve Egan's argument one way or another (depending on which way the numbers came out under this measure).

Since we don't yet know what the measure is, it's not clear whether Egan's argument holds—under the "Tegmark computational complexity measure" Egan would be wrong, under the "naive measure" Egan is right. But we need some external evidence to know which measure to use. (By contrast, in the QM case we know the Born probabilities are the correct ones to use, because they correspond to experimental results (and also because e.g. they're preserved under a QM system's unitary evolution).)

• I would guess you are probably correct that Egan's argument hinges on this point. In essence, Egan seems to be making an informal claim about the relative likelihood of an orderly dust universe versus a chaotic one.

Boiled down to its essentials, VincentYu's argument seems to be that if Egan's informal claim is incorrect, then Egan's argument fails. Well duh.

• Here's a visual representation of the dust theory by Randall Munroe: http://xkcd.com/505/

• I'm not sure I agree with that argument. The fact that quantum mechanics exists, and there are specifically allowed states, is exactly the type of thing I'd expect from a universe driven by a computer simulation. Discrete values are much easier than continuous sets.

On the other hand, superposition and entanglement seem suboptimal.

• The fact that quantum mechanics exists, and there are specifically allowed states, is exactly the type of thing I'd expect from a universe driven by a computer simulation.

I'm not sure I understand your point. Are you saying that a simulation which is just a mathematical construct would probably not result in a quantized universe?

• I was intending to say the opposite; that a quantized world would seem like it would take less computational power than a continuous one, therefore the fact that we live in a quantized world is evidence of being in a simulation.

• I was intending to say the opposite; that a quantized world would seem like it would take less computational power than a continuous one, therefore the fact that we live in a quantized world is evidence of being in a simulation.

That's not an unreasonable point, but I think it goes more to the issue of simulation versus non-simulation than the issue of computer-based simulation versus mathematical construct simulation.

• Well, I suppose we could postulate something like a continuous version of quantum mechanics for a host universe if we'd like.

• Glad to see this has been thought of; that argument was where I was headed in [3] (and this whole line of thought greatly annoyed me when reading Permutation City, so I'm glad Egan's at least looked at it a bit).

This gets us a contradiction, not a refutation, and one man's modus ponens is another man's modus tollens. Can we use this to argue for a flaw in the original simulation argument? I think it again comes down to anthropics: why are our subjective experiences reverse-anthropically more likely than those of dust arrangements? And into which class would simulated people fall?

• Can we use this to argue for a flaw in the original simulation argument?

I don't think so, since it's reasonable to hypothesize that man-made simulations would, generally speaking, be more on the orderly side as opposed to being full of random nonsense.

But it's still an interesting question. One can imagine a room with 2 large computers. The first computer has been carefully programmed to simulate 1950s Los Angeles. There are people in the simulation who are completely convinced that they live in Los Angeles in the 1950s.

The second computer is just doing random computations. But arguably there is some cryptographic interpretation of those computations which also yields a simulation of 1950s Los Angeles.

• I'd like to see that argument. If you can find a mapping that doesn't end up encoding the simulation in the mapping, I'd be surprised.

• I'd like to see that argument. If you can find a mapping that doesn't end up encoding the simulation in the mapping, I'd be surprised.

Well why should it matter if the simulation is encoded in the mapping?

• If it is, that screens off any features of what it's mapping; you can no longer be surprised that 'random noise' produces such output.

• Again, so what?

Let me adjust the original thought experiment:

The operation of the first computer is encrypted using a very large one-time pad.

• Epistemology 101: Proper beliefs are (probabilistic) constraints over anticipated observations.
How does the belief that we are living in a computer simulation/a projection of the Platonic Hyperuranium/a dream of a god constrain what we expect to observe?

• Only in an objective modal sense. Beliefs are probabilistic constraints over observations anticipated given a context. So in the example with stars moving away, the stars are still observables because there is a counterfactual context where we observe them from nearby (by traveling with them etc.)

• I don't think that can be right. We believe in the continued existence of stars that have moved so far away that we can't possibly observe them (due to inflation).

• Yet, that belief constrains our observations.

• How does it? What would we observe differently if some mysterious god destroyed those stars as soon as they moved out of causal contact with humanity?

• No, but the hypothesis of a mysterious god destroying stars exactly when our best cosmological models predict we should stop seeing them is unparsimonious.

And anyway, distant stars never appear to cross the cosmological event horizon from our reference frame. Their light becomes redshifted so much that we can't detect it anymore.

• No, but the hypothesis of a mysterious god destroying stars exactly when our best cosmological models predict we should stop seeing them is unparsimonious.

Sure. But believing or not believing in it doesn't constrain what we expect to observe, just the same as "the belief that we are living in a computer simulation/a projection of the Platonic Hyperuranium/a dream of a god". What's different from the situation in your first post?

• Point of order:

computer simulation/a projection of the Platonic Hyperuranium/a dream of a god

I feel like we shouldn't be putting these two so close together.

"All mathematical statements are equally real"

and

"We are being simulated"

seem like two different claims that shouldn't be blurred together—the first is a matter of ontology and semantics, the second is a matter of fact. If all mathematical structures are equally real it might have weird moral implications, especially for simulations, but even if we successfully reject the idea that all mathematical structures are equally real it does not rule out the simulation hypothesis, and if we accept the idea that all mathematical structures are equally real it does not confirm the simulation hypothesis.

• Epistemology 101, part two: choose the simplest hypothesis among those which are observationally indistinguishable from each other.

• I think the hypothesis that human civilization will at some point derive the ultimate laws of physics, along with enough observations about the state of the early universe to construct a reasonable simulation thereof, is simpler than the alternative—to say that we won't seems to require some additional assumption that scientific progress would stop.

If we accept the existence of a large number of simulated universes, then while I don't have a good theory of anthropics, rationalists should win, and blindly assuming that one is not in a simulation seems like it leads to losing a lot of the time (e.g. my example of betting a cookie with Bob elsewhere in these comments).

• I think the hypothesis that human civilization will at some point derive the ultimate laws of physics, along with enough observations about the state of the early universe to construct a reasonable simulation thereof, is simpler than the alternative—to say that we won't seems to require some additional assumption that scientific progress would stop.

It is not possible, and it never will be possible, to simulate within our universe something as complex as our own universe itself, unless we discover a way to perform infinite computations using finite time, matter and energy, which would violate many known laws of physics.

We already are able to simulate "universes" simpler than our own (e.g. videogames), but this doesn't imply, even probabilistically, that our universe is itself a simulation. Analogy is not a sound argument.

• We already are able to simulate "universes" simpler than our own (e.g. videogames), but this doesn't imply, even probabilistically, that our universe is itself a simulation.

Why not? Because you assign them a low anthropic weighting, or some other reason? (I also had an argument that the Dyson computation applies, but I think that's actually beside the point)

If the simplest possible explanation for our sensory observations includes a universe that contains simulations of other universes, it's a reasonable question which kind we are in, even if they don't all have the same physical laws or the same amount of matter. There's no a priori reason to privilege one hypothesis or the other.

• The hypothesis that there exists another universe, certainly much different from ours in many aspects, quite possibly with a different set of physical laws, is more complex than the hypothesis that no such universe exists. Furthermore, you could iterate the simulation argument ad infinitum, "turtles all the way down", yielding an infinitely complex hypothesis.

• A description of our own universe necessarily includes inner universes, certainly much different from ours in many aspects, quite possibly with different sets of physical laws, and many complex enough to have their own inner universes. So it's not at all obvious that the minimum message length to describe an outer universe containing ours as a simulation is greater than that to describe our universe.

• A description of our own universe necessarily includes inner universes, certainly much different from ours in many aspects, quite possibly with different sets of physical laws, and many complex enough to have their own inner universes.

Yes, but we observe our own universe.

So it's not at all obvious that the minimum message length to describe an outer universe containing ours as a simulation is greater than that to describe our universe.

It is.
This discussion is getting boring.

• I actually arrived at this belief myself when I was younger, and changed my mind when a roommate beat it out of me.

I'm currently at the conclusion it's not the same, because an "artificial universe" within a simulation can still interact with the parent universe. The simulation can influence stuff outside the simulation, and stuff outside the simulation can influence the simulation.

Oddly, the thing that convinced me was thinking about morality. Thinking on it now, I guess framing it in terms of something to protect really is helpful. Ontological platonism can lead to some fucked up conclusions, morally. I'll share a fleshed-out version of the thought-chain that changed my mind.

Review the claim, briefly:

But numbers are just… numbers. If we have a computer calculating the Fibonacci sequence, it's hard to see that running the calculating program makes this sequence any more real than if we had just conceptualized the rule[2] - or even, to a mathematical Platonist, if we'd never thought of it at all. And we do know the rule (modulo having a theory of quantum gravity), and the initial state of the universe is (to the best of our knowledge) small and simple enough that we could describe it, or another similar but subtly different universe, in terms small enough to write down. At that point, what we have seems in some sense to be a simulated universe, just as real as if we'd run a computer to calculate it all.

1) So, if I set the initial conditions for a universe containing Suffering Humans, I'm not responsible—the initial conditions of the Hell-universe existed Platonically regardless of the fact that I defined it in the mathematical space.

2) Alright, so now what if I run the Hell Universe? Well, platonically speaking I already specified the entire universe when I laid out the initial conditions, so I don't see why running it is a big deal.

So we are currently running a Simulation of Hell, with a clean conscience. If you haven't already bailed from this ontology, let's continue...

3) Mathematically, the Hells which happen to have Anne inserted at time T were already in the platonic space of possible universes, so why not set the conditions and run that universe? Anne is a real person, by the way—we're just inserting a copy of her into the hell-verse

4) Anne just uploaded her consciousness onto a hard drive. Hold on... Anne can now be thought of as a self-contained system, with input and output. Anne's consciousness is defined in the platonic space, as are all possible inputs and outputs that she might experience. If every input we might subject Anne to is already defined in platonic space, it makes no difference which one we choose to actually represent on the computer...

...Anyway, you see where this leads. Now forget the morality part—that was just to illustrate the weaknesses of Platonic ontology. Considering all mathematical structures equally "real" makes the concept of "reality" lose all meaning. There is something very important which distinguishes reality from non-real mathematical universes—the fact that you can observe it. The fact that it can interact with you.

This might seem less obvious when you're unsure whether or not your universe is a simulation, but it's obvious to the parent universe. If we ever start simulating things, we're not going to think of it as simply a representation specifying a point in platonic space—we're going to think of the simulated world as a part of our reality.

Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]

That's not a bullet... I'd say you were biting a bullet if you didn't believe that. Reality has to be a mathematical construct—if it isn't, we've just thrown logic out the window. But that doesn't mean anyone was sitting around writing the equation.

Reality is also special. It's different from all those other mathematical constructs, because I will only ever observe reality.

Even if most capable civilizations simulate only a few universes for e.g. ethical reasons, civilizations that have no such concerns could simulate such enormous numbers of universes that the expected number of universes simulated by any simulation-capable civilization is still huge

I don't think we should be calculating likelihoods this way.

I go to good old Occam's razor (or more modernly, Minimum Message Length). Does the simulation argument make for a simpler model? As in, can you actually suggest a universe in which we are a simulation which is simpler than the universe outlined by vanilla physics? (The answer isn't necessarily "no", but I'd say that the simpler the laws we observe, the more likely the answer is to be "no". If we live in a more complicated universe—especially if the laws of the universe seemed to care about agents (the fact that we are even here does up the probability of that)—the answer might be "yes". That said, I'd still bet on "no".)

• There is some­thing very im­por­tant which dis­t­in­guishes re­al­ity from non-real math­e­mat­i­cal uni­verses—the fact that you can ob­serve it. The fact that it can in­ter­act with you.

I think this leads to un­pleas­ant con­clu­sions. If causal­ity is all we care about, does that mean we shouldn’t care about peo­ple who are too far away to in­ter­act with (e.g. peo­ple on an in­ter­stel­lar colony too far away to reach in our life­time)? Heck, if some­one dived into a ro­tat­ing black hole with the in­tent to set up a civ­i­liza­tion in the zone of “nor­mal space” closer to the sin­gu­lar­ity, I think I’d care about whether they suc­ceeded, even though it couldn’t pos­si­bly af­fect me. Back on Earth, should we care more about peo­ple close to us and less about peo­ple fur­ther away, since we have more causal con­tact with the former? Should we care more about the rich and pow­er­ful than about the poor and weak, since their de­ci­sions are more likely to af­fect us?

I go to good old Occam’s razor (or, more modernly, Minimum Message Length). Does the simulation argument make for a simpler model? As in, can you actually suggest to me a universe in which we are a simulation that is simpler than the universe outlined by vanilla physics?

If you don’t con­sider the pos­si­bil­ity of be­ing simu­lated it seems like you would make wrong de­ci­sions. Sup­pose that you agree with Bob to cre­ate 1000 simu­la­tions of the uni­verse tonight, and then to­mor­row you’ll place a black sphere in the simu­lated uni­verses. To­mor­row morn­ing Bob offers to bet you a cookie that you’re in one of the simu­lated uni­verses. If you take the bet on the grounds that the model of the uni­verse in which you’re not in the simu­la­tion is sim­pler, then it seems like you lose most of the time (at least un­der naive an­throp­ics).

Now ob­vi­ously in real life we don’t have this in­di­ca­tion as to whether we’re a simu­la­tion. But if we’re try­ing to make a moral de­ci­sion for which it mat­ters whether we’re in a simu­la­tion, it’s im­por­tant to get the right an­swer.
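The bet arithmetic, under the naive anthropics of premise 2 in the head post (k real and l simulated copies of an experience give odds k/(k+l) of being real), can be sketched as a toy model; the 1000 simulations and the one-cookie stake are just the numbers from the scenario above:

```python
# Naive anthropic odds from premise 2: with k real and l simulated
# copies of your subjective experience, P(real) = k / (k + l).

def p_real(k: int, l: int) -> float:
    return k / (k + l)

p = p_real(1, 1000)  # one original you, 1000 simulated copies
assert p < 0.001     # you are almost certainly one of the simulations

# Expected cookies from betting Bob that you are NOT simulated:
# +1 cookie with probability p, -1 cookie with probability 1 - p.
ev_take_bet = p * 1 + (1 - p) * (-1)
assert ev_take_bet < 0  # taking the bet loses, on average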

• If you don’t con­sider the pos­si­bil­ity of be­ing simulated

Didn’t say that. We might be in a simu­la­tion. The ques­tion is, is that the more par­si­mo­nious hy­poth­e­sis?

Ob­ser­va­tion is the king of episte­mol­ogy, and Par­si­mony is queen. If par­si­mony says we’re simu­lated, then we’re prob­a­bly simu­lated. In the counter-fac­tual world where I have a mem­ory of agree­ing with Bob to cre­ate 1000 simu­la­tions, then par­si­mony says I’m likely in a simu­la­tion. We might be in a uni­verse where the most par­si­mo­nious hy­poth­e­sis given cur­rent ev­i­dence is simu­la­tion, or we might not. Would that I had a par­si­mony calcu­la­tor, but for now I’m just guess­ing not.

There are ob­ser­va­tions that might lead a simu­la­tion hy­poth­e­sis to be the most par­si­mo­nious hy­poth­e­sis. I claim it as a ques­tion which is ul­ti­mately in the realm of sci­ence, al­though we still need philos­o­phy to figure out a good way to judge par­si­mony.

un­pleas­ant conclusions

These two statements sum up my current stance.

Epistemic Ra­tion­al­ity: Take ev­ery math­e­mat­i­cal struc­ture that isn’t ruled out by the ev­i­dence. Rank them by par­si­mony.

CDT (which I’ll take as “instrumental rationality” for now): If your actions have results, you can use actions to choose your favorite result.

so, ap­ply­ing that to the points you raised...

Should we care more about the rich and pow­er­ful than about the poor and weak, since their de­ci­sions are more likely to af­fect us?

I have sufficient evidence to believe that both the poor and the rich exist. I care about them both. In the counter-factual world where I was more certain concerning the existence of the rich and less certain concerning the existence of the poor, it would make sense to direct my efforts to the rich.

If I want to give people utils, and if I can give 10 utils to person R when I have 70% certainty that they exist to benefit from it, or 20 utils to person P when I have 10% certainty that they exist to benefit from it, I obviously choose person R.
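The arithmetic in that choice, spelled out (the payoffs and probabilities are the ones from the sentence above):

```python
# Expected utils = payoff x probability the beneficiary exists.
r_expected = 10 * 0.70   # person R: about 7 expected utils
p_expected = 20 * 0.10   # person P: about 2 expected utils

assert r_expected > p_expected  # so you "obviously choose person R"
```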

Back to re­al­ity: I’ve got in­cred­ible lev­els of cer­tainty that both the rich and the poor ex­ist.

should we care more about peo­ple close to us and less about peo­ple fur­ther away, since we have more causal con­tact with the former?

Once again, it’s a question of certainty that they exist. If I told you that donating $100 to the impoverished Lannisters would be efficient altruism, wouldn’t you want to check whether such people truly exist and whether the claims I made about them are true?

if some­one dived into a ro­tat­ing black hole with the in­tent to set up a civ­i­liza­tion in the zone of “nor­mal space” closer to the sin­gu­lar­ity, I think I’d care about whether they suc­ceeded, even though it couldn’t pos­si­bly af­fect me

You’d put every effort into assuring that they succeeded before they dived into the black hole and became causally disconnected from you. Afterwards, your memory of them would remain as evidence that they exist...you’d hope they were doing alright, but you have no way of knowing and your actions will not effect them now.

If causality is all we care about, does that mean we shouldn’t care about people who are too far away to interact with (e.g. people on an interstellar colony too far away to reach in our lifetime)?

taboo care...

Given your cur­rent ob­ser­va­tions, what like­li­hood can you as­sign to their ex­is­tence? (emo­tional re­ac­tions like “care” will prob­a­bly fol­low from this).

Can you help them or hurt them via your ac­tions?

So of course you’d care … in pro­por­tion to your cer­tainty that they ex­ist.

• Ob­ser­va­tion is the king of episte­mol­ogy, and Par­si­mony is queen. If par­si­mony says we’re simu­lated, then we’re prob­a­bly simu­lated. In the counter-fac­tual world where I have a mem­ory of agree­ing with Bob to cre­ate 1000 simu­la­tions, then par­si­mony says I’m likely in a simu­la­tion.

It seems to me the most par­si­mo­nious hy­poth­e­sis is that the hu­man race will cre­ate many simu­la­tions in the fu­ture—that seems like the nat­u­ral course of progress, and I think we need to in­tro­duce an ad­di­tional as­sump­tion to claim that we won’t. If we ac­cept this then the same logic as if we’d made that agree­ment with Bob seems to hold.

I have sufficient evidence to believe that both the poor and the rich exist. I care about them both. In the counter-factual world where I was more certain concerning the existence of the rich and less certain concerning the existence of the poor, it would make sense to direct my efforts to the rich.

Hang on. You’ve gone from talking about “what I can interact with” to “what I know exists”. If logic leads us to believe that non-real mathematical universes exist (i.e. under the available evidence the most parsimonious assumption is that they do, even though we can’t causally interact with them), is that or is that not sufficient reason to weigh them in our moral decision-making?

• You’ve gone from talk­ing about “what I can in­ter­act with” to “what I know ex­ists”

My mis­take for us­ing the word “in­ter­ac­tion” then—it seems to have differ­ent con­no­ta­tions to you than it does to me.

Re­ceiv­ing ev­i­dence—AKA mak­ing an ob­ser­va­tion—is an in­ter­ac­tion. You can’t know some­thing ex­ists un­less you can causally in­ter­act with it.

If logic leads us to be­lieve that non-real math­e­mat­i­cal uni­verses exist

How can some­thing non-real ex­ist?

I dis­pute the idea that what does or does not ex­ist is a ques­tion of logic.

I say that logic can tell you how par­si­mo­nious a model is, whether it con­tains con­tra­dic­tion, and stuff like that.

But only ob­ser­va­tion can tell you what ex­ists /​ is real.

If we ac­cept this then the same logic as if we’d made that agree­ment with Bob seems to hold.

I’d ar­gue that any simu­la­tions that hu­man­ity makes must be con­tained within the en­tire uni­verse. So adding lower simu­la­tions doesn’t make the fi­nal de­scrip­tion of the uni­verse any more com­plex than it already was. Posit­ing higher simu­la­tions, on the other hand, does in­crease the to­tal num­ber of ax­ioms.

The story you refer­ence con­tains the case where we make a simu­la­tion which is iden­ti­cal to the ac­tual uni­verse. I think that un­less our uni­verse has some re­ally weird laws, we won’t ac­tu­ally be able to do this.

Not all uni­verses in which hu­man­ity cre­ates simu­la­tions are uni­verses in which it is par­si­mo­nious for us to be­lieve that we are some­one’s simu­la­tion.

• But only ob­ser­va­tion can tell you what ex­ists /​ is real.

You’re right, I was be­ing sloppy. My point was: sup­pose the most par­si­mo­nious model that ex­plains our ob­ser­va­tions also im­plies the ex­is­tence of some peo­ple who we can’t causally in­ter­act with. Do we con­sider those peo­ple in our moral calcu­la­tions?

I’d ar­gue that any simu­la­tions that hu­man­ity makes must be con­tained within the en­tire uni­verse. So adding lower simu­la­tions doesn’t make the fi­nal de­scrip­tion of the uni­verse any more com­plex than it already was. Posit­ing higher simu­la­tions, on the other hand, does in­crease the to­tal num­ber of ax­ioms.

I can see the logic, but doesn’t the same ar­gu­ment ap­ply equally well in the “agree­ment with Bob” case?

The story you refer­ence con­tains the case where we make a simu­la­tion which is iden­ti­cal to the ac­tual uni­verse. I think that un­less our uni­verse has some re­ally weird laws, we won’t ac­tu­ally be able to do this.

True, but only nec­es­sary so that the par­ti­ci­pants can re­mem­ber be­ing the peo­ple they were out­side the simu­la­tion; I don’t think it’s fun­da­men­tal to any of the ar­gu­ments.

• My point was: sup­pose the most par­si­mo­nious model that ex­plains our ob­ser­va­tions also im­plies the ex­is­tence of some peo­ple who we can’t causally in­ter­act with. Do we con­sider those peo­ple in our moral calcu­la­tions?

This is im­pos­si­ble. No causal in­ter­ac­tion means no ob­ser­va­tions. A par­si­mo­nious model can­not posit any state­ments that have no im­pli­ca­tions for your ob­ser­va­tions.

But I un­der­stand the spirit of your ques­tion: if they had causal im­pli­ca­tions for us, but we had no causal im­pli­ca­tions for them (im­ply­ing that we can ob­serve them and they can effect us, but they can’t ob­serve us and we can’t effect them) then I would cer­tainly care about what hap­pened to them.

But I still can’t factor them into any moral calculations, because my actions cannot effect them. The laws of the universe have rendered me powerless.

I can see the logic, but doesn’t the same ar­gu­ment ap­ply equally well in the “agree­ment with Bob” case?

and

True, but only nec­es­sary so that the par­ti­ci­pants can re­mem­ber be­ing the peo­ple they were out­side the simu­la­tion; I don’t think it’s fun­da­men­tal to any of the ar­gu­ments.

I’m not sure I follow these two statements; can you elaborate on what you mean?

• This is im­pos­si­ble. No causal in­ter­ac­tion means no ob­ser­va­tions. A par­si­mo­nious model can­not posit any state­ments that have no im­pli­ca­tions for your ob­ser­va­tions.

Wait, what?

So, I go about my life ob­serv­ing things, and one of the things I ob­serve is that ob­jects don’t tend to spon­ta­neously dis­ap­pear… they per­sist, ab­sent some force that acts on them to dis­rupt their per­sis­tence. I also ob­serve things con­sis­tent with there be­ing a light­speed limit to causal in­ter­ac­tions, and with the uni­verse ex­pand­ing at such a rate that the dis­tance be­tween two points a cer­tain dis­tance apart is in­creas­ing faster than light­speed.

Then Ge­orge gets into a space­ship and ac­cel­er­ates to near-light­speed, such that in short or­der Ge­orge has crossed that dis­tance thresh­old.

Which the­ory is more par­si­mo­nious: that Ge­orge has ceased to ex­ist? that Ge­orge per­sists, but I can’t causally in­ter­act with him? that he per­sists and I can (some­how) in­ter­act with him? other?

I still can’t fac­tor them into any moral calcu­la­tions be­cause my ac­tions can­not effect them

Sup­pose my cur­rent ac­tions can af­fect the ex­pected state of Ge­orge af­ter he crosses that thresh­old (e.g., I can put a time bomb on his ship). Does the state of Ge­orge-be­yond-the-thresh­old fac­tor into my moral calcu­la­tions about the fu­ture?

• Which the­ory is more parsimonious

That Ge­orge per­sists, but I can’t causally in­ter­act with him.

Sup­pose my cur­rent ac­tions can af­fect the ex­pected state of Ge­orge af­ter he crosses that thresh­old (e.g., I can put a time bomb on his ship). Does the state of Ge­orge-be­yond-the-thresh­old fac­tor into my moral calcu­la­tions about the fu­ture?

Yes.

My rule: “A par­si­mo­nious model can­not posit any state­ments that have no im­pli­ca­tions for your ob­ser­va­tions” has not been con­tra­dicted by my an­swers. The model must ex­plain your ob­ser­va­tion that a mem­ory of Ge­orge get­ting into that space­ship re­sides in your mind.

As to whether or not George disappeared as soon as he crossed the distance threshold...it’s possible, but the set of axioms necessary to describe the universe where George persists is more parsimonious than the set needed to describe the universe where George vanishes. Therefore, you should assign a higher probability to George persisting.

This is the solu­tion to the so called “Prob­lem” of In­duc­tion. “Things don’t gen­er­ally dis­ap­pear, so I’ll as­sume they’ll con­tinue not dis­ap­pear­ing” is just a spe­cial case of par­si­mony. Uni­verses in which the fu­ture is similar to the past are more par­si­mo­nious.

• I ba­si­cally agree with all of this.
So, when lmm in­vites us to sup­pose that the most par­si­mo­nious model that ex­plains our ob­ser­va­tions also im­plies the ex­is­tence of some peo­ple who we can’t causally in­ter­act with, is Ge­orge an ex­am­ple of what lmm is invit­ing us to sup­pose? If not, why not?

• Se­man­tics, per­haps.

I con­sid­ered things like Ge­orge’s mem­ory trace as an ex­am­ple of an “in­ter­ac­tion”, the same way as see­ing the moon­light is an “in­ter­ac­tion” with the moon de­spite the fact that the light I saw is ac­tu­ally from a past ver­sion of the moon and not the cur­rent one.

So maybe we were just us­ing differ­ent no­tions of what “causal in­ter­ac­tion” means? To me, “peo­ple we can’t causally in­ter­act with” means peo­ple who don’t cause any of our ob­ser­va­tions, in­clud­ing mem­ory-re­lated ones.

• So you would say that Ge­orge is not an ex­am­ple of what lmm is invit­ing us to sup­pose, be­cause we can causally in­ter­act with him, be­cause he caused a mem­ory?

I don’t think this is just se­man­tics. You are elid­ing the differ­ence be­tween causal re­la­tion­ships that ex­ist now and causal re­la­tion­ships that ex­isted only in the past, pre­sum­ably be­cause you don’t con­sider this differ­ence im­por­tant. But it seems like an im­por­tant differ­ence to me.

• it seems like an im­por­tant differ­ence to me.

You’re right, it is im­por­tant.

But in my defense, look at the original context:

There is some­thing very im­por­tant which dis­t­in­guishes re­al­ity from non-real math­e­mat­i­cal uni­verses—the fact that you can ob­serve it. The fact that it can in­ter­act with you.

In this con­text, it makes sense to con­sider gaps of space and time as ir­rele­vant. This idea is sup­posed to work no mat­ter what your ob­ser­va­tions are, even if space and time aren’t even in­volved.

If I know that A causes B and A causes C, and I ob­serve C, then I know that B is true.

We can agree to say that A, B, and C are all part of one causal net­work. That’s how I was think­ing of it. A and B are causally in­ter­act­ing. A and C are causally in­ter­act­ing. There­fore, C and B are causally in­ter­act­ing. If causal lines (in any di­rec­tion) con­nect C to B, then C and B are “causally in­ter­act­ing”. At this level of ab­strac­tion, we can even do away with causal­ity and just say that they are “in­ter­act­ing” within one sys­tem of log­i­cal state­ments.

That’s why Ge­orge’s mem­ory trace causally links me to Ge­orge.

A = Past Ge­orge.

B = Present George.

C = my mem­ory of Ge­orge.
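One way to make this zigzag notion precise is to treat causal links as edges of an undirected graph and call two things “causally interacting” when any path connects them. A minimal sketch (the node names are mine, standing in for A, B, and C above):

```python
from collections import deque

# Causal links, direction ignored: past George caused present George,
# and past George caused my memory of him.
edges = [("past_george", "present_george"),
         ("past_george", "my_memory")]

def interacting(x, y):
    """True if x and y are linked by some zigzag of causal edges."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    seen, queue = {x}, deque([x])
    while queue:
        node = queue.popleft()
        if node == y:
            return True
        for n in neighbors.get(node, ()):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return False

# C and B are "causally interacting" via A, even though my memory
# cannot affect present George.
assert interacting("my_memory", "present_george")
assert not interacting("my_memory", "harry_potter")  # no path at all
```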

Now that I’ve speci­fied what I mean by a causal in­ter­ac­tion, you can see why my an­swer to …

when lmm in­vites us to sup­pose that the most par­si­mo­nious model that ex­plains our ob­ser­va­tions also im­plies the ex­is­tence of some peo­ple who we can’t causally in­ter­act with, is Ge­orge an ex­am­ple of what lmm is invit­ing us to sup­pose? If not, why not?

...is no, since ev­i­dence for the ex­is­tence of some­thing must im­ply a causal in­ter­ac­tion by my defi­ni­tion.

It seemed like you in­ter­preted “causal in­ter­ac­tion” to be a syn­onym for “effect”. And un­der that defi­ni­tion, yeah, C can­not effect B.

Les­son learned: I shouldn’t make up words like “causal in­ter­ac­tion” and as­sume peo­ple know what is in my head when I say it. My mis­take was that I thought most peo­ple would con­sider the phrase “A and B are causally in­ter­act­ing” to im­plic­itly con­tain the in­for­ma­tion that causal in­ter­ac­tion is always a bidi­rec­tional thing, and in­fer my mean­ing ac­cord­ingly.

edit...

The whole idea I was cham­pi­oning is that in or­der to earn the la­bel “real”, some­thing must in­ter­act with you. In other words, it must be within the same log­i­cal sys­tem as you.

In other words,

If my ob­ser­va­tion is “C” and “not F”

then “F” can­not be real. “(E=>F)&E” can­not be real. “C” ab­solutely must be real. “A=>B&C” might be real. “A” might be real. “B” might be real. “A=>B” might be real. “A=>C” might be real. “E” might be real. “E=>F might be real. “W” might be real. So on and so forth, with par­si­mony as­sign­ing prob­a­bil­ities to each.
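That three-way classification (cannot / must / might be real, relative to the observations) can be checked mechanically: enumerate the truth assignments consistent with observing C and not-F, and test each candidate statement against them. The statement encodings below are my own, with “=>” read as material implication:

```python
from itertools import product

VARS = "ABCEFW"

def consistent(world):
    # Observations from above: "C" and "not F".
    return world["C"] and not world["F"]

worlds = [w for w in
          (dict(zip(VARS, vals)) for vals in product([False, True], repeat=6))
          if consistent(w)]

# Candidate statements as predicates ("=>" is material implication).
candidates = {
    "F":          lambda w: w["F"],
    "(E=>F) & E": lambda w: (not w["E"] or w["F"]) and w["E"],
    "C":          lambda w: w["C"],
    "A=>B&C":     lambda w: not w["A"] or (w["B"] and w["C"]),
    "E=>F":       lambda w: not w["E"] or w["F"],
    "W":          lambda w: w["W"],
}

def status(pred):
    truths = [pred(w) for w in worlds]
    if not any(truths):
        return "cannot be real"   # false in every consistent world
    if all(truths):
        return "must be real"     # true in every consistent world
    return "might be real"        # parsimony ranks these

assert status(candidates["F"]) == "cannot be real"
assert status(candidates["(E=>F) & E"]) == "cannot be real"
assert status(candidates["C"]) == "must be real"
assert status(candidates["E=>F"]) == "might be real"
assert status(candidates["W"]) == "might be real"
```

Parsimony would then assign probabilities over the “might be real” survivors, which the brute-force enumeration above deliberately leaves out.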

...That’s the defi­ni­tion of “real” that I was pri­mar­ily defend­ing. I con­sider what I just said to be op­posed to the pla­tonic math­e­mat­i­cal defi­ni­tion of re­al­ity that is be­ing pro­posed. The pla­tonic math­e­mat­i­cal defi­ni­tion of re­al­ity es­sen­tially pro­poses that two log­i­cal sys­tems which do not log­i­cally in­ter­act in any way can both be “real”, and that’s what I ob­ject to. Real­ity is defined as things which log­i­cally in­ter­act with me. There­fore if I ob­serve F, then not-F is false.

I do realize that this is mostly a semantic quibble over the meaning of “real”...but the underlying “things which logically interact with me” exists regardless of what we call it, and I feel like “real” is a label that should be reserved for that. Mathematicians have already taken “true”, which used to mean “real” (by my definition), and corrupted it to also mean “tautological”. The Set Of All Tautological Statements already has “tautological”, so can we please just reserve “real”? There are no other short words left to describe “the logical system that I am contained within”.

(Note: my somewhat exasperated tone concerning verbal appropriation is meant to be facetious.)

• It seemed like you in­ter­preted “causal in­ter­ac­tion” to be a syn­onym for “effect”. And un­der that defi­ni­tion, yeah, C can­not effect B.

Not just spel­ling fas­cism, I want to be sure I un­der­stand you cor­rectly: do you mean effect or af­fect?

We can agree to say that A, B, and C are all part of one causal net­work. That’s how I was think­ing of it. A and B are causally in­ter­act­ing. A and C are causally in­ter­act­ing. There­fore, C and B are causally in­ter­act­ing. If causal lines (in any di­rec­tion) con­nect C to B, then C and B are “causally in­ter­act­ing”. At this level of ab­strac­tion, we can even do away with causal­ity and just say that they are “in­ter­act­ing” within one sys­tem of log­i­cal state­ments.

So you’re con­sid­er­ing the re­gion that’s con­nected by any zigzag of causal events, in any di­rec­tion? We care about Bob’s daugh­ter who we never met? We care about her cousin who is now so far away that not only is she causally dis­con­nected from us, but also from Bob?

I can’t claim this is inconsistent, but it seems arbitrary. The category of people I can causally interact with (i.e. can affect and can be affected by) is a natural one, but I don’t see why I should regard someone who’s in a spacetime that used to be connected to mine but now isn’t (i.e. Bob) any differently from someone who’s in a parallel spacetime that’s never been connected to my own. There doesn’t seem to be any empirical-like distinction there.

• Not just spel­ling fas­cism, I want to be sure I un­der­stand you cor­rectly: do you mean effect or af­fect?

Er...I think it’s “effect”? I find it con­fus­ing—I think my cur­rent use falls within the ex­cep­tion to the noun-verb heuris­tic but I’m not sure.

=You in­ter­preted “causal in­ter­ac­tion” to be a syn­onym for “some­thing which causes an al­ter­a­tion in an­other thing”

=Alterations in C do not cause alterations in B.

So you’re con­sid­er­ing the re­gion that’s con­nected by any zigzag of causal events, in any di­rec­tion? We care about Bob’s daugh­ter who we never met? We care about her cousin who is now so far away that not only is she causally dis­con­nected from us, but also from Bob?

We con­sider them as real, yes.

I don’t see why I should re­gard some­one who’s in a space­time that used to be con­nected to mine but now isn’t (i.e. Bob) any differ­ently from some­one who’s in a par­allel space­time that’s never been con­nected to my own.

If the proposed parallel spacetime will one day be connected to your own, then it classifies as real but currently unknowable. Upon observing evidence of the newly connected spacetime, a rational agent would discard the most parsimonious hypothesis that it had held prior to the observation. This scenario can be summed up by the phrase “What if Russell’s Teapot is real after all?” (What would happen is that we’d admit that we were wrong before, but assert that we had no way of seeing it coming.)

If the pro­posed par­allel space­time will never be con­nected to your own, then it isn’t real.

• It sounds to me like you want “af­fect”.

To effect some­thing is to bring it about. (In other words: to cause it to come into be­ing; to put it into effect.) “I effected [pro­duced] an agree­ment be­tween the dis­putants.” “They sailed away with­out effect­ing [ac­com­plish­ing] their pur­pose.”

To af­fect some­thing is to in­fluence it. (To have an effect on it.) Note that, con­fus­ingly, the verb “af­fect” can be defined in terms of the noun “effect”.

• I touched on the flower. I in­fluenced the flower. I af­fected the flower. I had an effect on the flower.

I caused a com­mo­tion. I pro­duced a com­mo­tion. I effected a com­mo­tion.

Good?

To af­fect some­thing is to in­fluence it. (To have an effect on it.) Note that, con­fus­ingly, the verb “af­fect” can be defined in terms of the noun “effect”.

So “effect” describes a specific cause-effect chain, while “affect” describes the existence of some sort of cause-effect chain without specifying any particular one? (Overeating effects weight gain; diet affects weight.)

• “Affected the flower” and “effected a commotion” are right, but I think you’d be better off just banishing the verb “effect” from your vocabulary. It’s extremely uncommon, and I and other people associate it with pointy-haired bosses and bureaucrats.

(There is an­other un­re­lated verb us­age of effect used by mu­si­ci­ans: to effect a sig­nal is to pro­cess that sig­nal with an effect.)

• Agreed that the words are terrible as communication tools. Is there a good substitute that I can use to talk about causality?

• Ok, I think I understand your position. I maintain that it’s an unnatural distinction to draw—a universe that will be connected to ours in the future, or has been connected to ours in the past, isn’t empirically different from one that is and will always be disconnected from ours. Thought experiment: suppose at some point after Bob disappeared over the horizon, two copies of the present state of the universe start running in parallel—or, better, that there have always been two copies running in parallel. Although copy A and copy B happen to have coincident histories, there’s no causal connection between them and never has been, so to us in universe B, universe A isn’t “real” in your terminology, right (and let’s assume a quantum-mechanical collapse postulate applies, so after the “split” some random events start turning out differently in universes A and B, so you can tell whether you’re in one or the other)? But I assert that there’s no way for us to tell the difference between Bob-in-universe-A and Bob-in-universe-B.

(The other ex­am­ple I’ve thought of is pre­vi­ous/​sub­se­quent uni­verses in Pen­rose’s “Con­for­mal cyclic cos­mol­ogy”, but I don’t think there are any im­por­tant differ­ences from the cases we’ve already talked about).

• I main­tain that it’s an un­nat­u­ral dis­tinc­tion to draw—a uni­verse that will be con­nected to ours in the fu­ture, or has been con­nected to ours in the past, isn’t em­piri­cally differ­ent from one that is and will always be dis­con­nected from ours

Em­piri­cal: based on, con­cerned with, or ver­ifi­able by ob­ser­va­tion or ex­pe­rience rather than the­ory or pure logic.

A uni­verse that is to­tally dis­con­nected is un­ver­ifi­able by ob­ser­va­tion and ex­pe­rience. It lies in the realm of pure logic. It leaves no em­piri­cal traces.

Granted, there are also some pos­si­ble uni­verses that are log­i­cally con­nected and yet leave no em­piri­cal traces. (One ex­am­ple of this is the “Heaven” hy­poth­e­sis, which pos­tu­lates a place which is to­tally un­ob­serv­able at the pre­sent time. So our uni­verse has an effect on Heaven-verse, cre­at­ing a uni­di­rec­tional causal link… but Heaven has no effect on us. It’s the same with your ex­am­ple—the past has a uni­di­rec­tional causal link with var­i­ous pos­si­ble fu­tures.)

So yes, I bite the thing you regard as a bullet. There are not necessarily any empirical differences. I still think that when the common person says “reality”, they mean something closer to my definition—something with a causal interaction with you. That’s why people might say “heaven is real, despite the lack of evidence” or “Russell’s Teapot might be real, though it’s unlikely”, but they never say “Harry Potter is real, despite the lack of evidence” or “Set theory is real, despite the lack of evidence”.

All of these things can be represented as totally unobservable logical structures, but only the Heaven structure is proposed to interact with our universe—so only the Heaven structure is a hypothesis about reality. The rest are fantasy and mathematics.

(If you want em­piri­cism, I will say that the most par­si­mo­nious hy­poth­e­sis is strictly limited to choos­ing the small­est log­i­cal struc­ture which ex­plains all ob­serv­able things.)

Edit:

Oh cool—you’ve made me realize that my definition of reality implies random events create a universe for each option (so a stochastic coin flip creates a “heads” universe and a “tails” universe, both “real”, where real = “causal interaction in either direction”). I hadn’t explicitly recognized that yet. Thanks!

I think I’m ac­tu­ally fairly com­fortable with that. How­ever it does seem to run slightly con­trary to lay­man use of “re­al­ity” and I like to keep my rigo­rized defi­ni­tions of words as close as pos­si­ble to the un­ri­gor­ous lay­man’s us­age. I might be re­turn­ing with a slightly re­vised defi­ni­tion which tack­les some of the weird­ness sur­round­ing uni­di­rec­tional re­la­tion­ships. If I can’t find one, I bite the bul­let and ac­cept the di­ver­gence of my “re­al­ity” from lay­man’s “re­al­ity” via “uni­verses with ran­dom­ness have many real wor­lds, split­ting for each ran­dom event”. Doesn’t seem like too harsh of a bul­let though—lay­men’s defi­ni­tions aren’t always in­ter­nally con­sis­tent and do some­times col­lapse un­der rigo­riza­tion. If I find that I can’t wig­gle out of this, it does mean that I might have to think more about an­throp­ics and slightly al­ter the way I con­cep­tu­al­ize the re­la­tion­ship be­tween my “util­ity” func­tion and what I’ve been call­ing re­al­ity.

(I still think your on­tol­ogy of “all tau­tolo­gies are real” is even farther from lay­men’s on­tol­ogy and pos­si­bly makes moral­ity go all funny for the rea­sons de­scribed in my top post on the topic. Not sure whether you think dis­tance from lay­men’s defi­ni­tions is some­thing worth min­i­miz­ing, but figur­ing out how util­ity/​moral­ity works in your on­tol­ogy is im­por­tant)

• I still think that when the common person says “reality”, they mean something closer to my definition—something with a causal interaction with you. That’s why people might say “heaven is real, despite the lack of evidence” or “Russell’s Teapot might be real, though it’s unlikely”, but they never say “Harry Potter is real, despite the lack of evidence” or “Set theory is real, despite the lack of evidence”.

I try not to say “re­al­ity”—I don’t think laypeo­ple have an in­tu­ition about the case where we dis­agree—that is, re­gions that are causally dis­con­nected (in the sense of the rel­a­tivis­tic term of art—whose mean­ing ap­par­ently doesn’t al­ign with your in­tu­ition?) from us, but can be reached by some zigzag chain of causal paths. In the Heaven case there’s a one-di­rec­tional causal link, and in Rus­sell’s teapot case there’s a reg­u­lar causal con­nec­tion. Do peo­ple have an in­tu­ition about whether things that have fallen into a black hole, or over the cos­molog­i­cal event hori­zon, are “still real”?

That said, on some level you’re right; I do feel that Bob is “more real” than Harry Pot­ter. I think that’s just a func­tion of Bob’s uni­verse be­ing more similar to my own though. If Carol in an­other uni­verse has a mag­i­cal cross-uni­verse tele­porter and is think­ing about whether to visit our uni­verse, it seems wrong to say she’s more real now if the de­ci­sion she’s about to make is yes than if the de­ci­sion is no. (And the no­tion that she’s already con­nected to our uni­verse be­cause she has the choice, even if she never ac­tu­ally vis­its our uni­verse, feels equally sus­pect)

(Feel free to stop re­ply­ing if I’m get­ting repet­i­tive, and thanks for the dis­cus­sion so far in any case)

I still think your on­tol­ogy of “all tau­tolo­gies are real” is even farther from lay­men’s on­tol­ogy and pos­si­bly makes moral­ity go all funny for the rea­sons de­scribed in my top post on the topic.

I agree; I’ve never felt happy with the simu­la­tion ar­gu­ment in any form, and try­ing to chase through its more ex­treme im­pli­ca­tions was as much about hop­ing to find a con­tra­dic­tion as about ex­plor­ing things that I thought were true. Like I’ve said, I’m hope­ful that a good the­ory of an­throp­ics will dis­solve these ques­tions.

• I try not to say “re­al­ity”

Now, that con­fuses me. I thought your post was largely about defin­ing re­al­ity. Isn’t the topic un­der dis­cus­sion largely what the ap­pro­pri­ate way to define re­al­ity is? Isn’t the very premise of pla­tonic re­al­ism that all tau­tolo­gies are real?

• Hmm, you’re right. Maybe I just ob­ject to “re­al­ity” be­cause it im­plies a unique­ness that I don’t think is jus­tified.

• My philosophy on words is this:

We often use words (soul, free will, etc.) to define ideas that aren’t well defined. Sometimes, on rigorous inspection, those ideas turn out to be nonsensical. This leaves us with two options:

1) Discard the words altogether

2) Re-define the words so as to get as close as possible to the original meaning, while maintaining self-consistency. (See Eliezer’s posts on “free will” for an example of this which is carried out, I believe, successfully.)

I generally opt for (2) in the cases where the underlying concept being described has some sort of value and there is no other word that quite tackles it.

I maintain that “reality” is one of those words for which the underlying concept is valuable and un-described by any other word. I remain unsure of whether or not the laymen’s intuitive definition of “reality” is logically consistent. I’ll continue trying to find a rigorous definition that completely captures the original intuition and nothing more. If I end up giving up I’ll have to opt for (2) or (1)... If, under the closest definition, probabilistic-many-world-splitting turns out to be the only “weird-to-normal-people” consequence of changing the definition, then I’m okay with picking (2), since at least the practical consequences add up to normality.

I’d choose option (1) and abolish “reality” altogether, though, before I let it be turned into a synonym for “tautology”. That’s just too far from the original intuition to be a useful verbal label, and we already have “tautology” anyhow. Plus, the practical consequences do not seem to add up to normality at all.

• (nods slowly)
Yeah, OK, point accepted. I had lost track of the original context… my bad.
Thanks for your patience.

• This is impossible. No causal interaction means no observations. A parsimonious model cannot posit any statements that have no implications for your observations.

TheOtherDave’s already covered this part

I’m not sure I follow these two statements—can you elaborate what you mean?

Second one first:

The only reason we need to assume the simulation is identical to the outer universe is so that our protagonists’ memory is consistent with being in either. The only reason this is a difficulty at all is because the protagonists need to remember arranging a simulation in the outer universe for the sake of the story, as that’s the only reason they suspect the existence of simulated universes like the one they are currently in.

If the protagonists have some other (magical, for the moment) reason to believe that a large number of universes exist and most of those are simulated in one of the others, it doesn’t matter if the laws of physics differ between universes—I don’t think that’s essential to any of the other arguments (unless you want to make an anthropic argument that a particular universe is more or less likely to be simulated than average because of its physical laws).

Now for my first statement.

Your argument as I understood it is: Even if the most parsimonious explanation of our observations necessitates the existence of an “outer” universe and a large number of simulated universes inside it, it is still more parsimonious to assume that we are in the “outer” universe.

My response is: doesn’t this same argument mean that we should accept Bob’s bet in my example (and therefore lose in the vast majority of cases)?

• See the response to TheOtherDave

Your argument as I understood it is: Even if the most parsimonious explanation of our observations necessitates the existence of an “outer” universe and a large number of simulated universes inside it, it is still more parsimonious to assume that we are in the “outer” universe.

Then there has been a miscommunication at some point. If you rephrase that as:

“Even if the most parsimonious explanation of our observations necessitates the existence of an ‘outer’ universe and a large number of simulated universes inside it, it is still sometimes more parsimonious to assume that we are in the ‘outer’ universe.”

Then you’d be right. The fact that we have the capacity to simulate a bunch of universes ourselves doesn’t in-and-of-itself count as evidence that we are being simulated. My argument is more or less identical to V_V’s in the other thread.

(unless you want to make an anthropic argument that a particular universe is more or less likely to be simulated than average because of its physical laws)

I would agree with that statement. If our universe turns out to have a ridiculously complex set of laws, it might actually be more parsimonious to posit an Outer Universe with much simpler laws which gave rise to beings which are simulating us. (In the same way that describing the initial conditions of the universe is probably a shorter message than describing a human brain.)

• Considering all mathematical structures equally “real” makes the concept of “reality” lose all meaning.

I agree, and I’d like to offer an additional argument. Mathematical objects exist. Almost no one would deny that, for example, there is a number between 7,534,345,617 and 7,534,345,619. Or that there is a Lie group with such-and-such properties. What distinguishes Tegmark’s claims from these unremarkable statements? Roughly this: Tegmark is saying that these mathematical objects are physically real. But on his own view, this just amounts to saying that mathematical objects are mathematical objects. Yeah yeah Tegmark, mathematical objects are mathematical objects, can’t dispute that, but don’t much care. Now I’ll turn my attention back to tangible matters.

Tegmark steals his own thunder.

• I think Tegmark’s level 1–4 taxonomy is useful. Strip it of physics and put it to philosophy:

Lv 1) What we can observe directly (qualia)

Lv 2) What we can’t observe, but could be (Russell’s teapot)

Lv 3) What we can’t observe, but we know might have happened if chance played out differently (many-worlds)

Lv 4) Mathematical universes.

These are distinct concepts. The question is, where and how do you draw a line and call it reality? (I say that we can’t include 4, nor can we only include 1. We either include 1, 2 or 1, 2, 3... preferably the former.)

• I took the portion of your comment I quoted to be about level 4 only. Anyway, that is where my comment is aimed, at agreeing that we can’t include 4.

• I’m currently at the conclusion it’s not the same, because an “artificial universe” within a simulation can still interact with the universe. The simulation can influence stuff outside the simulation, and stuff outside the simulation can influence the simulation.

Yeah, but unmodified simulations are the same, whereas modified simulations diverge. The fact that something from the outside interacted with the simulation means that it’s just one distinguishably-different one out of many. Purely statistically speaking, we’d expect not-screwed-with universes to form the biggest probability block by far.

• I’m not quite sure what you mean. Would you mind rephrasing or elaborating?

• The evolution of a universe that’s not being influenced by its host universe is determined by its initial state. However, any interaction of a host universe with the nested universe adds bits to its description. Therefore, even if we’d numerically expect most host universes to screw with their child universes somehow (which still isn’t a given!), they’ll all screw with them in different ways, whereas the unscrewed-with ones will all look the same. Thus, while most universes may be screwed-with, the set of unscrewed-with universes is still the biggest subset.
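That claim can be made concrete with a toy sketch (the step rule and the interventions here are invented purely for illustration, not any real physics): untouched runs from the same initial state coincide exactly, while every distinct intervention list is extra description that yields a distinguishably different history.

```python
# Toy model: a nested universe's history is fixed by its initial state
# plus a dictionary of host interventions (extra bits of description).
def run(initial, steps, tweaks=None):
    """tweaks maps a time index to a function the host applies to the state."""
    tweaks = tweaks or {}
    state = initial
    history = []
    for t in range(steps):
        if t in tweaks:            # the host "screws with" the simulation
            state = tweaks[t](state)
        state = 3 * state + 1      # stand-in physics rule
        history.append(state)
    return history

# Two untouched runs from the same initial state coincide exactly...
assert run(1, 5) == run(1, 5)
# ...while any intervention produces a distinguishably different history.
assert run(1, 5, {2: lambda s: 0}) != run(1, 5)
```

Distinct tweak dictionaries generically give distinct histories, which is the sense in which the unscrewed-with runs all “look the same” while the screwed-with ones scatter.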

• However, any interaction of a host universe with the nested universe adds bits to its description

No, you can subtract information from things. Edge case: what if the host just replaces every bit in the hard drive with all 0s?

the set of unscrewed-with universes is still the biggest subset.

In what? The platonic mathematical space? Or the subset of universes that a given host universe simulates?

I think I do get your meaning, but it doesn’t seem very well defined...

• No, you can subtract information from things.

Of course you can end up with a state that has a lower minimal description length. However, almost any interaction is gonna end up adding bits.

In what? the platonic mathematical space?

Yes, and yes this is very ill-defined, and yes it’s not clear why the set size should matter, but the simulation argument rests on the very same assumption—some kind of equal anticipation prior over causes for our universe? So if you already accept the premise that universe counting should matter for the simulation argument, you can just reuse that for the “anticipate being in the unscrewed-with universe” argument. (Shouldn’t you anticipate being in a screwed-with universe, even if you don’t know in which way it’d be screwed with? Hm. Is this evidence that most hosts end up not screwing with their sims?)

• If we’re only talking about the platonic mathematical space, then why does it matter what hosts do or do not do to their simulations?

The entire thing (host and simulation) is one interacting mathematical unit. There might also be a mathematical unit that represents the simulation, independently of the host, but we can count that separately.

There are an infinite number of mathematical structures that could explain your observations. An infinite number of those involve simulations, and an infinite number of them don’t involve simulations. Of the ones that involve simulations, an infinite number of them are “screwed” with and an infinite number are “unscrewed”.

So, if we want to choose a model where everything in the platonic mathematical space is “real” (on one level I want to condemn this as literally the most un-parsimonious model of reality, and on another level I’ll just say that you have defined reality in a funny way and it’s just a semantic distinction), and then we want to figure out where within this structure we are, using the rule that “the likelihood of a statement concerning our location being true corresponds to the number of universes in which it is true and which also fit our other observations”, then we have to find a way of comparing infinities.

And that’s what you’re doing—comparing infinities. So… what mechanism are you proposing for doing so?

• I don’t know, but the fact that out of an infinity of possible universes we’re practically in the single-digit integers has to mean something. Ask a genie for a random integer and you’d be surprised if it ever finished spitting out numbers in the lifetime of the universe; for it to stop after a few minutes of talking would be absurd. So either we’re vastly wrong about the information-theoretic complexity of our universe, or the seeming simplicity of its laws is due to sampling bias, or MU is wrong and this universe really just happens to exist for no good answerable reason, or there’s a ludicrous coincidence at work, or there has to be some reason why we are more likely to find ourselves in a universe at the start of the chain, whose hosts are not visibly screwing with it. The point is to add up to normality, after all.
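One way to make the genie intuition concrete (a sketch under an assumed prior, not anything claimed in the thread): a uniform distribution over all integers doesn’t exist, but a complexity-weighted prior giving an n-bit description probability 2^(-n) does, and under it almost all the mass sits on very short descriptions.

```python
# Under the (assumed) prior P(length = n) = 2**(-n) for n = 1, 2, ...,
# the weights sum to 1 and short descriptions dominate overwhelmingly.
def p_len(n):
    """Probability assigned to descriptions exactly n bits long."""
    return 2.0 ** (-n)

def p_at_most(k):
    """Probability that a sampled description is at most k bits long."""
    return sum(p_len(n) for n in range(1, k + 1))

# A "random" description under this prior is almost surely short:
assert p_at_most(10) > 0.999
assert abs(p_at_most(10) - (1 - 2.0 ** (-10))) < 1e-12
```

So if universes were weighted by description length rather than counted uniformly, finding ourselves at low complexity would be the expected outcome rather than a ludicrous coincidence—though justifying that weighting is exactly the open question here.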

• Mostly, my thought is that “there probably exist real people out there somewhere, and we are probably not among them; we are probably mere simulations in their world” doesn’t seem equivalent to “what it means to be a real person, or a real anything, is to be a well-defined abstract computation that need not necessarily be instantiated” (aka Dust theory, as has been said).

That said, I can’t really imagine why I would ever care about the difference for longer than it takes to think about the question.

Sure, the former feels more compelling because it’s framed as a status challenge, but if I do anything more than just superficially pattern-match it, that pretty much dissolves… I have to be a lot more important than I am, relatively speaking, before the social status of my entire universe becomes a relevant consideration in my status calculations.

(To be clear, I am speaking solely for myself here. I do recognize that some folks here view themselves, individually, as important to the future development of our universe, and I can see how for those people the status of our universe as a whole might be an important consideration, and I’m not challenging that; I’m just asserting that I don’t view myself as that important, and I believe I’m correct in that evaluation.)

• Our present civilization is likely to reach the point where it can simulate a universe reasonably soon

I don’t know about that; it seems unlikely to me. A future civilization simulating us requires a) tons of information about us, which is likely to be irreversibly lost in the meantime, and b) enough computing power to simulate at a sufficiently fine level of detail (i.e. if it’s a crude approximation, it will diverge from what actually happened pretty fast). Either of those alone looks like it makes simulating current-Earth unfeasible.

But my main reaction to the simulation argument (even assuming it’s possible) is “so what?”. Are there any decisions I would change if I knew I might be being simulated?

• A future civilization simulating their own ancestors would require a lot of information about them, possibly impossibly-hard-to-get amounts. You’re right about that.

So what? They could still simulate some arbitrary, fictional pre-singularity civ. There is no guarantee whatsoever, if we’re part of a simulation, that we were ever anything else.

• But my main reaction to the simulation argument (even assuming it’s possible) is “so what?”. Are there any decisions I would change if I knew I might be being simulated?

Possible ethical position: I care about the continued survival of humanity in some form. I also care about human happiness in some way that avoids the repugnant conclusion (that is, I’m willing to sacrifice some proportion of unhappy lives in exchange for making the rest of them much happier). I am offered the option of releasing an AI that we believe with 99% probability to be Friendly; this has an expectation of greatly increasing human happiness, but carries a small risk of eliminating humanity in this universe. If I believe I am not simulated, I do not release it, because the small risk of eliminating all humanity in existence is not worth taking. If I believe I am simulated, I release it, because it is almost surely impossible for this to eliminate all humanity in existence, and the expected happiness gain is worth it.
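That position can be written as a small expected-utility sketch. All the utility numbers below are hypothetical stand-ins; only the 99% Friendliness probability comes from the comment.

```python
P_FRIENDLY = 0.99       # from the comment: 99% believed probability of Friendliness
U_STATUS_QUO = 0.0      # hypothetical baseline utility
U_FRIENDLY = 100.0      # hypothetical gain from a Friendly AI
U_EXTINCTION = -1e6     # hypothetical loss if ALL humanity is eliminated

def release_utility(we_are_real):
    """Expected utility of releasing the AI.

    If we are real (not simulated), failure eliminates all humanity in
    existence; if we are simulated, humanity survives outside our universe,
    so failure only costs us this universe's status quo.
    """
    downside = U_EXTINCTION if we_are_real else U_STATUS_QUO
    return P_FRIENDLY * U_FRIENDLY + (1 - P_FRIENDLY) * downside

# Not simulated: the extinction tail risk dominates, so don't release.
assert release_utility(True) < U_STATUS_QUO
# Simulated: the tail risk vanishes and the expected gain wins, so release.
assert release_utility(False) > U_STATUS_QUO
```

The decision flips purely on the size of the downside term, which is the whole point: believing oneself simulated changes what “eliminating all humanity” costs.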

• Modern philosophy is just a set of notes on the margins of Descartes’ “Meditations”.

• Modern philosophy is just a set of notes on the margins of Descartes’ “Meditations”.

That is the most damning criticism of philosophy I have ever seen.

• (1) It’s totally tongue-in-cheek. (2) By “modern” I don’t mean “contemporary”, I mean “from Descartes onwards”. (3) By “notes” I mean criticisms. (4) The point is that I see responses to the simulation (a.k.a. Daemon) argument recurring in philosophy.

• (3) By “notes” I mean criticisms.

Ahh, that one makes a difference in connotation. There certainly seems to be more of that than I would judge worthwhile.

• The Numerical Platonist’s construct is just the universe itself again. No problem there.

If you’re not a numerical platonist, I don’t see how unexecuted computations could be experienced.

And that leaves us with regular simulation.

(Incidentally, point 6 has a hidden assumption about the distribution of simulated universes)

• The Numerical Platonist’s construct is just the universe itself again. No problem there.

Why? If it’s just because the computations come out the same, doesn’t that mean any simulation of the universe is also just the universe itself again?

• Technically we are already running a perfect simulation of a universe literally indistinguishable from our own.

The fact that such a simulation is indistinguishable means that we should be ambivalent about whether it is simulated or not—however, simulations which we run ARE distinguishable from our reality, in the same sense that a Gödel statement is true, even if the difference is not apparent from within the simulation.

• The fact that such a simulation is indistinguishable means that we should be ambivalent about whether it is simulated or not—

Does that necessarily follow? Should we necessarily be ambivalent about e.g. events in any other inflationary bubble (i.e. in star systems that have become causally disconnected from our own)?

• To your first question: Yes. If something has one of two characteristics, but no information that we can (even theoretically) acquire allows us to determine which of those is true, then it is not meaningful to care about which one is true. Dropping to the object level, it would be contradictory to have a simulation which accepted as input ONLY a set of initial conditions, but developed sentient life that was aware of you.

To your second question: “star systems that have become causally disconnected from our own” are distinguishable from our own. I’ll answer the question “Should we necessarily be ambivalent about things which we cannot even theoretically interact with?” as a general case.

Utilitarian: Yes. (It has no effect on us.)
Consequentialist: Yes. (We have no effect on them.)
Social Contract: Only if we don’t have a deal with them.
Deist: Only if God says so.
Naive: Yes; I can’t know what they are, so I can’t change my decisions based on them.

What theory of ethics or decision has a non-trivial answer?

• It seems like we could reasonably have a utility function that assigns more or less value to certain actions depending on things we can’t causally interact with. E.g. a small risk of wiping out all humanity within our future light cone would, I think, be less of a negative if I knew there was a human colony in a causally disconnected region of the universe.

• How much less? What’s the asymptote (of the ratio) as the number of human colony ships that have exited the light cone approaches infinity?

ETA: Also, that scenario moved the goalposts again. The question was “Should we consider those hypothetical colonists’ opinions when deciding to risk destroying everything we can?”

• I don’t have a ratio; it’s more that I attach an additional (fixed) premium to killing off the entire human race, on top of the ordinary level of disutility I assign to killing each individual human.

(NB: I’m trying to phrase this in utilitarian terms, but I don’t actually consider myself a utilitarian; my true position is more what seems to be described as deontological?)

• So you attach some measure of utility to the statement ‘Humanity still exists’, and then attach a probability to humanity existing outside of your light cone based on the information available; if humanity is 99% likely to exist outside of the cone, then the additional disutility of wiping out the last human in your light cone is reduced by 99%?

And the disutility of genocide and mass slaughters short of extinction remains unchanged?

• The problem with mathematical realism (which, btw, see also) is that it’s challenging to justify the simplicity of our initial state—Occam is not a fundamental law of physics, and almost all possible universe-generating laws are unfathomably large. You can sort of justify that by saying “even universes with complicated initial states will tend to simulate simple universes first”, but that just leaves you asking why the number of simulations should matter at all. (I don’t have a good answer to that; if you find one, I’d love it if you could tell me.)

• Like I say, I think a good theory of anthropics is the best hope for this. Under UDASSA it’s “obvious” that one would be most likely to find oneself in a simple universe—though that may just be begging the question, as I’m not aware of a justification for using a complexity measure in UDASSA.

• Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels)

Why this fascination with Haskell? It seems more like a toy, or an educational tool, or at the very best a tool for highly specialized research, but pretty surely not suitable for any large-scale programming.

• Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used.

In that case, why does the simulation need to be running all the time? Wouldn’t one just ask the fancy, lambda-derived software to render whatever specific event one wanted to see?

If on the other hand whole_universe_from_time_immemorial() needs to execute every time, which of course assumes a loophole gets found to infinitely add information to the host universe, then presumably every possible argument (which includes the program’s own code—itself a constituent of the universe being simulated) would be needed by the function anyway, so why not strict evaluation?

And both of these cases still assume we handle time in a common-sense fashion. According to relativity, time is intertwined with the other dimensions, and these dimensions in turn are an artifact of our particular universe, distinctive characteristics created at the Big Bang along with everything else. Therefore, it then seems likely give_me_the_whole_universe() would have to execute everything at once—more precisely, would have to execute outside of time—to accurately simulate the universe (or simulation thereof) we observe. Even functional programming has to carry out steps one after the other, requiring a universe with a time dimension, even if the logic to this order is different from that of traditional imperative paradigms.
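The lazy-evaluation point can be sketched with a generator (Haskell gets this behavior by default; the step rule and states here are toy stand-ins): the whole history is defined up front, but a state is only computed when an observation demands it.

```python
from itertools import islice

def history(initial):
    """Lazily yield universe states; nothing runs until a state is demanded."""
    state = initial
    while True:                  # the whole (infinite) history, defined but never fully computed
        yield state
        state = 2 * state + 1    # stand-in for the physics rule

def observe(t, initial=0):
    """Render a specific event: forces computation only up to time t."""
    return next(islice(history(initial), t, None))

assert observe(0) == 0
assert observe(5) == 31   # 0 -> 1 -> 3 -> 7 -> 15 -> 31
```

Note that observing time t still forces every earlier state it causally depends on, so laziness only saves work on events nobody ever asks about.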

• In that case, why does the simulation need to be running all the time? Wouldn’t one just ask the fancy, lambda-derived software to render whatever specific event one wanted to see?

Indeed we would. If you believe we are such a simulation, that implies the simulator is interested in some event that causally depends on today’s history. I don’t think this matters though.

And both of these cases still assume we handle time in a common-sense fashion. According to relativity, time is intertwined with the other dimensions, and these dimensions in turn are an artifact of our particular universe, distinctive characteristics created at the Big Bang along with everything else. Therefore, it then seems likely give_me_the_whole_universe() would have to execute everything at once—more precisely, would have to execute outside of time—to accurately simulate the universe (or simulation thereof) we observe. Even functional programming has to carry out steps one after the other, requiring a universe with a time dimension, even if the logic to this order is different from that of traditional imperative paradigms.

Causality is preserved under relativity, AIUI. You may not necessarily be able to say absolutely whether one event happened before or after another, but you can say what the causal relation between them is (whether one could have caused the other, or they are spatially separated such that neither could have caused the other). So there is no problem with using naive time in one’s simulations.

Are you arguing that a simulatable universe must have a time dimension? I don’t think that’s entirely true; all it means is that a simulatable universe must have a non-cyclic chain of causality. It would be exceedingly difficult to simulate e.g. the Gödel rotating universe. But a universe like our own is no problem.

• If just the conceptual possibility of the universe is enough to experience it, as some have suspected to be the case, you still have to consider the possibility that the part of the universe you’re conceptually in is a simulation inside of another conceptual universe.

Looking at it from another angle, I’m pretty sure we all accept that our minds are running on computers known as human brains, and we don’t just experience the conceptual possibility of that brain. Mind you, the entire universe might just be some kind of conceptual possibility, but there is a conceptual universe out there that our minds are running on a tiny part of. Once you accept this, it would seem hypocritical to reject the possibility of another layer of conceptual computation out of hand.

In short, just because we’re in a mathematical construct doesn’t mean that we’re not part of a simulation within that mathematical construct. The simulation argument and the universe being a mathematical construct are not mutually exclusive.

But if all that’s required for a simulation is a mathematical form for the true laws of physics, and knowledge of some early state of the universe

Why would you even need that much? If we’re just talking about the mathematical idea of this universe, it exists whether or not we know how to define it. It’s not inconsistent to say that someone defining but not necessarily calculating the math is the necessary and sufficient condition for us to experience it, but I don’t see why you’d draw the line there.

• In short, just because we’re in a mathematical construct doesn’t mean that we’re not part of a simulation within that mathematical construct. The simulation argument and the universe being a mathematical construct are not mutually exclusive.

Sure, but if anything it seems like they both apply—we are overwhelmingly likely to be simulated humans in a mathematical-construct universe.

Why would you even need that much? If we’re just talking about the mathematical idea of this universe, it exists whether or not we know how to define it. It’s not inconsistent to say that someone defining but not necessarily calculating the math is the necessary and sufficient condition for us to experience it, but I don’t see why you’d draw the line there.

I was trying to make it clear where the tradeoff with mathematical Platonism is. If you believe mathematical things exist eternally, or exist when defined, or exist when explicitly calculated, that affects what limit you have to place on human civilization’s achievements (and if you’re a straight-up Platonist then you can’t make this objection at all, because, as you say, the idea of the universe already exists).

• I think Can You Prove Two Particles Are Identical? explains the difference between the possibilities here very well: What is the difference? We cannot assume there is a difference simply for the sake of asking what the difference is. Though if you must, I should hope you’re well aware of your assumption.