Simulations Map: what is the most probable type of the simulation in which we live?

There is a chance that we may be living in a computer simulation created by an AI or a future super-civilization. The goal of the simulations map is to give an overview of all possible simulations. It helps us to estimate the distribution of the different types of simulations, along with their measure and probability. This in turn helps us to estimate the probability that we are in a simulation and, if we are, what kind of simulation it is and how it could end.

Simulation argument

The simulation map is based on Bostrom’s simulation argument. Bostrom showed that “at least one of the following propositions is true:

(1) the human species is very likely to go extinct before reaching a “posthuman” stage;

(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);

(3) we are almost certainly living in a computer simulation”. http://www.simulation-argument.com/simulation.html

The third proposition is the strongest one, because (1) requires that not only human civilization but almost all other technological civilizations go extinct before they can begin running simulations, since non-human civilizations could model human ones and vice versa. This makes (1) an extremely strong universal conjecture and therefore very unlikely to be true. It requires that all possible civilizations kill themselves before they create AI, but we can hardly imagine such a universal course of events. If destruction comes from dangerous physical experiments, some civilizations may live in universes with different physics; if it comes from bioweapons, some civilizations would have enough control to prevent them.

In the same way, (2) requires that all super-civilizations with AI will refrain from creating simulations, which is unlikely.

Conceivably there could be some kind of universal physical law against the creation of simulations, but such a law is impossible, because some kinds of simulations already exist, for example human dreaming. During dreaming, very precise simulations of the real world are created, which cannot be distinguished from the real world from within (that is why lucid dreams are so rare). So we could conclude that, after small genetic manipulations, it should be possible to create a brain that is ten times more capable of creating dreams than an ordinary human brain. Such a brain could be used for the creation of simulations, and a strong AI will surely find even more effective ways of doing so. So simulations are technically possible (and qualia are no problem for them, as we have qualia in dreams).

Any future strong AI (regardless of whether it is FAI or UFAI) should run at least several million simulations in order to solve the Fermi paradox and to calculate the probability of the appearance of other AIs on other planets, and their possible and most typical goal systems. AI needs this in order to calculate the probability of meeting other AIs in the Universe and the possible consequences of such meetings.

As a result, the a priori estimate of me being in a simulation is very high, possibly 1,000,000 to 1. The best chance of lowering this estimate is to find some flaw in the argument; possible flaws are discussed below.
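
Purely as an illustration of how such odds translate into a probability, here is a minimal sketch; the observer counts, the number of simulations per AI, and the survival probability are placeholder assumptions of mine, not figures from the map:

```python
# Toy odds calculation for the simulation argument.
# All numbers below are illustrative placeholders, not estimates from the map.

real_histories = 1                 # our history, counted once as "real"
simulations_per_ai = 1_000_000     # assumed: a future AI runs ~10^6 ancestor simulations
p_reach_ai = 0.5                   # assumed: chance a civilization survives to build such an AI

expected_simulated_histories = real_histories * p_reach_ai * simulations_per_ai

# If all observers are weighted equally, P(I am simulated) is the share of
# simulated histories among all histories containing observers like me.
p_simulated = expected_simulated_histories / (expected_simulated_histories + real_histories)
print(f"P(simulated) = {p_simulated:.6f}")   # ~0.999998 with these placeholder numbers
```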

Most abundant classes of simulations

If we live in a simulation, we are going to be interested in knowing the kind of simulation it is. Probably we belong to the most abundant class of simulations, and to find it we need a map of all possible simulations; an attempt to create one is presented here.

There are two main reasons why one type of simulation may dominate: goal and price. Some goals require the creation of a very large number of simulations, so such simulations will dominate. Cheaper and simpler simulations are also more likely to be abundant.

Eitan_Zohar suggested (http://lesswrong.com/r/discussion/lw/mh6/you_are_mostly_a_simulation/) that an FAI will deliberately create an almost infinite number of simulations in order to dominate the total landscape and to ensure that most people find themselves inside FAI-controlled simulations, which would be better for them, as in such simulations unbearable suffering can be excluded. (If an almost infinite number of FAIs exist in an infinite world, each of them alone could not change the landscape of simulation distribution, because its share of all simulations would be infinitely small. So we would need acausal trade between an infinite number of FAIs to really change the proportion of simulations. I can’t say that this is impossible, but it may be difficult.)

Another possible largest subset of simulations is the one created for leisure and for the education of some kind of high-level beings.

The cheapest simulations are simple, low-resolution, and me-simulations (one real actor, with the rest of the world around him as a backdrop), similar to human dreams. I assume here that simulations are distributed according to the same power law as planets, cars and many other things: smaller and cheaper ones are more abundant.
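
A minimal sketch of that power-law assumption (the exponent and the "cheap" threshold below are arbitrary choices for illustration only):

```python
# If simulation "size" follows a power law (Pareto-like), small, cheap
# simulations make up most of the population. The exponent is an arbitrary assumption.
import random

random.seed(0)
alpha = 2.0                                   # assumed power-law exponent
sizes = [random.paretovariate(alpha) for _ in range(100_000)]

cheap = sum(1 for s in sizes if s < 2.0)      # "cheap" = less than twice the minimum size
print(cheap / len(sizes))                     # ~0.75: small simulations dominate
```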

Simulations could also be layered on one another in so-called Matryoshka simulations, where one simulated civilization is simulating other civilizations. The lowest level of any Matryoshka system will be the most populated. If it is a Matryoshka of historical simulations, the simulation levels in it will run in descending time order: for example, a 24th century civilization models a 23rd century one, which in turn models a 22nd century one, which itself models a 21st century simulation. A simulation in a Matryoshka will end at the level where creation of the next level is impossible. Simulations of the beginning of the 21st century will be the most abundant class in Matryoshka simulations (similar to our own time period).
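
A minimal sketch of why the deepest level dominates, assuming (purely for illustration) that every simulated civilization runs a few simulations of the preceding century:

```python
# Count simulations per level of a Matryoshka stack.
branching = 3    # assumed: each simulated civilization runs 3 simulations of the previous century
levels = 4       # e.g. 24th -> 23rd -> 22nd -> 21st century

counts = [branching ** depth for depth in range(1, levels + 1)]
for depth, n in enumerate(counts, start=1):
    print(f"level {depth}: {n} simulations")

# The deepest (earliest-century) level holds most of the simulated observers:
print(counts[-1] / sum(counts))   # 81/120 = 0.675 for these assumptions
```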

Arguments against simulation theory

There are several possible objections to the simulation argument, but I do not find them strong enough to refute it.

1. Measure

The idea of measure was introduced to quantify the extent of existence of something, mainly in quantum universe theories. While we don’t know how to actually measure “the measure”, the idea is based on the intuition that different observers have different powers of existence, and as a result I could find myself to be one of them with different probabilities. For example, if there are three functional copies of me, one of which is the real person, another a hi-res simulation and the third a low-res simulation, are my chances of being each of them equal (1/3)?

The “measure” concept is the most fragile element of all simulation arguments. It is based mostly on the idea that all copies have equal measure. But perhaps measure also depends on the energy of calculations. If we have a computer which uses 10 watts of energy to calculate an observer, it may be presented as two parallel computers which use five watts each. These observers may be divided again until we reach the minimum amount of energy required for calculation, which could be called a “Planck observer”. In this case our initial 10-watt computer would be equal to, for example, one billion Planck observers.

And here we see a great difference in the case of simulations, because simulation creators have to spend less energy on calculations (otherwise it would be easier to run real-world experiments). But in this case such simulations will have a lower measure. Still, if the total number of simulations is large enough, the total measure of all simulations will be higher than the measure of real worlds. But if most real worlds end in global catastrophe before running simulations, the result would be a higher proportion of real worlds, which could outweigh the simulations after all.
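
A minimal sketch of this energy-weighted comparison, under my illustrative assumption that an observer’s measure is proportional to the watts spent computing it; all figures are placeholders:

```python
# Compare total measure of real vs. simulated observers when measure ~ energy used.
# All figures are illustrative assumptions, not estimates from the map.

watts_per_real_observer = 10.0        # assumed energy cost of one "real" observer
watts_per_sim_observer = 0.001        # assumed: simulated observers are computed far more cheaply
real_observers = 1
sim_observers = 1_000_000             # assumed number of simulated copies

real_measure = real_observers * watts_per_real_observer
sim_measure = sim_observers * watts_per_sim_observer

# Even with a 10,000x per-observer energy discount, sheer numbers can still
# make the simulated measure dominate:
print(sim_measure / (sim_measure + real_measure))   # ~0.99 for these numbers
```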

2. Universal AI catastrophe

One possible universal global catastrophe could be that a civilization develops an AI-overlord, but any AI meets some kind of unresolvable mathematical and philosophical problems which terminate it at its early stages, before it can create many simulations. See an overview of this type of problem in my map “AI failure modes and levels”.

3. Universal ethics

Another idea is that all AIs converge to some kind of ethics and decision theory which prevents them from creating simulations, or leads them to create only p-zombie simulations. I am skeptical about this.

4. Infinity problems

If everything possible exists, or if the universe is infinite (which are equivalent statements), the proportion between two infinite sets is meaningless. We could overcome this objection using the idea of a mathematical limit: if we take bigger and bigger regions of the universe and longer periods of time, simulations become more and more abundant within them.

But in any case, in an infinite universe any world exists an infinite number of times, and this means that my copies exist in real worlds an infinite number of times, regardless of whether I am in a simulation or not.

5. Non-uniform measure over the Universe (actuality)

Contemporary physics is based on the idea that everything that exists, exists in an equal sense, meaning that the Sun and very remote stars have the same measure of existence, even in causally separated regions of the universe. But if our region of space-time is somehow more real, it may change the simulation distribution in favor of real worlds.

6. Flux universe

The same copies of me exist in many different real and simulated worlds. In its simple form, this means that the notion “I am in one specific world” is meaningless, but the distribution of different interpretations of the world is reflected in the probabilities of different events.

E.g. the higher the chance that I am in a simulation, the greater the probability that I will experience some kind of miracle during my lifetime. (Many miracles would almost prove that you are in a simulation, like flying in dreams.) But here correlation is not causation.

The stronger version of the same principle implies that I am one observer in many different worlds, and that I could manipulate the probability of finding myself in a given set of possible worlds, basically by forgetting who I am and becoming identical to a larger set of observers. This may work without any new physics; it only requires changing the number of similar observers, and if such observers are Turing computer programs, they could manipulate their own numbers quite easily.

Higher levels of flux theory do require new physics, or at least quantum mechanics in the many-worlds interpretation. In them, different interpretations of the world outside the observer could interact with each other or show some kind of interference.

See further discussion about the flux universe here: http://lesswrong.com/lw/mgd/the_consequences_of_dust_theory/

7. Boltzmann brains outweigh simulations

It may turn out that Boltzmann brains (BBs) outweigh both real worlds and simulations. This may not be a problem from a planning point of view, because most BBs correspond to some real copies of me.

But if we take this approach to solve the BB problem, we will have to apply it to the simulation problem as well, meaning: “I am not in a simulation, because for any simulation there exists a real world with the same ‘me’ in it.” This is counterintuitive.

Simulation and global risks

Simulations may be switched off, or may simulate worlds which are close to a global catastrophe. Such worlds may be of special interest for a future AI, because they help to model the Fermi paradox and they make good settings for games.

Miracles in simulations

The map also has blocks about types of simulation hosts, about multi-level simulations, and about ethics and miracles in simulations.

The main point about simulations is that they disturb the random distribution of observers. In the real world I would find myself in mediocre situations, but simulations focus on special events and miracles (think about movies, dreams and novels). The more interesting my life is, the smaller the chance that it is real.

If we are in a simulation we should expect more global risks, strange events and miracles, so being in a simulation changes our probability expectations for different occurrences.

This map is parallel to the Doomsday argument map.

The estimates given in the map of the numbers of different types of simulations, or of the required flops, are more like placeholders and may be several orders of magnitude higher or lower.

I think that this map is rather preliminary and its main conclusions may be updated many times.

The pdf of the map is here, and the jpg is below.

Previous posts with maps:

Digital Immortality Map

Doomsday Argument Map

AGI Safety Solutions Map

A map: AI failure modes and levels

A Roadmap: How to Survive the End of the Universe

A map: Typology of human extinction risks

Roadmap: Plan of Action to Prevent Human Extinction Risks

Immortality Roadmap