Are You Living in a Me-Simulation?

In yesterday's post I tried to explain why I think I am living in a computer simulation, and in particular a first-person simulation, where I would be the only full-time-conscious being.

Unfortunately, I wanted to explain too many things at the same time, without taking the time to give precise arguments for any of them. I ended up spending only one sentence on why I believe I am living in such a simulation, and did not even try to give a precise estimate of the probability of being in one.

Luckily, Ikaxas took the time to understand the premises and gave precise counter-arguments. This forced me to be much more precise, and it updated my beliefs about why I think I am living in a first-person simulation. I am grateful he did so, and I will be re-using arguments from yesterday's discussion.

The Simulation Argument

In what follows, I will assume the reader is familiar with the simulation argument. More precisely, I will be referring to this text: https://www.simulation-argument.com/simulation.html.

Ancestor-simulations are defined in the simulation argument as follows: “A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) [...].”

The simulation argument estimates the probability of ancestor-simulations, not first-person simulations. Therefore, I cannot infer the probability of being in a first-person simulation from the probability of being in case 3) of the simulation argument (almost certainly living in a computer simulation), as I did in yesterday's post.

However, Bostrom mentions the case of what he calls selective simulations (Part VI, paragraph 13), and in particular me-simulations:

In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or “shadow-people” – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience. Even if there are such selective simulations, you should not think that you are in one of them unless you think they are much more numerous than complete simulations. There would have to be about 100 billion times as many “me-simulations” (simulations of the life of only a single mind) as there are ancestor-simulations in order for most simulated persons to be in me-simulations.

In short, me-simulations are simulations containing only one conscious being, and one should only believe that one lives in a me-simulation if one thinks me-simulations are (at least) 100 billion times more numerous than ancestor-simulations.
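To make the bookkeeping behind that factor explicit, here is a minimal sketch (the 10^11 figure is a rough order-of-magnitude count of all human minds in one ancestor-simulation; the function name is mine):

```python
# Rough anthropic bookkeeping behind Bostrom's 100-billion factor.
MINDS_PER_ANCESTOR_SIM = 1e11  # ~order of magnitude of all humans who have ever lived
MINDS_PER_ME_SIM = 1           # a me-simulation contains exactly one conscious mind

def fraction_in_me_sims(n_ancestor_sims, n_me_sims):
    """Fraction of all simulated conscious minds that live in me-simulations."""
    me_minds = n_me_sims * MINDS_PER_ME_SIM
    ancestor_minds = n_ancestor_sims * MINDS_PER_ANCESTOR_SIM
    return me_minds / (me_minds + ancestor_minds)

# With 100 billion me-simulations per ancestor-simulation, a typical
# simulated mind is only 50/50 to be in a me-simulation.
print(fraction_in_me_sims(n_ancestor_sims=1, n_me_sims=1e11))  # 0.5
```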

I briefly introduced the concept of first-person simulations at the beginning of this post. I define a first-person simulation as a particular case of me-simulation where only the perceived environment of the conscious being is rendered (as is the case in a first-person shooter).
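To make “only the perceived environment is rendered” concrete, here is a toy sketch in the spirit of a game engine's culling loop. Everything in it (the names, the distance test, the radius) is a hypothetical stand-in for whatever mechanism a real simulator would use:

```python
import math

AWARENESS_RADIUS = 50.0  # hypothetical extent of the observer's frame of awareness

def step_world(observer, entities):
    """Advance the world one tick, simulating in detail only what is perceived."""
    for e in entities:
        dist = math.hypot(e["x"] - observer["x"], e["y"] - observer["y"])
        if dist < AWARENESS_RADIUS:
            simulate_in_detail(e)  # full physics, behaviour, appearance
        else:
            simulate_cheaply(e)    # coarse bookkeeping so the world stays consistent

def simulate_in_detail(entity):
    entity["detail"] = "high"  # placeholder for the expensive rendering path

def simulate_cheaply(entity):
    entity["detail"] = "low"   # placeholder for the cheap path
```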

I will try to explain why I think the probability of being in a me-simulation is comparable to that of being in an ancestor-simulation.

Incentives and Probabilities

In this part, I will suppose that we are not in case 1) of the simulation argument (where “the human species is very likely to go extinct before reaching a posthuman stage”), and try to understand what would motivate a posthuman civilization to run ancestor-simulations and me-simulations.

Ancestor Simulations

In the next paragraphs, I will assume that the reader is familiar with the Fermi Paradox and the concept of Great Filters. If this is not the case, Kurzgesagt has made introductory videos on both topics.

Now, my claim is that one of the following statements is true:

i) Posthumans will run ancestor-simulations because they do not observe other forms of (conscious) intelligence in the universe (Fermi Paradox), and are therefore trying to understand whether there was indeed a Great Filter behind them.
ii) Posthumans will observe that other forms of conscious intelligence are abundant in the universe, and will have little interest in running ancestor-simulations.

The premise of this claim is that human-like civilizations able to invent computers/internet/spaceships are either extremely rare (Great Filter behind us), or they are not, in which case there exists a Great Filter ahead of us (because of the Fermi Paradox).

So posthumans will only have an interest in running ancestor-simulations if they believe their form of intelligence is extremely rare. In this case, they would want to run a large number of such simulations in order to obtain precise-enough estimates of how likely each event in their past was (e.g. the Cambrian explosion) and determine which events were extremely unlikely.
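Framed this way, the posthumans' project is essentially Monte Carlo estimation over simulated histories. A toy sketch of the idea (the milestones and probabilities are invented for illustration; a real ancestor-simulation would produce such outcomes from its physics rather than hard-code them):

```python
import random

# Hypothetical evolutionary milestones with invented per-step pass probabilities.
STEPS = [
    ("abiogenesis", 0.9),
    ("cambrian_explosion", 0.3),
    ("industrial_civilization", 0.001),
]

def run_one_history():
    """Toy stand-in for one ancestor-simulation: how far does this history get?"""
    reached = []
    for step, p in STEPS:
        if random.random() >= p:
            break
        reached.append(step)
    return reached

def estimate_pass_rates(n_runs=100_000):
    """Conditional pass rate of each step, among histories that arrived at it."""
    arrived = {step: 0 for step, _ in STEPS}
    passed = {step: 0 for step, _ in STEPS}
    for _ in range(n_runs):
        reached = run_one_history()
        for step, _ in STEPS:
            arrived[step] += 1
            if step in reached:
                passed[step] += 1
            else:
                break
    return {step: passed[step] / arrived[step] for step, _ in STEPS if arrived[step]}

# Steps whose estimated pass rate is very low are the candidate Great Filters.
print(estimate_pass_rates())
```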

As a consequence, we would be living in an ancestor-simulation if and only if all of the following statements are true:

1) There are no Great Filters between us and the posthumans running the simulation.
2) There is at least one Great Filter behind us.
3) The simulation has passed this first Great Filter, and is now “playing” the digital-age phase (1990-2018).

Because of the alignment problem, and existential risks in general, I personally find statement 1) extremely unlikely (less than one in a thousand).

Now, assuming statement 2) is true, the probability of being in a simulation that successfully passed at least one Great Filter and is now full of intelligent species capable of exploring space (in SpaceX I believe) is extremely small. In fact, the ratio (number of simulations that pass a Great Filter F) / (number of simulations that arrive at the Great Filter F) is by definition small (smaller than 0.000001 in my intuition of a Great Filter).

Therefore, I believe that the probability of being in a 2018-Earth-like ancestor-simulation consisting of 7 billion humans is extremely small (less than a one-in-a-billion chance).
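In other words, the one-in-a-billion bound is just the product of the two estimates above, treating them as roughly independent: P(no Great Filter ahead of the simulation) * P(the simulation passed the Great Filter behind us) < 10^-3 * 10^-6 = 10^-9.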

Physicalism and Consciousness

Before going into more detail on me-simulations, I need to make sure I was clear enough about how I see ancestor-simulations in contrast with me-simulations, and in particular how they differ in the consciousness of their inhabitants.

Physicalism essentially states that “everything is physical” and that there is nothing magical about consciousness. Hence, if posthumans want to simulate human civilizations with the Big Bang as a starting point, they must implement enough complexity in whatever makes the simulation work that the humans in the simulation are complex enough to (for instance) build spaceships, and from this complexity consciousness will naturally arise.

I will not discuss here the probability of physicalism being true in my model of reality. For this post, I assume physicalism to be true in order to reason conveniently about the consciousness of simulated minds (I don't see how we could discuss consciousness in simulations under dualism, for instance).

However, I will answer one issue Ikaxas raised in yesterday's post:

“[...] even in a first-person simulation, the people you were interacting with would be conscious as long as they were within your frame of awareness (otherwise the simulation couldn't be accurate), it's just that they would blink out of existence once they left your frame of awareness.”

I claim that even though they would be conscious within the “frame of awareness”, they would not be fully conscious. The reason is that if you give them only sporadic consciousness, they greatly lack what constitutes a conscious human's subjective experience: identity and continuity of consciousness.

Me-Simulations

I have close to no personal empirical evidence about the existence of me-simulations. The closest phenomenon to a first-person simulation (a particular case of me-simulation) in my personal experience is Virtual Reality (VR).

To compare the probabilities, I will start by explaining why I am convinced that me-simulations can be made cost-efficient, and then I will discuss the uses I believe a posthuman civilization could have for me-simulations.

Virtual Reality and Complexity

I grew up playing video games where only a tiny fraction of the fictional universe is rendered at each moment. In 2017, I went to a VR conference where speakers explained why 2017 would be a VR winter: plenty of startups would be developing VR software, but there would be neither investment nor public interest. Moreover, rendering 360-degree high-resolution images would be overwhelming given 2017 algorithms and computational power.

In particular, I went to see a startup developing an experience where humans could go to the movies... using VR. You would be virtually sitting on a chair, watching a screen included in the image from your VR headset. Even in this simple environment, the startup founders had to render only the chairs to the left and right of yours, because it would have been too computationally expensive to fully render the whole movie theater.

Efficiency

I claim that me-simulations could be made at least 100 billion times less computationally expensive than full simulations. Here are my reasons for believing so:

1) Even though it would be necessary to generate consciousness to mimic human processes, it would only be necessary for the humans you directly interact with: perhaps 10 hours per day of human consciousness other than yours.
2) The physical volume needed for a me-simulation would be at most the size of your room (about 20 square meters times the height of your room). If you are in a room this is trivially true, and if you are in the outside world I believe you are less aware of the rest of the physical world, so the “complexity of reality” necessary for you to believe the world is real is about the same as if you were in your room. However, Earth's surface is about 500 million square kilometers, which is 2.5 * 10^13 times greater (sanity-checked in the sketch below). Hence, it would be at least 100 billion times less computationally intensive to run a me-simulation, assuming you would want to simulate at least the same height for the ancestor-simulation.
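A quick sanity check of the area ratio used in point 2):

```python
# Sanity check of the 2.5 * 10^13 factor in point 2 above.
room_footprint_m2 = 20             # assumed rendered footprint of a me-simulation
earth_surface_km2 = 500e6          # Earth's surface area, ~500 million km^2
earth_surface_m2 = earth_surface_km2 * 1e6  # 1 km^2 = 10^6 m^2

print(earth_surface_m2 / room_footprint_m2)  # 2.5e+13
```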

Furthermore, you would only need to run one ancestor-simulation to seed an arbitrarily large number of me-simulations: if you knew the environment, and had in memory how the conscious humans from the ancestor-simulation behaved, you could easily run a me-simulation where the other characters simply replay what their counterparts did in the past, and only one person (or a small number of people) is conscious. A bit like in Westworld: in some plots the robots are really convincing, but whenever they are elsewhere (e.g. not in the saloon) their non-consciousness is more apparent.
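Here is a minimal sketch of that record-and-replay idea (all names are hypothetical; the action logs stand in for whatever the ancestor-simulation recorded):

```python
class ReplayedCharacter:
    """Non-conscious character: plays back actions recorded in an ancestor-simulation."""
    def __init__(self, action_log):
        self.action_log = action_log  # one recorded action per tick

    def act(self, tick, world):
        return self.action_log[tick]  # pure playback; no mind is simulated

class ConsciousPlayer:
    """The single fully simulated mind in the me-simulation."""
    def act(self, tick, world):
        return self.deliberate(world)  # the only expensive call in the loop

    def deliberate(self, world):
        return "player_action"  # placeholder for whole-brain-scale computation

def run_me_simulation(player, recorded_logs, n_ticks):
    # One ancestor-simulation recording can seed many me-simulations:
    # swap in a different conscious player, reuse the same cheap playback cast.
    cast = [ReplayedCharacter(log) for log in recorded_logs]
    history = []
    for tick in range(n_ticks):
        actions = [c.act(tick, world=history) for c in cast]
        actions.append(player.act(tick, world=history))
        history.append(actions)
    return history
```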

Purpose

“Agreed, me-simulations might be much more cost-efficient than ancestor-simulations, but why would a posthuman civilization want to run me-simulations in the first place?” you might say.

Here is a list of scenarios that convince me of the usefulness of such simulations:

  • Intergalactic travel: humans traveling at (almost) the speed of light to another galaxy might feel bored and want to relive the lives of famous humans. For instance, “relive the life of Napoleon and conquer most of Europe in less than 10 years” would be a terrific game to play. In this kind of game you would 1) automatically forget everything about your own life (brainwashing), before 2) living the life of Napoleon from scratch (starting as baby Napoleon) until you die, and then 3) getting your memories back and returning to the spaceship.

  • Emulated minds: in general, Virtual Reality, as I described before, could lead to me-simulations. Maybe the ping isn't that great and it is much more convenient to play the Napoleon game by yourself (as the only conscious being). Or maybe you are sentenced to live in a virtual world all by yourself. See for instance how minds are transferred into toasters in the “White Christmas” episode of Black Mirror, or even into a stuffed monkey in the “Black Museum” episode.

Wrapping it up

  • In the simulation argument, Bostrom defines me-simulations, which contain only one conscious being.

  • I defined a particular case of me-simulation (the first-person simulation) where only the perceived environment is rendered.

  • I explained why I believe ancestor-simulations are unlikely.

  • I argued that in a first-person simulation (which is for me the most natural way of running me-simulations), only you would be full-time-conscious, the others being partially conscious without real identity/memories.

  • I gave arguments in favor of the cost-effectiveness of me-simulations compared to ancestor-simulations.

  • I gave some intuitive examples of uses of me-simulations for posthuman civilizations.
