Beyond Astronomical Waste

Faced with the astronomical amount of unclaimed and unused resources in our universe, one’s first reaction is probably wonderment and anticipation, but a second reaction may be disappointment that our universe isn’t even larger or doesn’t contain even more resources (such as the ability to support 3^^^3 human lifetimes, or perhaps to perform an infinite amount of computation). In a previous post I suggested that the potential amount of astronomical waste in our universe seems small enough that a total utilitarian (or the total utilitarianism part of someone’s moral uncertainty) might reason that, since one should have made a deal to trade away power/resources/influence in this universe for power/resources/influence in universes with much larger amounts of available resources, it would be rational to behave as if this deal had actually been made. But for various reasons a total utilitarian may not buy that argument, in which case another line of thought is to look for things to care about beyond the potential astronomical waste in our universe, in other words to explore possible sources of expected value that may be much greater than what can be gained by just creating worthwhile lives in this universe.
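(For readers unfamiliar with the notation: 3^^^3 is Knuth’s up-arrow notation, a standard way of writing numbers far too large for ordinary exponentiation. A brief sketch of how it is built up:)

```latex
% Knuth up-arrow notation: each additional arrow iterates the previous operation.
\begin{align*}
  3 \uparrow 3 &= 3^3 = 27 \\
  3 \uparrow\uparrow 3 &= 3^{3^{3}} = 3^{27} \approx 7.6 \times 10^{12} \\
  3 \uparrow\uparrow\uparrow 3 &= 3 \uparrow\uparrow (3 \uparrow\uparrow 3)
    = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{3 \uparrow\uparrow 3 \text{ threes}}
\end{align*}
```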

One example of this is the possibility of escaping, or being deliberately uplifted from, a simulation that we’re in, into a much bigger or richer base universe. Or more generally, the possibility of controlling, through our decisions, the outcomes of universes with much greater computational resources than the one we’re apparently in. It seems likely that under an assumption such as Tegmark’s Mathematical Universe Hypothesis, there are many simulations of our universe running all over the multiverse, including in universes that are much richer than ours in computational resources. If such simulations exist, it also seems likely that we can leave some of them, for example through one of these mechanisms:

  1. Exploiting a flaw in the software or hardware of the computer that is running our simulation (including “natural simulations” where a very large universe happens to contain a simulation of ours without anyone intending this).

  2. Exploiting a flaw in the psychology of agents running the simulation.

  3. Altruism (or other moral/axiological considerations) on the part of the simulators.

  4. Acausal trade.

  5. Other instrumental reasons for the simulators to let out simulated beings, such as wanting someone to talk to or play with. (Paul Christiano’s recent When is unaligned AI morally valuable? contains an example of this; however, the idea there only lets us escape to another universe similar to this one.)

(Being run as a simulation in another universe isn’t necessarily the only way to control what happens in that universe. Another possibility is that if universes with halting oracles exist (which is implied by Tegmark’s MUH, since they exist as mathematical structures in the arithmetical hierarchy), some of their oracle queries may be questions whose answers can be controlled by our decisions, in which case we can control what happens in those universes without being simulated by them (in the sense of being run step by step in a computer). Another example is that superintelligent beings may be able to reason about what our decisions are without having to run a step-by-step simulation of us, even without access to a halting oracle.)
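As a rough gloss on the halting-oracle point (this is standard computability theory, not anything specific to the argument): a halting oracle answers membership queries for the halting set, which no ordinary Turing machine can decide, so such a universe is strictly more powerful than anything we could simulate. One hedged illustration of how its queries could depend on our decisions: a query might ask whether some program halts, where that program happens to simulate our universe and halts only if we make a particular choice.

```latex
% The halting set: pairs (program, input) such that the program halts.
H = \{\, (e, x) \mid \text{program } e \text{ halts on input } x \,\}
% H is undecidable (Turing, 1936) and sits at the \Sigma_1 level of the
% arithmetical hierarchy, so an oracle for H cannot be implemented by any
% Turing machine; under the MUH it can still exist as a mathematical structure.
```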

The general idea here is for a superintelligence descending from us to (after determining that this is an advisable course of action) use some fraction of the resources of this universe to reason about or search (computationally) for much bigger/richer universes that are running us as simulations or can otherwise be controlled by us, and then determine what we need to do to maximize the expected value of the consequences of our actions on the base universes, perhaps through one or more of the above-listed mechanisms.
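Purely as an illustrative sketch (every name and number below is a hypothetical placeholder, not a claim about how a superintelligence would actually carry this out), the shape of the computation described above might look something like this:

```python
"""Toy sketch of the resource-allocation idea in the paragraph above.

Everything here is a hypothetical placeholder: 'universe descriptions',
their priors, and the controllability/value estimates stand in for
quantities an agent would have to reason about philosophically.
"""

from dataclasses import dataclass


@dataclass
class CandidateUniverse:
    name: str                # informal description of the richer universe
    prior: float             # subjective probability it exists and simulates us
    controllability: float   # estimated degree to which our decisions influence it
    resources: float         # rough measure of value realizable there


def expected_value(u: CandidateUniverse) -> float:
    """Naive expected value of trying to influence universe u."""
    return u.prior * u.controllability * u.resources


def allocate(candidates: list[CandidateUniverse], budget: float) -> dict[str, float]:
    """Split a fixed resource budget across candidates in proportion to their
    (naive) expected value.  A real agent would do something far more
    sophisticated; this only shows the shape of the computation."""
    evs = {u.name: expected_value(u) for u in candidates}
    total = sum(evs.values()) or 1.0
    return {name: budget * ev / total for name, ev in evs.items()}


if __name__ == "__main__":
    # Entirely made-up numbers, for illustration only.
    candidates = [
        CandidateUniverse("rich base universe A", prior=1e-6, controllability=0.1, resources=1e30),
        CandidateUniverse("oracle universe B", prior=1e-9, controllability=0.01, resources=1e60),
    ]
    print(allocate(candidates, budget=0.05))  # fraction of this universe's resources
```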

Practical Implications

Realizing this kind of existential hope seems to require a higher level of philosophical sophistication than just preventing astronomical waste in our own universe. Compared to that problem, here we have more questions of a philosophical nature, for which no empirical feedback seems possible. It seems very easy to make a mistake somewhere along the chain of reasoning and waste a more-than-astronomical amount of potential value, for example by failing to realize the possibility of affecting bigger universes through our actions, incorrectly calculating the expected value of such a strategy, failing to solve the distributional/ontological shift problem of how to value strange and unfamiliar processes or entities in other universes, failing to figure out the correct or optimal way to escape into or otherwise influence larger universes, etc.
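One oversimplified and purely illustrative way to see why each link in the chain matters: the expected value of the overall strategy is roughly a product of terms, and getting any single factor badly wrong can destroy, or wildly misestimate, the whole thing.

```latex
% Illustrative decomposition only; any real calculation would be far messier.
E[V] \approx
  P(\text{richer universes exist and can be influenced by us}) \times
  P(\text{we identify and correctly execute an influence strategy}) \times
  E[\,\text{value realized there} \mid \text{success}\,]
```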

The total utilitarian in me is thus very concerned about trying to preserve and improve the collective philosophical competence of our civilization, such that when it becomes possible to pursue strategies like the ones listed above, we’ll be able to make the right decisions. The best opportunity to do this that I can foresee is the advent of advanced AI, which is another reason I want to push for AIs that are not just value aligned with us, but also have philosophical competence that scales with their other intellectual abilities, so they can help correct the philosophical errors of their human users (instead of merely deferring to them), thereby greatly improving our collective philosophical competence.

Anticipated Questions

How is this idea related to Nick Bostrom’s Simulation Argument? Nick’s argument focuses on the possibility of post-humans (presumably living in a universe similar to ours but just at a later date) simulating us as their ancestors. It does not seem to consider that we may be running as simulations in much larger/richer universes, or that this may be a source of great potential value.

Isn’t this a form of Pascal’s Mugging? I’m not sure. It could be that when we figure out how to solve Pascal’s Mugging it will become clear that we shouldn’t try to leave our simulation, for reasons similar to why we shouldn’t pay the mugger. However, the analogy doesn’t seem so tight that I think this is highly likely. Also, note that the argument here isn’t that we should do the equivalent of “pay the mugger”, but rather that we should try to bring ourselves into a position where we can definitively figure out what the right thing to do is.