Beyond Astronomical Waste

Faced with the astronomical amount of unclaimed and unused resources in our universe, one’s first reaction is probably wonderment and anticipation, but a second reaction may be disappointment that our universe isn’t even larger or doesn’t contain even more resources (such as the ability to support 3^^^3 human lifetimes or perhaps to perform an infinite amount of computation). In a previous post I suggested that the potential amount of astronomical waste in our universe seems small enough that a total utilitarian (or the total utilitarianism part of someone’s moral uncertainty) might reason that since one should have made a deal to trade away power/resources/influence in this universe for power/resources/influence in universes with much larger amounts of available resources, it would be rational to behave as if this deal was actually made. But for various reasons a total utilitarian may not buy that argument, in which case another line of thought is to look for things to care about beyond the potential astronomical waste in our universe; in other words, to explore possible sources of expected value that may be much greater than what can be gained by just creating worthwhile lives in this universe.

One example of this is the possibility of escaping, or being deliberately uplifted from, a simulation that we’re in, into a much bigger or richer base universe. Or more generally, the possibility of controlling, through our decisions, the outcomes of universes with much greater computational resources than the one we’re apparently in. It seems likely that under an assumption such as Tegmark’s Mathematical Universe Hypothesis, there are many simulations of our universe running all over the multiverse, including in universes that are much richer than ours in computational resources. If such simulations exist, it also seems likely that we can leave some of them, for example through one of these mechanisms:

  1. Exploiting a flaw in the software or hardware of the computer that is running our simulation (including “natural simulations” where a very large universe happens to contain a simulation of ours without anyone intending this).

  2. Exploiting a flaw in the psychology of agents running the simulation.

  3. Altruism (or other moral/axiological considerations) on the part of the simulators.

  4. Acausal trade.

  5. Other instrumental reasons for the simulators to let out simulated beings, such as wanting someone to talk to or play with. (Paul Christiano’s recent When is unaligned AI morally valuable? contains an example of this; however, the idea there only lets us escape to another universe similar to this one.)

(Being run as a simulation in another universe isn’t necessarily the only way to control what happens in that universe. One possibility is that, if universes with halting oracles exist (which is implied by Tegmark’s MUH, since they exist as mathematical structures in the arithmetical hierarchy), some of their oracle queries may be questions whose answers can be controlled by our decisions, in which case we can control what happens in those universes without being simulated by them (in the sense of being run step by step in a computer). Another example is that superintelligent beings may be able to reason about what our decisions are without having to run a step-by-step simulation of us, even without access to a halting oracle.)

The general idea here is for a superintelligence descending from us to (after determining that this is an advisable course of action) use some fraction of the resources of this universe to reason about or search (computationally) for much bigger/richer universes that are running us as simulations or can otherwise be controlled by us, and then determine what we need to do to maximize the expected value of the consequences of our actions on the base universes, perhaps through one or more of the mechanisms listed above.
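To make the shape of that calculation a bit more concrete, here is a minimal toy sketch in Python of the kind of expected-value comparison involved. Everything in it (the function name, the placeholder probability of successfully influencing a base universe, and the payoff multiplier) is invented purely for illustration and is not part of the original argument; the only point is that even a tiny chance of influencing a vastly richer base universe can dominate the value of spending all resources locally.

```python
# Toy expected-value comparison for allocating resources toward influencing
# larger "base" universes vs. using them all locally. All numbers are
# hypothetical placeholders, chosen only to illustrate the structure of the
# reasoning, not estimates of the real quantities.

def expected_value(fraction_spent_on_escape: float,
                   p_influence: float,
                   base_universe_multiplier: float) -> float:
    """Expected value (in arbitrary 'whole local universe' units) of spending
    `fraction_spent_on_escape` of our resources trying to influence a richer
    base universe, where success has probability `p_influence` and yields
    value `base_universe_multiplier` times what our universe could produce."""
    local_value = 1.0 - fraction_spent_on_escape  # value from resources used here
    escape_value = (fraction_spent_on_escape * p_influence
                    * base_universe_multiplier)   # value routed through the base universe
    return local_value + escape_value

# With a tiny success probability but a vastly richer base universe, even a
# small resource fraction can dominate the expected value.
baseline = expected_value(0.0, 0.0, 0.0)        # 1.0: use everything locally
speculative = expected_value(0.01, 1e-9, 1e15)  # 0.99 local + 10,000 via the base universe
print(baseline, speculative)
```

Note that this structure also previews the worry raised under “Anticipated Questions” below: the conclusion is driven almost entirely by the product of a very small probability and a very large payoff, which is exactly the shape of reasoning that Pascal’s Mugging calls into question.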

Practical Implications

Realizing this kind of existential hope seems to require a higher level of philosophical sophistication than just preventing astronomical waste in our own universe. Compared to that problem, here we have more questions of a philosophical nature, for which no empirical feedback seems possible. It seems very easy to make a mistake somewhere along the chain of reasoning and waste a more-than-astronomical amount of potential value, for example by failing to realize the possibility of affecting bigger universes through our actions, incorrectly calculating the expected value of such a strategy, failing to solve the distributional/ontological shift problem of how to value strange and unfamiliar processes or entities in other universes, failing to figure out the correct or optimal way to escape into or otherwise influence larger universes, etc.

The total utilitarian in me is thus very concerned about trying to preserve and improve the collective philosophical competence of our civilization, so that when it becomes possible to pursue strategies like the ones listed above, we’ll be able to make the right decisions. The best opportunity to do this that I can foresee is the advent of advanced AI, which is another reason I want to push for AIs that are not just value-aligned with us, but also have philosophical competence that scales with their other intellectual abilities, so they can help correct the philosophical errors of their human users (instead of merely deferring to them), thereby greatly improving our collective philosophical competence.

Anticipated Questions

How is this idea related to Nick Bostrom’s Simulation Argument? Nick’s argument focuses on the possibility of post-humans (presumably living in a universe similar to ours but just at a later date) simulating us as their ancestors. It does not seem to consider that we may be running as simulations in much larger/richer universes, or that this may be a source of great potential value.

Isn’t this a form of Pascal’s Mugging? I’m not sure. It could be that when we figure out how to solve Pascal’s Mugging, it will become clear that we shouldn’t try to leave our simulation, for reasons similar to why we shouldn’t pay the mugger. However, the analogy doesn’t seem so tight that I think this is highly likely. Also, note that the argument here isn’t that we should do the equivalent of “pay the mugger”, but rather that we should try to bring ourselves into a position where we can definitively figure out what the right thing to do is.