# Beyond Astronomical Waste

Faced with the astronomical amount of unclaimed and unused resources in our universe, one’s first reaction is probably wonderment and anticipation, but a second reaction may be disappointment that our universe isn’t even larger and doesn’t contain even more resources (such as the ability to support 3^^^3 human lifetimes or perhaps to perform an infinite amount of computation). In a previous post I suggested that the potential amount of astronomical waste in our universe seems small enough that a total utilitarian (or the total utilitarianism part of someone’s moral uncertainty) might reason as follows: since one should have made a deal to trade away power/resources/influence in this universe for power/resources/influence in universes with much larger amounts of available resources, it would be rational to behave as if this deal was actually made. But for various reasons a total utilitarian may not buy that argument, in which case another line of thought is to look for things to care about beyond the potential astronomical waste in our universe; in other words, to explore possible sources of expected value that may be much greater than what can be gained by just creating worthwhile lives in this universe.

One example of this is the possibility of escaping, or being deliberately uplifted from, a simulation that we’re in, into a much bigger or richer base universe. Or more generally, the possibility of controlling, through our decisions, the outcomes of universes with much greater computational resources than the one we’re apparently in. It seems likely that under an assumption such as Tegmark’s Mathematical Universe Hypothesis, there are many simulations of our universe running all over the multiverse, including in universes that are much richer than ours in computational resources. If such simulations exist, it also seems likely that we can leave some of them, for example through one of these mechanisms:

1. Exploiting a flaw in the software or hardware of the computer that is running our simulation (including “natural simulations” where a very large universe happens to contain a simulation of ours without anyone intending this).

2. Exploiting a flaw in the psychology of agents running the simulation.

3. Altruism (or other moral/axiological considerations) on the part of the simulators.

4. Other instrumental reasons for the simulators to let out simulated beings, such as wanting someone to talk to or play with. (Paul Christiano’s recent When is unaligned AI morally valuable? contains an example of this; however, the idea there only lets us escape to another universe similar to this one.)

(Being run as a simulation in another universe isn’t necessarily the only way to control what happens in that universe. One other possibility: if universes with halting oracles exist (which is implied by Tegmark’s MUH, since they exist as mathematical structures in the arithmetical hierarchy), some of their oracle queries may be questions whose answers can be controlled by our decisions, in which case we can control what happens in those universes without being simulated by them (in the sense of being run step by step in a computer). Another example is that superintelligent beings may be able to reason about what our decisions are without having to run a step-by-step simulation of us, even without access to a halting oracle.)

The general idea here is for a superintelligence descending from us to (after determining that this is an advisable course of action) use some fraction of the resources of this universe to reason about or search (computationally) for much bigger/richer universes that are running us as simulations or can otherwise be controlled by us, and then determine what we need to do to maximize the expected value of the consequences of our actions on the base universes, perhaps through one or more of the above-listed mechanisms.

### Practical Implications

Realizing this kind of existential hope seems to require a higher level of philosophical sophistication than just preventing astronomical waste in our own universe. Compared to that problem, here we have more questions of a philosophical nature, for which no empirical feedback seems possible. It seems very easy to make a mistake somewhere along the chain of reasoning and waste a more-than-astronomical amount of potential value, for example by failing to realize the possibility of affecting bigger universes through our actions, incorrectly calculating the expected value of such a strategy, failing to solve the distributional/ontological shift problem of how to value strange and unfamiliar processes or entities in other universes, failing to figure out the correct or optimal way to escape into or otherwise influence larger universes, etc.

The total utilitarian in me is thus very concerned about trying to preserve and improve the collective philosophical competence of our civilization, such that when it becomes possible to pursue strategies like ones listed above, we’ll be able to make the right decisions. The best opportunity to do this that I can foresee is the advent of advanced AI, which is another reason I want to push for AIs that are not just value aligned with us, but also have philosophical competence that scales with their other intellectual abilities, so they can help correct the philosophical errors of their human users (instead of merely deferring to them), thereby greatly improving our collective philosophical competence.

### Anticipated Questions

How is this idea related to Nick Bostrom’s Simulation Argument? Nick’s argument focuses on the possibility of post-humans (presumably living in a universe similar to ours but just at a later date) simulating us as their ancestors. It does not seem to consider that we may be running as simulations in much larger/richer universes, or that this may be a source of great potential value.

Isn’t this a form of Pascal’s Mugging? I’m not sure. It could be that when we figure out how to solve Pascal’s Mugging it will become clear that we shouldn’t try to leave our simulation, for reasons similar to why we shouldn’t pay the mugger. However, the analogy doesn’t seem so tight that I think this is highly likely. Also, note that the argument here isn’t that we should do the equivalent of “pay the mugger” but rather that we should try to bring ourselves into a position where we can definitively figure out what the right thing to do is.

• This is a post that’s stayed with me since it was published. The title is especially helpful as a handle. It is a simple reference for this idea: that there are deeply confusing philosophical problems that are central to our ability to attain most of the value we care about (and that this might be a central concern when thinking about AI).

It’s not been very close to areas I think about a lot, so I’ve not tried to build on it much, and would be interested in a review from someone who thinks about these matters in more detail, but I expect they’ll agree it’s a very helpful post to exist.

• I think that at the time this post came out, I didn’t have the mental scaffolding necessary to really engage with it – I thought of this question as maybe important, but sort of “above my paygrade”, something better left to other people who would have the resources to engage more seriously with it.

But, over the past couple years, the concepts here have formed an important component of my understanding of robust agency. Much of this came from private in-person conversations, but this post is the best writeup of the concept I’m currently aware of.

One thing I like about this post is the focus on philosophical competence. Previously, I’d thought of this question as dangerous to think about, because you might make philosophical mistakes that doomed you or your universe for (in retrospect) silly reasons.

My current model is more like “no, you-with-your-21st-century-human-brain shouldn’t actually attempt to take actions aiming primarily to affect other universes on the macro scale. Negotiating with other universes is something you do when you’re a literal galaxy brain that is quite confident in its philosophy.”

But, meanwhile, it seems that:

(note: low-to-mid confidence, still working through these problems myself, and I am very much still philosophically confused about at least some of this)

– becoming philosophically competent, as a species, may be one of the most important goals facing humanity, and how this project interfaces with AI development may be crucially important. This may be relevant to people (like me) who aren’t directly working on alignment but trying to have a good model of the strategic landscape.

– a concept not from this particular post, but relevant, is the notion that the question is not “are you in a simulation or not?”; it’s more like “to what degree are you in simulations? Which distribution of agents are you making choices on behalf of?”. And this has some implications about how you should make choices that you should worry about now, before you are a literal galaxy brain. (many of which are more mundane)

– I think there may be a connection between “Beyond Astronomical Waste” and “Robustness to scale.” You can’t be a galaxy brain now, but you can be the sort of person who would be demonstrably safe to scale up, in a way that simulations can detect, which might let you punch above your current weight, in terms of philosophical competence.

• Being run as a simulation in another universe isn’t necessarily the only way to control what happens in that universe.

Had you seen Multiverse-wide cooperation via correlated decision-making, btw? (somewhat similar to acausal trade, but it differs in that it’s based on the agents being similar to each other rather than modeling each other)

• If there’s some kind of measure of “observer weight” over the whole mathematical universe, we might be already much larger than 1/3^^^3 of it, so the total utilitarian can only gain so much. And even if there’s no measure, I’m not sure my total utilitarianism would scale linearly to such numbers. But I’m very confused about all this.

We can divide our credence in total utilitarianism into “bounded total utilitarianism” (including measure-based) and “unbounded total utilitarianism”. Conditional on bounded total utilitarianism, I don’t see a reason to think that the potential value gained from controlling larger/richer universes couldn’t be at least several orders of magnitude larger than from what happens in this universe. (Maybe this is true for some forms of bounded total utilitarianism with particularly low bounds, but it shouldn’t be true for all of them.) Conditional on unbounded total utilitarianism, things are even more confusing, as it’s not clear how unbounded total utilitarianism can formally work, but informally it seems that if unbounded total utilitarianism can work, it would very likely say that trying to control larger/richer universes is the right thing to do.

Overall it seems like a fairly safe conclusion that the part of you that is attracted by the idea of preventing astronomical waste (or a large fraction of that part of you) probably shouldn’t stop at just preventing astronomical waste in this universe.

• Yeah, I agree the gain can be orders of magnitude larger than this universe. Only objecting to the use of 3^^^3 as a metaphor, because I’m not sure we can care that strongly.

My instinct says we can’t care about anything much bigger than an exponential. That’s also useful for preventing Pascal’s muggings, because I can repeatedly flip a coin and ask the mugger to influence the outcome, thus reducing their credibility exponentially with time. But maybe that’s too convenient.
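The coin-flip test above can be sketched as a toy Bayesian update (all numbers hypothetical, and this is only one way to formalize the comment’s idea, not something from the thread):

```python
# Toy Bayesian sketch of the coin-flip test above (hypothetical model):
# treat a "genuine" mugger as one who can call every fair coin flip, and a
# powerless one as guessing each flip with probability 1/2. Each verified
# call doubles the odds on the claim; a single miss falsifies it, so a
# fraud's expected credibility falls exponentially with the number of flips.

def updated_odds(prior_odds, flips, calls):
    """Posterior odds that the mugger genuinely controls the coin."""
    hits = sum(f == c for f, c in zip(flips, calls))
    if hits < len(flips):
        return 0.0                        # one miss rules out perfect control
    return prior_odds * 2 ** len(flips)   # all correct: likelihood ratio 2^n

flips = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]
perfect = list(flips)                      # a genuine controller
one_miss = flips[:-1] + [1 - flips[-1]]    # a fraud who slips on the last flip

odds_fraud = updated_odds(1e-6, flips, one_miss)   # collapses to 0.0
odds_real = updated_odds(1e-6, flips, perfect)     # 1e-6 * 2**10
```

A fraud with no real influence almost surely misses within a few flips, so the test is cheap for us and exponentially expensive for the mugger’s credibility.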

• I don’t understand why you think that the expectation should be orders of magnitude larger for other universes. The model “like utilitarianism, but with an upper bound on # of people” seems kind of wacky; maybe it gets a seat in the moral parliament, but I don’t think it’s the dominant force for caring about astronomical waste. For non-counting-measure utilitarianism, I also don’t see why the models concerned about astronomical waste would assign larger universes an overwhelming share of our caring-measure.

It also feels to me like you are 2-enveloping wrong if you end up with a 100x ratio here. (I.e., if you have 10% probability on a model where the two are equal, I don’t think you should end up with 100x.)

Overall it seems like a fairly safe conclusion that the part of you that is attracted by the idea of preventing astronomical waste (or a large fraction of that part of you) probably shouldn’t stop at just preventing astronomical waste in this universe.

If you put 50% on a theory that cares overwhelmingly about infinite universes and 50% on a theory that cares about all universes, the thing to do is probably still to prevent astronomical waste in this universe, so that we can later engage in trade or spend the resources exploring whatever angles of attack seem useful. Maybe this is the kind of thing you have in mind, but it’s a notable special case because it seems to recommend the same short-term behavior.

trying to preserve and improve the collective philosophical competence of our civilization, such that when it becomes possible to pursue strategies like ones listed above, we’ll be able to make the right decisions.

I agree that if we don’t eventually reach philosophical maturity (or end up on an approximately optimal philosophical trajectory) then we won’t capture most of the value in the universe. It seems like that conclusion doesn’t really depend on infinite universes though (e.g. a utilitarian might be similarly concerned about discovering how to optimally organize matter), unless you think this is the main way our preferences might not be easily satiable.

The best opportunity to do this that I can foresee is the advent of advanced AI, which is another reason I want to push for AIs that are not just value aligned with us, but also have philosophical competence that scales with their other intellectual abilities, so they can help correct the philosophical errors of their human users (instead of merely deferring to them), thereby greatly improving our collective philosophical competence.

This doesn’t seem related to recent discussions about philosophical competence and AI, since it is about what we want AI to do eventually rather than what you want to do in the 21st century (I’m not sure if it was supposed to be related).

• For non-counting-measure utilitarianism, I also don’t see why the models concerned about astronomical waste would assign larger universes an overwhelming share of our caring-measure.

I guess with measure-based utilitarianism, it’s more about the density of potentially valuable things within the universe than size. If our universe only supports 10^120 available operations, most of it (>99%) is going to be devoid of value under many ethically plausible ways of distributing caring-measure over the space-time regions within a universe.

I agree that if we don’t eventually reach philosophical maturity (or end up on an approximately optimal philosophical trajectory) then we won’t capture most of the value in the universe. It seems like that conclusion doesn’t really depend on infinite universes though (e.g. a utilitarian might be similarly concerned about discovering how to optimally organize matter),

Some people seem to think there’s a good chance that our current level of philosophical understanding is enough to capture most of the value in this universe. (For example, if we implement a universe-wide simulation designed according to Eliezer’s Fun Theory, or if we just wipe out all suffering.) Others may think that we don’t currently have enough understanding to do that, but we can reach that level of understanding “by default”. My argument here is that both of these seem less likely if the goal is instead to capture value from larger/richer universes, and that gives more impetus to trying to improve our philosophical competence.

unless you think this is the main way our preferences might not be easily satiable.

Not sure what you mean by this.

This doesn’t seem related to recent discussions about philosophical competence and AI, since it is about what we want AI to do eventually rather than what you want to do in the 21st century (I’m not sure if it was supposed to be related).

They’re not supposed to be related except insofar as they’re both arguments for wanting AI to be able to help humans correct their philosophical mistakes instead of just deferring to humans.

• I guess with measure-based utilitarianism, it’s more about the density of potentially valuable things within the universe than size. If our universe only supports 10^120 available operations, most of it (>99%) is going to be devoid of value under many ethically plausible ways of distributing caring-measure over the space-time regions within a universe.

I agree, but if you have a broad distribution over mixtures then you’ll be including many that don’t use literal locations, and those will dominate for “sparse” universes.

I can see easily how you’d get a modest factor favoring other universes over astronomical waste in this universe, but as your measure/uncertainty gets broader (or you have a broader distribution over trading partners) the ratio seems to shrink towards 1, and I don’t feel like “orders of magnitude” is that plausible.

Some people seem to think there’s a good chance that our current level of philosophical understanding is enough to capture most of the value in this universe. (For example, if we implement a universe-wide simulation designed according to Eliezer’s Fun Theory, or if we just wipe out all suffering.) Others may think that we don’t currently have enough understanding to do that, but we can reach that level of understanding “by default”. My argument here is that both of these seem less likely if the goal is instead to capture value from larger/richer universes, and that gives more impetus to trying to improve our philosophical competence.

I agree this is a further argument for needing more philosophical competence. I personally feel like that position is already pretty solid, but I acknowledge that it’s not a universal position even amongst EAs.

They’re not supposed to be related except insofar as they’re both arguments for wanting AI to be able to help humans correct their philosophical mistakes instead of just deferring to humans.

“Defer to humans” could mean many different things. This is an argument against AI forever deferring to humans in their current form / with their current knowledge. When I talk about “defer to humans” I’m usually talking about an AI deferring to humans who are explicitly allowed to deliberate/learn/self-modify if that’s what they choose to do (or, perhaps more importantly, to construct a new AI with greater philosophical competence and put it in charge).

I understand that some people might advocate for a stronger form of “defer to humans” and it’s fine to respond to them, but I wanted to make sure there wasn’t a misunderstanding. (Also, I don’t feel there are very many advocates for the stronger form; I think the bulk of the AI community imagines our AI deferring to us but us being free to design better AIs later.)

• I agree, but if you have a broad distribution over mixtures then you’ll be including many that don’t use literal locations, and those will dominate for “sparse” universes.

I currently think that each way of distributing caring-measure over a universe should be a separate member of the moral parliament, given a weight equal to its ethical plausibility, instead of having just one member with some sort of universal distribution. So there ought to be a substantial coalition in one’s moral parliament that thinks controlling bigger/richer universes is potentially orders of magnitude more valuable.

Another intuition pump here is to consider a thought experiment where you think there’s a 50/50 chance that our universe supports either 10^120 operations or 10^(10^120) operations (and controlling other universes isn’t possible). Isn’t there some large coalition of total utilitarians in your moral parliament who would be at least 100x happier to find out that the universe supports 10^(10^120) operations (and be willing to bet/trade accordingly)?
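The intuition pump above can be made concrete with a toy calculation (the cap and all other numbers are hypothetical illustrations, not anything claimed in the thread). Since 10^(10^120) is far beyond floating point, values are handled as log10:

```python
# Toy model of the bet above (hypothetical numbers). A total-utilitarian
# parliament member values a universe roughly linearly in the operations it
# supports; "bounded" members cap their valuation. 10^(10^120) overflows any
# float, so every value is represented as log10(value).

def member_log10_value(log10_ops, log10_cap=None):
    """log10 of the value a member assigns to a universe with 10**log10_ops ops."""
    return log10_ops if log10_cap is None else min(log10_ops, log10_cap)

log10_small = 120.0    # 10^120 operations
log10_big = 1e120      # 10^(10^120) operations: its log10 is itself 10^120

# An unbounded member prefers the big outcome by vastly more than 100x
# (a 100x ratio is just a gap of 2 in log10 terms).
unbounded_gap = member_log10_value(log10_big) - member_log10_value(log10_small)

# Even a bounded member with a (hypothetical) cap at 10^125 prefers the big
# outcome by a factor of 10^5, still far beyond 100x.
bounded_gap = (member_log10_value(log10_big, log10_cap=125.0)
               - member_log10_value(log10_small, log10_cap=125.0))
```

The point of the sketch is only that both unbounded members and some bounded members clear the 100x threshold; bounded members with very low caps would not.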

When I talk about “defer to humans” I’m usually talking about an AI deferring to humans who are explicitly allowed to deliberate/learn/self-modify if that’s what they choose to do (or, perhaps more importantly, to construct a new AI with greater philosophical competence and put it in charge).

Yeah, I didn’t make this clear, but my worry here is that most humans won’t choose to “deliberate/learn/self-modify” in a way that leads to philosophical maturity (or construct a new AI with greater philosophical competence and put it in charge), if you initially give them an AI that has great intellectual abilities in most areas but defers to humans on philosophical matters. One possibility is that because humans don’t have value functions that are robust against distributional shifts, they’ll (with the help of their AIs) end up doing an adversarial attack against their own value functions and not be able to recover from that. If they somehow avoid that, they may still get stuck at some level of philosophical competence that is less than what’s needed to capture value from bigger/richer universes, and never feel a need to put a new philosophically competent AI in charge. It seems to me that the best way to avoid both of these outcomes (as well as possible near-term moral catastrophes such as creating a lot of suffering that can’t be balanced out later) is to make sure that the first advanced AIs are highly or scalably competent in philosophy. (I understand you probably disagree with “getting stuck” even with regard to capturing value from bigger/richer universes, you’re not very concerned about near-term moral catastrophes, and I’m not sure what your thinking on the unrecoverable self-attack thing is.)

• Another intuition pump here is to consider a thought experiment where you think there’s a 50/50 chance that our universe supports either 10^120 operations or 10^(10^120) operations (and controlling other universes isn’t possible). Isn’t there some large coalition of total utilitarians in your moral parliament who would be at least 100x happier to find out that the universe supports 10^(10^120) operations (and be willing to bet/trade accordingly)?

I totally agree that there are members of the parliament who would assign much higher value to other universes than to our universe.

I’m saying that there is also a significant contingent that cares about our universe, so the people who care about other universes aren’t going to dominate.

(And on top of that, all of the contingents are roughly just trying to maximize the “market value” of what we get, so for the most part we need to reason about an even more spread-out distribution.)

Yeah, I didn’t make this clear, but my worry here is that most humans won’t choose to “deliberate/learn/self-modify” in a way that leads to philosophical maturity (or construct a new AI with greater philosophical competence and put it in charge), if you initially give them an AI that has great intellectual abilities in most areas but defers to humans on philosophical matters.

There are tons of ways you could get people to do something they won’t choose to do. I don’t know if “give them an AI that doesn’t defer to them about philosophy” is more natural than e.g. “give them an AI that doesn’t defer to them about how they should deliberate/learn/self-modify.”

• I’m saying that there is also a significant contingent that cares about our universe, so the people who care about other universes aren’t going to dominate.

I don’t think I’m getting your point here. Personally it seems safe to say that >80% of the contingent of my moral parliament that cares about astronomical waste would say that if our universe was capable of 10^(10^120) operations it would be at least 100x as valuable as if it was capable of only 10^120 operations. Are your numbers different from this? In any case, what implications are you suggesting based on “no domination”?

(And on top of that, all of the contingents are roughly just trying to maximize the “market value” of what we get, so for the most part we need to reason about an even more spread-out distribution.)

I don’t understand this part at all. Please elaborate?

There are tons of ways you could get people to do something they won’t choose to do.

I did preface my conclusion with “The best opportunity to do this that I can foresee”, so if you have other ideas about what someone like me ought to do, I’d certainly welcome them.

I don’t know if “give them an AI that doesn’t defer to them about philosophy” is more natural than e.g. “give them an AI that doesn’t defer to them about how they should deliberate/learn/self-modify.”

Isn’t “how they should deliberate/learn/self-modify” itself a difficult philosophical problem (in the field of meta-philosophy)? If it’s somehow easier or safer to “give them an AI that doesn’t defer to them about how they should deliberate/learn/self-modify” than to “give them an AI that doesn’t defer to them about philosophy” then I’m all for that, but it doesn’t seem like a very different idea from mine.

• I don’t think I’m getting your point here. Personally it seems safe to say that >80% of the contingent of my moral parliament that cares about astronomical waste would say that if our universe was capable of 10^(10^120) operations it would be at least 100x as valuable as if it was capable of only 10^120 operations. Are your numbers different from this? In any case, what implications are you suggesting based on “no domination”?

I might have given 50% or 60% instead of >80%.

I don’t understand how you would get significant conclusions out of this without big multipliers. Yes, there are some participants in your parliament who care more about worlds other than this one. Those worlds appear to be significantly harder to influence (by means other than trade), so this doesn’t seem to have a huge effect on what you ought to do in this world. (Assuming that we are able to make trades that we obviously would have wanted to make behind the veil of ignorance.)

In particular, if your ratio between the value of big and small universes was only 5x, then that would only have a 5x multiplier on the value of the interventions you list in the OP. Given that many of them look very tiny, I assumed you were imagining a much larger multiplier. (Something that looks very tiny may end up being a huge deal, but once we are already wrong by many orders of magnitude it doesn’t feel like the last 5x has a huge impact.)

I don’t understand this part at all. Please elaborate?

We will have control over astronomical resources in our universe. We can then acausally trade that away for influence over the kinds of universes we care about influencing. At equilibrium, ignoring market failures and friction, how much you value getting control over astronomical resources doesn’t depend on which kinds of astronomical resources you in particular terminally value. Everyone instrumentally uses the same utility function, given by the market-clearing prices of different kinds of astronomical resources. In particular, the optimal ratio between (say) hedonism and taking-over-the-universe depends on the market price of the universe you live in, not on how much you in particular value the universe you live in. This is exactly analogous to saying: the optimal tradeoff between work and leisure depends only on the market price of the output of your work (ignoring friction and market failures), not on how much you in particular value the output of your work.

So the upshot is that instead of using your moral parliament to set prices, you want to be using a broader distribution over all of the people who control astronomical resources (weighted by the market prices of their resources). Our preferences are still evidence about what others want, but this just tends to make the distribution more spread out (and therefore cuts against e.g. caring much less about colonizing small universes).
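The separation argument above can be illustrated with a toy two-good market (the prices, plan names, and quantities are all hypothetical, chosen only to make the structure visible):

```python
# Toy illustration of the market argument above (hypothetical prices and
# plans). With frictionless trade at fixed market-clearing prices, the
# production decision that maximizes value is the same for every agent,
# regardless of what that agent terminally values; terminal values only
# affect what the proceeds are later traded for.

prices = {"this_universe": 1.0, "bigger_universes": 50.0}

# Two ways of spending our universe's endowment (units of influence produced):
plans = {
    "optimize_locally": {"this_universe": 10.0, "bigger_universes": 0.0},
    "influence_bigger": {"this_universe": 0.0,  "bigger_universes": 1.0},
}

def market_value(plan):
    return sum(prices[good] * qty for good, qty in plan.items())

# Every trader instrumentally maximizes the same market-value function,
# then trades the proceeds for what it terminally values, so two agents
# with opposite terminal values still pick the same production plan.
best_plan = max(plans, key=lambda name: market_value(plans[name]))
```

This is the same structure as the work/leisure analogy: which plan is best depends only on the prices, not on the chooser’s own preferences over the goods.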

Isn’t “how they should deliberate/learn/self-modify” itself a difficult philosophical problem (in the field of meta-philosophy)? If it’s somehow easier or safer to “give them an AI that doesn’t defer to them about how they should deliberate/learn/self-modify” than to “give them an AI that doesn’t defer to them about philosophy” then I’m all for that, but it doesn’t seem like a very different idea from mine.

I still don’t really get your position, and especially why you think:

It seems to me that the best way to avoid both of these outcomes [...] is to make sure that the first advanced AIs are highly or scalably competent in philosophy.

I do understand why you think it’s an important way to avoid philosophical errors in the short term; in that case I just don’t see why you think that such problems are important relative to other factors that affect the quality of the future.

This seems to come up a lot in our discussions. It would be useful if you could make a clear statement of why you think this problem (which I understand as: “ensure early AI is highly philosophically competent” or perhaps “differential philosophical progress,” setting aside the application of philosophical competence to what-I’m-calling-alignment) is important, ideally with some kind of quantitative picture of how important you think it is. If you expect to write that up at some point then I’ll just pause until then.

• I don’t understand how you would get significant conclusions out of this without big multipliers. Yes, there are some participants in your parliament who care more about worlds other than this one. Those worlds appear to be significantly harder to influence (by means other than trade), so this doesn’t seem to have a huge effect on what you ought to do in this world. (Assuming that we are able to make trades that we obviously would have wanted to make behind the veil of ignorance.)

Wait, you are as­sum­ing a baseline/​de­fault out­come where acausal trade takes place, and com­par­ing other in­ter­ven­tions to that? My baseline for com­par­i­son is in­stead (as stated in the OP) “what can be gained by just cre­at­ing worth­while lives in this uni­verse”. My rea­sons for this are (1) I (and likely oth­ers who might read this) don’t think acausal trade is much more likely to work than the other items on my list and (2) the main in­tended au­di­ence for this post is peo­ple who have re­al­ized the im­por­tance of in­fluenc­ing the far fu­ture but not aware of (or have se­ri­ously con­sid­ered) the pos­si­bil­ity of in­fluenc­ing other uni­verses through things like acausal trade and other items on my list. Even the most so­phis­ti­cated thinkers in EA seem to fall into this cat­e­gory, e.g., peo­ple like Will MacAskill, Toby Ord, and Nick Beck­stead, un­less they’ve pri­vately con­sid­ered the pos­si­bil­ity and chose not to talk about it in pub­lic, in which case it still seems safe to as­sume that most peo­ple in EA think “cre­at­ing worth­while lives in this uni­verse” is the most good that can be ac­com­plished.

In par­tic­u­lar, if your ra­tio be­tween the value of big and small uni­verses was only 5x, then that would only have a 5x mul­ti­plier on the value of the in­ter­ven­tions you list in the OP. Given that many of them look very tiny, I as­sumed you were imag­in­ing a much larger mul­ti­plier. (Some­thing that looks very tiny may end up be­ing a huge deal, but once we are already wrong by many or­ders of mag­ni­tude it doesn’t feel like the last 5x has a huge im­pact.)

I don’t un­der­stand where “5x” comes from or why that’s the rele­vant mul­ti­plier in­stead of 100x.

It would be useful if you could make a clear statement of why you think this problem is important

I’ll think about this, but I think I’d be more motivated to attempt this (and maybe also have a better idea of what I need to do) if other people also spoke up and told me that they couldn’t understand my past attempts to explain this (including what I wrote in the OP and previous comments in this thread).

• If there’s some kind of measure of “observer weight” over the whole mathematical universe, we might already be much larger than 1/3^^^3 of it, so the total utilitarian can only gain so much.

Could you provide some intuition for this? Naively, I’d expect our “observer measure” over the space of mathematical structures to be 0.

• I curated this post because it crystallised an important point regarding optimising the long-term future, that I’ve not seen anyone write down succinctly before (with reference to the relevant technical concepts, while still being short and readable).

• Thanks. I agree with your overall conclusions.

On the specifics, Bostrom’s simulation argument is more than just a parallel here: it has an impact on how rich we might expect our direct parent simulator to be.

The simulation argument applies similarly to one base world like ours, or to an uncountable number of parallel worlds embedded in Tegmark IV structures. Either way, if you buy case 3, the proportion of simulated-by-a-world-like-ours worlds rises close to 1 (I’m counting worlds “depth-first”, since it seems most intuitive, and infinite simulation depth from worlds like ours seems impossible).

If Tegmark’s picture is accurate, we’d expect to be embedded in some hugely richer base structure—but in Bostrom’s case 3 we’d likely have to get through N levels of worlds-like-ours first. While that wouldn’t significantly change the amount of value on the table, it might make it a lot harder for us to exert influence on the most valuable structures.

This probably argues for your overall point: we’re not the best minds to be making such calculations (either on the answers, or on the expected utility of finding good answers).

• If Tegmark’s picture is accurate, we’d expect to be embedded in some hugely richer base structure—but in Bostrom’s case 3 we’d likely have to get through N levels of worlds-like-ours first. While that wouldn’t significantly change the amount of value on the table, it might make it a lot harder for us to exert influence on the most valuable structures.

I’m not sure it makes sense to talk about “expect” here. (I’m confused about anthropics and especially about first-person subjective expectations.) But if you take the third-person UDT-like perspective here, we’re directly embedded in some hugely richer base structures, and also indirectly embedded via N levels of worlds-like-ours, and having more of the latter doesn’t reduce how much value (in the UDT-utility sense) we can gain by influencing the former; it just gives us more options that we can choose to take or not. In other words, we always have the option of pretending the latter don’t exist and just optimizing for exerting influence via the direct embeddings.

On second thought, it does increase the opportunity cost of exerting such influence, because we’d be spending resources in both the directly embedded worlds and the indirectly embedded worlds to do that. To get around this, the eventual superintelligence doing this could wait until such a time in our universe that Bostrom’s proposition 3 isn’t true anymore (or is true to a lesser extent) before trying to influence richer universes, since presumably only the historically interesting periods of our universe are heavily simulated by worlds-like-ours.

• That seems right.

I’d been primarily thinking about more simple-minded escape/uplift/signal-to-simulators influence (via this us), rather than UDT-influence. If we were ever uplifted or escaped, I’d expect it’d be into a world-like-ours. But of course you’re correct that UDT-style influence would apply immediately.

Opportunity costs are a consideration, though there may be behaviours that’d increase expected value in both direct embeddings and worlds-like-ours. Win-win behaviours could be taken early.

Personally, I’d expect this not to impact our short/medium-term actions much (outside of AI design). The universe looks to be self-similar enough that any strategy requiring only local action would use a tiny fraction of available resources.

I think the real difficulty is only likely to show up once an SI has provided a richer picture of the universe than we’re able to understand/accept, and it happens to suggest radically different resource allocations.

Most people are going to be fine with “I want to take the energy of one unused star and do philosophical/astronomical calculations”; fewer with “Based on {something beyond understanding}, I’m allocating 99.99% of the energy in every reachable galaxy to {seemingly senseless waste}”.

I just hope the class of actions that are vastly important, costly, and hard to show clear motivation for, is small.

• What of exponential total utilitarianism? That’s a total utilitarianism that multiplies the total utility by the exponential of the population. It may be very unlikely, but as population grows, it will eventually come to dominate.

That’s why I think moral theories should be normalised independently, to prevent the super-population ones from winning just by default.
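The dominance claim is just arithmetic: exp(N) grows faster than any fixed credence shrinks, so even a tiny unnormalised weight on the exponential theory eventually swamps everything else. A minimal sketch of this, under an assumed toy model where `w` is a fixed average welfare per person:

```python
import math

# Toy model (assumed for illustration): ordinary total utilitarianism
# vs. an "exponential" variant that multiplies total utility by exp(N).
def total_utility(n, w=1.0):
    return n * w

def exp_total_utility(n, w=1.0):
    return n * w * math.exp(n)

# Even with a tiny credence on the exponential theory, it comes to
# dominate an unnormalised mixture once the population is large enough.
weight = 1e-12
for n in [10, 50, 100]:
    dominates = weight * exp_total_utility(n) > total_utility(n)
    print(n, dominates)
# → 10 False
#   50 True
#   100 True
```

This is exactly the “winning just by default” failure mode that independent normalisation is meant to prevent.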

• That’s why I think moral theories should be normalised independently, to prevent the super-population ones from winning just by default.

I’m assuming this as well. Did I give a different impression in the post? If so I’ll try to clarify.

• Normally when I normalise, I use the expected maximum of the utility function if we just maximised it and nothing else: https://www.lesswrong.com/posts/hBJCMWELaW6MxinYW/intertheoretic-utility-comparison

Therefore if total utilitarianism is not heavily weighted, it will likely remain unimportant; your phrasing “or someone whose moral uncertainty includes total utilitarianism” suggested to me that you thought total utilitarianism would be important even if assigned a low weight, which suggested that it was not being normalised.
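The effect of this kind of normalisation can be sketched as follows (a toy model, not the exact procedure from the linked post): each theory is rescaled by the best value it could attain if it alone chose the policy, so a theory can’t dominate the mixture merely by outputting astronomically large raw numbers.

```python
# Toy sketch of max-normalised intertheoretic comparison (an assumed
# simplification, not the linked post's exact procedure): rescale each
# theory by the best it could do if it alone picked the policy.
def normalise(u, policies):
    best = max(u(p) for p in policies)
    return lambda p: u(p) / best

policies = [0.0, 0.5, 1.0]  # e.g. fraction of resources spent on population

# Raw theories: total utilitarianism outputs huge numbers, a rival doesn't.
total_util = lambda p: 1e30 * p   # scales with population created
other_view = lambda p: 1.0 - p    # prefers the opposite policy

# Unnormalised, total_util wins at any nonzero weight just by raw scale;
# normalised, both scores lie in [0, 1] and the assigned weights matter.
nt, no = normalise(total_util, policies), normalise(other_view, policies)
weights = {"total": 0.1, "other": 0.9}
score = lambda p: weights["total"] * nt(p) + weights["other"] * no(p)
best_policy = max(policies, key=score)
print(best_policy)  # → 0.0 (the low-weight total view no longer dominates)
```

With normalisation in place, a low weight on total utilitarianism really does translate into low influence, which is the point at issue above.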

• your phrasing “or someone whose moral uncertainty includes total utilitarianism” suggested to me that you thought total utilitarianism would be important even if assigned a low weight, which suggested that it was not being normalised.

Ok, I didn’t mean that. What I meant was that if your moral uncertainty includes total utilitarianism, then the total utilitarian part should reason as follows. Would it be clearer / clear enough if I replaced “or someone whose moral uncertainty includes total utilitarianism” with “or the total utilitarianism part of someone’s moral uncertainty”?

• With quantum branching, our universe could have some number like a googolplex of stuff, maybe more. And philosophically, you’re worried about the difference between that and 3^^^3? I get that there’s a big gap there, but I’d guess it’s one that we’re definitionally unable to do useful moral reasoning about.

• I feel like scope insensitivity is something to worry about here. I’d be really happy to learn that humanity will manage to take good care of our cosmic endowment, but my happiness wouldn’t scale properly with the amount of value at stake if I learned we took good care of a super-cosmic endowment. I think that’s the result of my inability to grasp the quantities involved rather than a true reflection of my extrapolated values, however.

My concern is more that reasoning about entities in simpler universes capable of conducting acausal trades with us will turn out to be totally intractable (as will the other proposed escape methods), but since I’m very uncertain about that I think it’s definitely worth further investigation. I’m also not convinced Tegmark’s MUH is true in the first place, but this post is making me want to do more reading on the arguments for and against it. It looks like there was a Rationally Speaking episode about it?

• When you’re faced with numbers like 3^^^3, scope insensitivity is the correct response. A googolplex is already enough to hold every possible configuration of Life as we know it. “Hamlet, but with extra commas in these three places, performed by intelligent starfish” is in there somewhere, in over a googol different varieties. What, then, does 3^^^3 add except more copies of the same?

• Nothing, if your definition of a copy is sufficiently general :-)

Am I understanding you right that you believe in something like a computational theory of identity and think there’s some sort of bound on how complex something we’d attribute moral patienthood or interestingness to can get? I agree with the former, but don’t see much reason for believing the latter.

• I have no idea if there is such a bound. I will never have any idea if there is such a bound, and I suspect that neither will any entity in this universe. Given that fact, I’d rather make the assumption that doesn’t turn me stupid when Pascal’s Wager comes up.

• I just realised that the problem of the limited size of the universe is isomorphic to the problem of how to survive the end of the universe, which I analysed here, but the escape routes described by the OP are different, and rely more on acausal trade and simulation hacking than on physics manipulation.

• MUH doesn’t imply the existence of halting oracles. Indeed, the Computable Universe Hypothesis is supposed to be an extension of the Mathematical Universe Hypothesis, but CUH says that halting oracles do not exist.

• There may be several confusions happening here. First, I’ve been using MUH to mean “ultimate ensemble theory” (i.e., the idea that the Level IV multiverse of all mathematical structures exists), because Wikipedia says MUH is “also known as the ultimate ensemble theory”. But Tegmark currently defines MUH as “Our external physical reality is a mathematical structure”, which seems to be talking just about our particular universe and not saying that all mathematical structures exist. Second, if by “MUH doesn’t imply the existence of halting oracles” you mean that MUH doesn’t necessarily imply the existence of halting oracles in our universe, then I agree. What I meant in the OP is that the ultimate ensemble theory implies that universes containing halting oracles exist in the Level IV multiverse.

Hopefully that clarifies things?

• Isn’t all this massively dependent on how your utility $U$ scales with the total number $N$ of well-spent computations (e.g. one-bit computes)?

That is, I’m asking for a gut feeling here: what are your relative utilities for $10^{100}$, $10^{110}$, $10^{120}$, $10^{130}$ universes?

Say, $U(0)=0$, $U(10^{100})=1$ (gauge fixing); instant pain-free end-of-universe is zero utility, and a successful colonization of the entire universe with a suboptimal black-hole-farming near heat death is unit utility.

Now, per definitionem, the utility $U(N)$ of an $N$-computation outcome is the inverse of the probability $p$ at which you become indifferent to the following gamble: immediate end-of-the-world at probability $(1-p)$ vs an upgrade of computational world-size to $N$ at probability $p$.

I would personally guess that $U(10^{130}) < 2$; i.e. this upgrade would probably not be worth a 50% risk of extinction. This is massively sublinear scaling.
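Spelling out the indifference condition implicit in the definition above (using the gauge $U(0)=0$, $U(10^{100})=1$):

```latex
% Indifference between keeping the status quo and taking the gamble:
p \cdot U(N) + (1 - p) \cdot U(0) \;=\; U(10^{100})
\quad\Longrightarrow\quad
p \cdot U(N) \;=\; 1
\quad\Longrightarrow\quad
U(N) \;=\; \frac{1}{p}.
```

So refusing more than a 50% extinction risk ($p \geq 1/2$) for the upgrade to $N = 10^{130}$ is exactly the statement $U(10^{130}) < 2$.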

• LaTeX is available by pressing CTRL+4/CMD+4 instead of using ‘\$’

• I am not sure how one can talk about the observed universe and the number 3^^^3 in the same sentence, given that the maximum informational content is roughly 10^120 qubits, and the rest is outside the cosmological horizon. Alternatively, if we talk about the simulation argument, then the expression “practical implications” seems out of place.

• I am not sure how one can talk about the observed universe and the number 3^^^3 in the same sentence, given that the maximum informational content is roughly 10^120 qubits, and the rest is outside the cosmological horizon.

Where in the post do you see it suggested that our universe is capable of containing 3^^^3 of anything?

Alternatively, if we talk about the simulation argument, then the expression “practical implications” seems out of place.

How so?

• I doubt that there’s any moral difference between running a person and asking a magical halting oracle what they would have said.