For The People Who Are Still Alive

Max Tegmark observed that we have three independent reasons to believe we live in a Big World: a universe which is large relative to the space of possibilities. For example, on current physics, the universe appears to be spatially infinite (though I’m not clear on how strongly this is implied by the standard model).

If the universe is spatially infinite, then, on average, we should expect that no more than 10^10^29 meters away is an exact duplicate of you. If you’re looking for an exact duplicate of a Hubble volume—an object the size of our observable universe—then you should still on average only need to look 10^10^115 lightyears. (These are numbers based on a highly conservative counting of “physically possible” states, e.g. packing the whole Hubble volume with potential protons at maximum density given by the Pauli Exclusion Principle, and then allowing each proton to be present or absent.)
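That double exponential is easy to sanity-check. Here is a minimal sketch, assuming a slot count of roughly 10^115 protons per Hubble volume (my assumption; the text quotes only the final figure): with N slots, each present or absent, there are 2^N patterns, and the outer exponent lands right around 115.

```python
import math

# Hedged sketch of the conservative state count described above. The slot
# count N = 10**115 is an assumption (roughly, protons packed into a Hubble
# volume at maximum density); only the final 10^10^115 figure is quoted.
N = 10**115                       # assumed number of proton slots
log10_states = N * math.log10(2)  # log10 of the 2**N possible patterns

# The outer exponent: log10(log10(2**N)) comes out near 114.5, i.e. the
# state count is roughly 10^10^114.5 -- the same ballpark as 10^10^115.
print(round(math.log10(log10_states), 1))
```

The point of the calculation is that log10(2^N) is about 0.3 × N, so the outer exponent barely moves even under large changes to the counting details.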

The most popular cosmological theories also call for an “inflationary” scenario in which many different universes would be eternally budding off, our own universe being only one bud. And finally there are the alternative decoherent branches of the grand quantum distribution, aka “many worlds”, whose presence is unambiguously implied by the simplest mathematics that fits our quantum experiments.

Ever since I realized that physics seems to tell us straight out that we live in a Big World, I’ve become much less focused on creating lots of people, and much more focused on ensuring the welfare of people who are already alive.

If your decision not to create a person means that person will never exist at all, then you might, indeed, be moved to create them, for their sakes. But if you’re just deciding whether or not to create a new person here, in your own Hubble volume and Everett branch, then it may make sense to have relatively lower populations within each causal volume, living higher qualities of life. It’s not like anyone will actually fail to be born on account of that decision—they’ll just be born predominantly into regions with higher standards of living.

Am I sure that this statement, which I have just emitted, actually makes sense?

Not really. It dabbles in the dark arts of anthropics, and the Dark Arts don’t get much murkier than that. Or to say it without the chaotic inversion: I am stupid with respect to anthropics.

But to apply the test of simplifiability—it seems, in some raw intuitive sense, that if the universe is large enough for everyone to exist somewhere, then we should mainly be worried about giving babies nice futures rather than trying to “ensure they get born”.

Imagine taking a survey of the whole universe. Every plausible baby gets a little checkmark in the “exists” box—everyone is born somewhere. In fact, the total population count for each baby is something-or-other, some large number that may or may not be “infinite” -

(I should mention at this point that I am an infinite set atheist, and my main hope for being able to maintain this in the face of a spatially infinite universe is to suggest that identical Hubble volumes add in the same way as any other identical configuration of particles. So in this case the universe would be exponentially large, the size of the branched decoherent distribution, but the spatial infinity would just fold into that very large but finite object. And I could still be an infinite set atheist. I am not a physicist, so my fond hope may be ruled out for some reason of which I am not aware.)

- so the first question, anthropically speaking, is whether multiple realizations of the exact same physical process count as more than one person. Let’s say you’ve got an upload running on a computer. If you look inside the computer and realize that it contains triply redundant processors running in exact synchrony, is that three people or one person? How about if the processor is a flat sheet—if that sheet is twice as thick, is there twice as much person inside it? If we split the sheet and put it back together again without desynchronizing it, have we created a person and killed them?

I suppose the answer could be yes; I have confessed myself stupid about anthropics.

Still: I, as I sit here, am frantically branching into exponentially vast numbers of quantum worlds. I’ve come to terms with that. It all adds up to normality, after all.

But I don’t see myself as having a little utility counter that frantically increases at an exponential rate, just from my sitting here and splitting. The thought of splitting at a faster rate does not much appeal to me, even if such a thing could be arranged.

What I do want for myself is for the largest possible proportion of my future selves to lead eudaimonic existences, that is, to be happy. This is the “probability” of a good outcome in my expected utility maximization. I’m not concerned with having more of me—really, there are plenty of me already—but I do want most of me to be having fun.

I’m not sure whether or not there exists an imperative for moral civilizations to try to create lots of happy people so as to ensure that most babies born will be happy. But suppose that you started off with 1 baby existing in unhappy regions for every 999 babies existing in happy regions. Would it make sense for the happy regions to create ten times as many babies leading one-tenth the quality of life, so that the universe was “99.99% sorta happy and 0.01% unhappy” instead of “99.9% really happy and 0.1% unhappy”? On the face of it, I’d have to answer “No.” (Though it depends on how unhappy the unhappy regions are; and if we start off with the universe mostly unhappy, well, that’s a pretty unpleasant possibility...)
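Spelled out, the ratio arithmetic in that thought experiment is just two divisions:

```python
# The ratio arithmetic from the thought experiment above, spelled out.
unhappy, happy = 1, 999
before = unhappy / (unhappy + happy)      # 0.001 -> "0.1% unhappy"

# The happy regions create ten times as many babies (at one-tenth the QOL):
after = unhappy / (unhappy + happy * 10)  # ~0.0001 -> "0.01% unhappy"

print(f"{before:.2%} unhappy before, {after:.2%} unhappy after")
```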

But on the whole, it looks to me like if we decide to implement a policy of routinely killing off citizens to replace them with happier babies, or if we lower standards of living to create more people, then we aren’t giving the “gift of existence” to babies who wouldn’t otherwise have it. We’re just setting up the universe to contain the same babies, born predominantly into regions where they lead short lifespans not containing much happiness.

Once someone has been born into your Hubble volume and your Everett branch, you can’t undo that; it becomes the responsibility of your region of existence to give them a happy future. You can’t hand them back by killing them. That just makes their average lifespan shorter.

It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular.

And that’s why, when there is research to be done, I do it not just for all the future babies who will be born—but, yes, for the people who already exist in our local region, who are already our responsibility.

For the good of all of us, except the ones who are dead.

• I’m completely not getting this. If all possible mind-histories are instantiated at least once, and their being instantiated at least once is all that matters, then how does anything we do matter?

If you became convinced that people had not just little checkmarks but little continuous dials representing their degree of existence (as measured by algorithmic complexity), how would that change your goals?

• Vladimir, many of these anthropic-sounding questions can also translate directly into “What should I expect to see happen to me, in situations where there are a billion X-potentially-mes and one Y-potentially-me?” If X is a kind of me, I should almost certainly expect to see X; if not, I should expect to see Y. I cannot quite manage to bring myself to dispense with the question “What should I expect to see happen next?” or, even worse, “Why am I seeing something so orderly rather than chaotic?” For example, saying “I only care about people in orderly situations” does not cut it as an explanation—it doesn’t seem like a question that I could answer with a utility function.

I have not been able to dissolve “the amount of reality-fluid” without also dissolving my belief that most people-weight is in ordered universes and that most of my futures are in ordered universes, without which I have no explanation for why I find myself in an ordered universe, and no expectation of a future that is ordered as well.

In particular, I have not been able to dissolve reality-fluid into my utility function without concluding that, by virtue of caring only about copies of me who win the lottery, I could expect to win the lottery and actually see that as a result.

Robin, the disjunctive support in favor of a Big World is strong enough that I’m willing to call it pretty much a done deal at this point—the strongest pillar being MWI. With regard to MWI, I would suggest that the number of decoherent regions of the configuration space would be vastly larger than the space of possibilities for neurons firing or not firing.

• I find it suspicious that people’s preferences over population, lifespan, standard of living, and diversity seem to be “kinked” near their familiar world. A world with 1% of the population, standard of living, lifespan, or diversity of their own world seems to most a terrible travesty, almost a horror, while a world with 100 times as much of one of these factors seems to them at most a small gain, hardly worth mentioning. I suspect a serious status quo bias.

• Couldn’t this argument cut the other way? Maybe the only reason we think a small population with an average utility of 100 is worse than a billion people with an average utility of 99 is that we’re “kinked” to a world inhabited by billions.

Personally, when I read “The City and the Stars,” which takes place on a very sparsely populated future Earth, I agreed with the author that it was a bad thing that the local population was less ambitious and curious than the humans of the past. But I did not think it was a horrible travesty that there were so few people. I assume that for the duration of my reading I empathized with the inhabitants, and hence found their current population levels desirable. I’ve noticed the same thing when reading other books set in sparsely populated settings. I wish the inhabitants were better off, but don’t think there need to be more of them.

A typical argument against “quality”-focused population ethics is that it favors much smaller populations with higher qualities of life than we currently have, while an argument against “quantity”-focused population ethics is that it favors much larger populations with lower qualities of life than we currently have. Both of these seem counterintuitive, but which intuition should be kept and which should be rejected? Considering that our moral intuitions developed in small hunter-gatherer bands, I wouldn’t be surprised if the quality-focused population ethics were actually the correct one.

• … huh. I started to disagree with you, and found that all the examples I came up with didn’t actually seem that bad—up to and including a lone loner roaming an empty universe.

On the other hand, they do seem a bit … dull? Lacking the sort of explosive variety I picture in the Good Future.

• On the other hand, they do seem a bit … dull? Lacking the sort of explosive variety I picture in the Good Future.

I agree; I think the reason sparsely populated scenarios seem repugnant to us isn’t that we want to maximize total utility and they have a lower total utility level. Rather, it’s that we value things like diversity, friendship, love, and interpersonal entanglements, and we find the idea of a future where these things do not exist repugnant.

One argument hardcore total utilitarians use to claim people have inconsistent preferences about population ethics is that, when ranking the following populations:

A) Ten billion people with ten thousand utility each, for a total utility of 100 trillion.
B) 200 trillion people with one utility each, for a total utility of 200 trillion.
C) One utility monster with 50 trillion utility.
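As a quick check, the totals named above follow from multiplying head count by per-person utility:

```python
# Tally of the three populations above: (head count, utility per person).
populations = {
    "A": (10 * 10**9, 10_000),   # ten billion people, 10k utility each
    "B": (200 * 10**12, 1),      # 200 trillion people, 1 utility each
    "C": (1, 50 * 10**12),       # one utility monster, 50 trillion utility
}
totals = {name: n * u for name, (n, u) in populations.items()}
print(totals)  # A: 100 trillion, B: 200 trillion, C: 50 trillion
```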

People consider A to be better than both B and C. “Aha!” cry the total utilitarians. “So in one scenario utility is too heavily concentrated, and in another it isn’t concentrated enough! Intransitive preferences! Status quo bias!”

What the hardcore total utilitarians fail to realize is that the reason people find C repugnant isn’t that utility is heavily concentrated; it’s that, in order to have such high utility while being the lone being in the universe, the utility monster must place no value at all on diversity, friendship, love, interpersonal entanglements, and so forth. C isn’t repugnant because utility is too concentrated, or because of status quo bias; it’s repugnant because the lone inhabitant of C lacks a large portion of the gifts we give to tomorrow.

To test this theory I decided to compare populations A, B, and C again, with the stipulation that the multitudes inhabiting A and B were all hermits who never saw each other, and instead of diverse individuals they were repeated genetic duplicates of the same person. Sure enough, I found all three populations repugnant. But I might have found C to be a little less repugnant than A and B.

• It’s possible I’m more of a loner than you, so I find the idea of hermits less repugnant.

On the other hand, clones tend to really mess up my intuitions regardless of the hypothetical. I’m pretty sure they should be penalized for lacking diversity, but as for the actual amount …

EDIT: also, be careful you’re not imagining these hermits not doing anything fun. Agents getting utility from things we don’t value is a surefire way to suck the worth out of a number.

• It’s possible I’m more of a loner than you, so I find the idea of hermits less repugnant.

Maybe I was using too strong a word when I said I found it “repugnant.”

be careful you’re not imagining these hermits not doing anything fun.

I took your advice and tried to imagine the hermits doing things I like doing when I am alone. That was hard at first, since most of the things I like doing alone still require some other person at some point (reading a book requires an author, for instance). But imagining a hermit studying nature, interacting with plants and animals (the animals obviously have to be bugs and other nonsapient, nonsentient animals to preserve the purity of the scenario, but that’s fine with me), doing science experiments, etc., that doesn’t seem repugnant at all.

But I still prefer, or am indifferent to, one utility-monster hermit vs. many normal hermits, especially if the hermits are all clones living in very similar environments.

On the other hand, clones tend to really mess up my intuitions regardless of the hypothetical. I’m pretty sure they should be penalized for lacking diversity, but as for the actual amount …

I’m not sure how much I value diversity that isn’t appreciated. I think I’d prefer a diverse group of hermits to a nondiverse group, but the fact that the hermits never meet and are unable to appreciate each other’s diversity seems to make it less valuable to me, the same way a painting that’s locked in a room where no one will ever see it is less valuable. That may come back to my belief that value usually needs both an objective and a subjective component. On the other hand, I might value diversity terminally as well; as I said, the fact that no one appreciated the hermits’ diversity made it less valuable to me, but not valueless.

• I’m just incredibly skeptical of attempts to do moral reasoning by invoking exotic metaphysical considerations such as anthropics, even if one is confident that ultimately one will have to do so. Human rationality has enough trouble dealing with science. It’s nice that we seem to be able to do better than that, but THIS MUCH better? REALLY? I think that there are terribly strong biases towards deciding that “it all adds up to normality” involved here, even when it’s not clear what ‘normality’ means. When one doesn’t decide that, it seems that the tendency is to decide that it all adds up to some cliche, which seems VERY unlikely. I’m also not at all sure how certain we should be of a big universe, but personally I don’t feel very confident of it. I’d say it’s the way to bet, but not at what odds it remains the way to bet. I rarely find myself in practical situations where my actions would be different if I had some particular metaphysical belief rather than another, though it does come up and have some influence on e.g. my thoughts on vegetarianism.

• I’m familiar with Parfit’s Repugnant Conclusion, and was actually planning to do a post on it at some point or another, because I took one look and said “Isn’t that just scope insensitivity?” But I also automatically translated the problem into Small World terms, so that new people were actually being brought into existence; and, in retrospect, even then, visualized it in terms of a number of people small enough that they could have reasonably unique experiences (that is, not a thousand copies of Robin Hanson looking at a dust speck in slightly different places).

With those provisos in place, the Repugnant Conclusion is straightforwardly “repugnant” only because of scope insensitivity. By specification, each new birth is something to celebrate rather than to regret—it can’t be an existence just marginally good enough to avoid mercy-killing after being born, with the disutility of the death taken into account. It has to be an existence containing enough joys to outweigh any sorrows, so that we celebrate its birth. If each new birth is something to celebrate, then the “repugnance” of the Repugnant Conclusion is just because we’re tossing the thousandfold multiplier of a thousand celebrations out the window.

But if there are diminishing moral returns on diversity, or if people already exist and we’re allocating reality-fluid among them, then you can “shut up and calculate” and find that you shouldn’t create new low-quality people; and then the Repugnant Conclusion fails because each additional birth is not something to celebrate.

By saying “we should ask ourselves what we want”, I didn’t mean to imply that we could trust the answers. But I don’t think that my own answer leads to a preference reversal (in the absence of anthropic paradoxes, where I don’t know what to expect to see either). If I’ve missed the reversal, by all means point it out.

• Well, this is morality we’re talking about, right? So in that case we should ask ourselves what we want.

Let’s say that there are already 10^10^20 people out there, and you’re suddenly blessed with a thousand times the resources. Would you rather have 10^(10^20 + 3) people in existence, or raise the standard of living by a factor of a thousand?
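The exponent arithmetic here can be sanity-checked with a small stand-in N, since 10^10^20 itself is far too large to write out:

```python
# Sanity check of the exponent arithmetic above: multiplying 10**N people
# by a thousand gives 10**(N + 3). The exponent N = 10**20 from the text
# is too large to compute with directly, so a small stand-in N is used.
N = 50  # hypothetical stand-in for 10**20
assert 1000 * 10**N == 10**(N + 3)

# A thousandfold increase adds just 3 to the exponent (3 extra digits):
print(len(str(10**(N + 3))) - len(str(10**N)))
```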

To look at it another way, let’s say that you recently glanced up out of the corner of your eye and saw a dust speck. I have a thousand units of resource. Would you prefer that I simulate a thousand different versions of Robin who saw the dust speck in slightly different locations in a 10 x 10 x 10 grid, or would you rather have a thousand times as much money?

For me, the value of creating new existences is linked to their diversity; as you create more people, you run out of diversity, and so it becomes more important to create the best people rather than to create new people.

Suppose that Earth were the only planet, the only branch, and the only region in all of existence. Then we might want to have mathematicians share all possible developments with each other, in order to prevent them from duplicating each other’s work and let them prove as many new theorems as possible; because if someone here doesn’t prove a theorem, no one will ever know that theorem, ever.

But if there are zillions of Earths, then mathematicians may want not to peek at spoilers, saving the joy of discovering especially fun theorems for themselves—they will concentrate on individually experiencing the highest-quality theorems, rather than trying to cover as much space as possible as a group.

• I confessed myself confused! Really, I did! But even being confused, I’ve got to update as best I can. In a sufficiently large universe, I care more about better lives and less about creating more people. Is that really so complicated?

• “So in that case we should ask ourselves what we want.”

Eliezer,

The standard problem is that people have incoherent preferences over various population scenarios. They prefer to substantially increase the population in exchange for a small change in QOL, but they reject the result of many such tradeoffs in sequence. Critical-level views, or ones that weight both QOL and total independently, all fail at resolution.

• The idea is that you can’t change whether a mind exists, but you can, possibly, change how much of it exists, or perhaps how much of different futures it has. By multiply instantiating it? I guess so. It doesn’t seem to make much sense, but if I don’t presume something like this, I have to weight Boltzmann brains the same as myself.

I’m not trying to rest this argument on the details of the anthropics. Something more along the lines of—in a Big World, I don’t have to worry as much about creating diversity or giving possibilities a chance to exist, relative to how much I worry about average quality of life for sentients. If we create a comfortable number of diverse people with high standards of living in our own Everett branch, we can rely on other diverse people being realized elsewhere.

I have confessed my own confusion about anthropics; I do not at present have any non-paradoxical visualization of this problem in hand. Still—in a Big World, it sounds a little more okay to have fewer people locally with a higher quality of life; do you see the intuitive appeal?

We’re not talking about “few people” in any absolute sense; there are six billion of us already. But say that, as we spread across galaxies, that number goes up to six quadrillion (10^15) instead of six nonillion (10^30), and everyone has 10^15 times the standard of living, or however that scales.

When the vast majority of orders of magnitude in the diversity of realized possibilities, 10^something orders of magnitude, come from quantum branching, isn’t it okay to just take fifteen orders of magnitude for the standard-of-living improvement?

• “It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular.”

This doesn’t make sense to me. A superintelligence could:

1. Create a semi-random plausible human brain emulation de novo; whatever this emulation was, it would be the continuation of some set of human lives.

2. Conduct simulations to explore the likely distribution of minds across the multiverse, as well as the degree to which emulations continuing their lives (in desirable fashions) would serve its altruistic goals. Vast numbers of copies could then be run accordingly, and the costs of exploratory simulation would be negligible by comparison, so there would be little advantage to continuing the lives of beings within our causal region as opposed to entities discovered in exploratory simulation.

3. If we’re only concerned about proportions within ‘extended beings,’ then there’s more bang for the buck in running emulations of rare and exotic beings (fewer emulations are required to change their proportions). The mere fact that we find current people to exist suggests anthropically that they are relatively common (and thus that it’s expensive to change their proportions), so current local people would actually be neglected almost entirely by your sort of Big World average utilitarian.

• Carl is right; this is a minefield in terms of misleading intuitions. Also, there is already a substantial philosophy literature dealing with it; best to start with what they’ve learned.

• Steven, I call the little continuous dials the “amount of reality-fluid” to remind myself of how confused I am.

“Unpleasant possibility” isn’t an argument, but I didn’t feel like going into the rather complex issues involved (probability of UnFriendly AI running ancestor simulations, how many of them, versus probability of Friendly AI, versus probability of hitting the Unhappy Valley with a near-miss FAI or a meddling-dabbler AGI trained on smiling faces, versus probability of inhuman aliens creating minds that we care about, plus going into the issues of QTI).

Nazgul, you can act swiftly to capture all resources in your immediate vicinity regardless of whether you plan to share them out among few or many individuals.

Robin, spatial infinity would definitely be large relative to the volume of physical possibilities (infinite versus finite). With many-worlds and a mangling cutoff… then not every physical possibility would be realized, but I would expect most possible babies would be. All the babies worth making could be duplicated many times over among the Everett branches of all moral civilizations, even if any given branch kept their populations low and living standards high. Does it look different to you?

• The data you point to only seem to suggest the universe is large; how do they also suggest it “is large relative to the space of physical possibilities”? The likelihood ratio seems pretty close as far as I can see.

With steven, I don’t see how, on your account, any of your actions can in fact affect the “proportion of my future selves to lead eudaimonic existences”. If people in your past couldn’t affect the total chance of your existing, how is it that you can affect the total chance of any particular future you existing? And how can there be a differing relative chance if the total chances all stay constant?

• Since I probably don’t care about the abstract existence of music, but about experiencing music, this is correct for music for the wrong reasons, namely limited attention bandwidth. Analogy seduces, but doesn’t seem to carry over...

• Robin, I think I’m being consistent in caring about lifespan, standard of living, and diversity while not caring about population. (Diversity will look like concern for population, but it will run into diminishing returns; still, if our Earth were the only civilization, then indeed there would be lots of experiences as yet unrealized, and the diversity motive would be strong. In other words, I’d consistently want a hundred times as much diversity as what we see in the immediate world around us.)

Suppose that instead of talking about people, we were just talking about music or theorems.

It seems to me that a lot of what I have to say on this subject carries right over—that if there’s very little music or math already, then we are concerned about creating more of it so that experiences don’t go unrealized. But if the space is already well-covered up to the granularity at which we care about diversity (which is less than tiny variations in note length), then we are more interested in hearing the best music, and less interested in hearing new music.

• Robin,

Some brute preferences and values may be inculcated by connected social processes. Social psychology seems to point to flexible moral learning among young people (e.g. developing strong moral feelings about ritual purity as one’s culture defines it, through early exposure to adults reacting in the prescribed ways). Sexual psychology seems to show similar effects: there is a dizzying variety of learned sexual fetishes, and they tend to be culturally laden and connected to the experiences of today, but that doesn’t make them wrong. Moral education dedicated to upholding the status quo may create real preferences for that status quo (in addition to the bias you mention, not in place of it) in a ‘moral miracle’ but not a physical one:

http://lesswrong.com/lw/sa/the_gift_we_give_to_tomorrow/

• Eliezer, it seems you are just expressing the usual intuition against the “repugnant conclusion”: that as long as the universe has a lot more creatures than are on Earth now, having even more creatures can’t be very important relative to each one’s quality of life.

But in technical terms, if you can talk about how much of a mind exists, and can promote more of one kind of mind relative to another, then you can talk about how much they all exist, and can want to promote more minds existing to a larger degree.

• I still see no adequate answer to the question of how you can change P(A|B) if you can’t change P(A) or P(B). If every possible mind exists somewhere, and if all that matters about a mind is that it exists somewhere, then no actions make any difference to what matters.

• Most of the concepts here are ethical. Whether some contraption has the same personal identity as you do, and whether it’s good to have that contraption copied/destroyed, is a moral question, a case in which the unnatural concept of what’s right gets extended to very strange situations. Whether we cut this question in terms of personal identity or patterns of elementary particles is a matter of the cognitive algorithm used to determine the decision. It doesn’t matter whether an upload is called “the same person” as its biological preimage; it only matters whether it’s a good decision to make this change. In our environment, the only analogous decision is whether you leave a person alive, maybe trading a life for something greater. When trying to extend the concept of personal identity itself, people make the mistake of trying to extend its instrumental value along with it, but this value breaks along with the concept when the context becomes sufficiently unnatural.

Our morality evolved without taking into account the fine properties of the physical world, so at least moral decisions drawn in a context so close to our evolutionary environment that all the classical hallucinations still hold shouldn’t require those properties of the physical world to be taken into account. The decision between more average people vs. fewer, happier people shouldn’t be justified in terms of many-worlds; it should be justified just as well in terms of our cognitive architecture, without breaking out of the framework of classical hallucinations.

• What I do want for my­self, is for the largest pos­si­ble pro­por­tion of my fu­ture selves to lead eu­daimonic ex­is­tences, that is, to be happy. This is the “prob­a­bil­ity” of a good out­come in my ex­pected util­ity max­i­miza­tion. I’m not con­cerned with hav­ing more of me—re­ally, there are plenty of me already—but I do want most of me to be hav­ing fun.

Are you at­tracted to quan­tum suicide to win the lot­tery then? (Put to one side for a mo­ment the con­se­quences for your friends, etc who would have to deal with your pass­ing away)

• Are you at­tracted to quan­tum suicide then?

How does quan­tum suicide in­crease the pro­por­tion of one’s fu­ture selves who are happy?

• You could, for ex­am­ple, play the lot­tery and cor­re­late your sur­vival with win­ning…

• As long as you don’t count the fu­ture selves who die in the other wor­lds in the de­nom­i­na­tor. It’s not clear to me that they shouldn’t count. Us­ing that logic, though, you could just com­mit painless suicide any­time you’re slightly un­happy, and your only sur­viv­ing selves would never be un­happy!

• And what’s wrong with this idea?

Evolu­tion gave us a strong in­stinct to not die, but evolu­tion also gave us the false im­pres­sion that our pro­gres­sion through time re­sem­bled a line rather than a tree, and that there’s only one planet earth. Know­ing now that you are (the al­gorithm of) a tree, per­haps it is worth re­think­ing the dy­ing=bad idea? Death, if used se­lec­tively, could mean a very happy (if less dense) tree.

If we live in a big world, this logic be­comes very com­pel­ling. Who cares about kil­ling 99% of your­self if you’re in­finite any­way, and the up­side is that you end up with an in­finite amount of hap­piness rather than an in­finite sad/​happy mix­ture?

• I can’t tell if you’re playing devil’s advocate or not… Surely you’ve heard of the categorical imperative and can predict the radical decrease in the happiness density of the universe if that were the reasoning employed by all sapient beings.

• I’m not fol­low­ing. If all sapi­ent be­ings ap­plied this rea­son­ing, only the most happy would de­cide not to die, and the hap­piness den­sity would in­crease.

• Wrote this and hit reload, but Kaj beat me to it.

I’m thinking most intelligences would kill themselves a lot in this scenario, leading to a very empty universe for any particular one of them. The relevant density is “super happy entities per cubic parsec”, not “super happy entities per total entities”.
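The distinction between the two densities can be made concrete with toy numbers (everything here is hypothetical, purely for illustration):

```python
# Toy model: a fixed region of space with N minds, a fraction f of which
# are happy. The unhappy minds all quantum-suicide.
N = 1_000_000        # minds in the region before any suicides
f = 0.1              # fraction that are happy
volume = 1.0         # size of the region, in "cubic parsecs"

survivors = int(N * f)   # only the happy minds remain

# Happiness per *total entities* goes up...
frac_happy_before = (N * f) / N            # 0.1
frac_happy_after = survivors / survivors   # 1.0

# ...but happy minds per *cubic parsec* is unchanged, while the region's
# total population collapses tenfold: an emptier universe, not a happier one.
happy_per_volume_before = (N * f) / volume   # 100000.0
happy_per_volume_after = survivors / volume  # 100000.0
total_per_volume_after = survivors / volume  # 100000.0, down from 1000000.0
```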

Con­sider, right now, if all mem­bers of some re­li­gion kil­led them­selves un­less their mir­a­cles started com­ing true. From the per­spec­tive of al­most all the mea­sure of non-mem­bers of the re­li­gion, it would look like a sim­ple suicide cult.

Or imag­ine the LHC re­ally could cre­ate a black hole and de­stroy the earth. Every­one votes on a low prob­a­bil­ity pos­i­tive event and we trig­ger the LHC if it doesn’t hap­pen. From the per­spec­tive of the mea­sure of al­most all the aliens in the uni­verse (if they ex­ist) our sun has a black hole or­bit­ing at 93 mil­lion miles.

If this sort of pro­cess was con­stantly hap­pen­ing among all in­tel­li­gent species on all planets, we’d be in an empty uni­verse (well, one with a lot of lit­tle black­holes any­way). The prob­a­bil­ity of run­ning into other in­tel­li­gent life “post an­thropic prin­ci­ple” would be their prac­ti­cally non-ex­is­tent mea­sure times our prac­ti­cally non-ex­is­tent mea­sure.

Some­thing I’ve ac­tu­ally won­dered about is whether the first repli­cat­ing molecule with the evolu­tion­ary po­ten­tial to gen­er­ate in­tel­li­gent life was rad­i­cally un­likely (re­quiring a feat of quan­tum chi­canery), and that’s why the uni­verse ap­pears empty to us. I don’t know of any­one who pub­lished this first, but I as­sume some­one beat me to it be­cause it of­ten seems to me that all think­able thoughts have gen­er­ally been gen­er­ated by some­one else decades or cen­turies ago :-P

• If this sort of pro­cess was con­stantly hap­pen­ing among all in­tel­li­gent species on all planets, we’d be in an empty uni­verse (well, one with a lot of lit­tle black­holes any­way). The prob­a­bil­ity of run­ning into other in­tel­li­gent life “post an­thropic prin­ci­ple” would be their prac­ti­cally non-ex­is­tent mea­sure times our prac­ti­cally non-ex­is­tent mea­sure.

Huh.

That’s the most in­ter­est­ing ex­pla­na­tion for the Fermi para­dox in a while. (Not ex­actly plau­si­ble, mind you, but an in­ter­est­ing idea nev­er­the­less.)

• To be pre­cise, the ar­gu­ment would run that the uni­verse will end up be­ing dom­i­nated by be­ings that care more about their mea­sure, and so there is a cat­e­gor­i­cal im­per­a­tive for hap­pier be­ings to care more about their mea­sure.

• Sure, if ev­ery­one re­al­ized what a great idea quan­tum suicide was. But I think you can rest as­sured that that’s not go­ing to hap­pen. As­sum­ing, that is, that it is ac­tu­ally a good idea…

Also I don’t gov­ern my ac­tion with the cat­e­gor­i­cal im­per­a­tive. It works in some cases, but in gen­eral it is awful.

• You have to as­sume that ev­ery­one will join in on this scheme, if you’re try­ing to ar­gue in fa­vor of it. If only a limited sub­set of peo­ple kill them­selves when they’re un­happy, then that leaves a huge num­ber of peo­ple mourn­ing the (to them) mean­ingless death of their loved ones. You’d have to not only kill your­self, but also make sure that any­one who was hurt by your death died as well.

• I was as­sum­ing that you were un­con­cerned with the sad­ness/​mourn­ing of those around you, or were pre­pared to make that trade­off for some rea­son. (For ex­am­ple, ego­ism, or per­haps lack of friends/​re­la­tions, or ex­treme need for the money)

• Huh. Copen­hagen in­ter­pre­ta­tion of quan­tum me­chan­ics isn’t pretty, but I’m not ready to die for it.

• Carl, that as­sumes QTI, i.e., no sub­jec­tive con­di­tional prob­a­bil­ity ever con­tains a Death event. Things do get strange then.

• Thanks for the Por­tal refer­ence. That was great.

• Eliezer, also con­sider this: sup­pose I am a mad sci­en­tist try­ing to de­cide be­tween mak­ing one copy of Eliezer and tor­tur­ing it for 50 years, or on the other hand, mak­ing 1000 copies of Eliezer and tor­tur­ing them all for 50 years.

The second possibility is much, much worse for you personally. For in the first possibility, you would subjectively have a 50% chance of being tortured. But in the second possibility, you would have a subjective chance of 99.9% of being tortured. This implies that the second possibility is much worse, so creating copies of bad experiences multiplies their badness, even without diversity. But this implies that copies of good experiences should also multiply: if I make a million copies of Eliezer having billions of units of utility, this would be much better than making only one, which would give you only a 50% chance of experiencing it.
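The anthropic arithmetic being assumed here, counting the original plus each copy as an equally weighted "location" one might find oneself in:

```latex
P(\text{tortured} \mid 1 \text{ copy}) = \frac{1}{1+1} = 50\%,
\qquad
P(\text{tortured} \mid 1000 \text{ copies}) = \frac{1000}{1000+1} \approx 99.9\%
```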

• I’m find­ing Eliezer’s view at­trac­tive, but it does have a few coun­ter­in­tu­itive con­se­quences of its own. If we some­how en­coun­tered shock­ing new ev­i­dence that MWI, &c. is false and that we live in a small world, would weird peo­ple sud­denly be­come much more im­por­tant? Did Eliezer think (or should he have thought) that weird peo­ple are more im­por­tant be­fore com­ing to be­lieve in a big world?

• it seems in some raw in­tu­itive sense, that if the uni­verse is large enough for ev­ery­one to ex­ist some­where, then we should mainly be wor­ried about giv­ing ba­bies nice fu­tures rather than try­ing to “en­sure they get born”.

That’s an in­ter­est­ing in­tu­ition, but one that I don’t share. I con­cur with Steven and Vladimir. The whole point of the clas­si­cal-util­i­tar­ian “Each to count for one and none for more than one” prin­ci­ple is that the iden­tity of the col­lec­tion of atoms ex­pe­rienc­ing an emo­tion is ir­rele­vant. What mat­ters is in­creas­ing the num­ber of con­figu­ra­tions of atoms in states pro­duc­ing con­scious hap­piness and re­duc­ing those pro­duc­ing con­scious suffer­ing—hence reg­u­lar to­tal util­i­tar­i­anism. (Of course, figur­ing out what it means to “in­crease” and “re­duce” things that oc­cur in­finitely many times is an­other mat­ter.)

• Not sure global di­ver­sity, as op­posed to lo­cal di­ver­sity or just sheer quan­tity of ex­pe­rience, is the only rea­son I pre­fer there to be more (happy) peo­ple.

• Be hon­est, how many of you finished the Por­tal Song at the end of this post?

• I was going to make about the same objection steven makes—if you take this stuff (MWI, anthropic principle, large universes) seriously as a guide to practical, everyday ethical decision-making, it seems to lead inexorably to nihilism—no decision you make matters very much. That doesn’t sound at all desirable, so my instinct is to suspect that there is something wrong either with the physics ideas, or (more likely) with the way they are being applied. But maybe not! Maybe nihilism is valid, but then why are we bothering to be rational or to do anything at all?

Scott Aaron­son’s ob­jec­tions might carry more weight:

But what could NP-hard­ness pos­si­bly have to do with the An­thropic Prin­ci­ple? Well, when I talked be­fore about com­pu­ta­tional com­plex­ity, I for­got to tell you that there’s at least one foolproof way to solve NP-com­plete prob­lems in polyno­mial time. The method is this: first guess a solu­tion at ran­dom, say by mea­sur­ing elec­tron spins. Then, if the solu­tion is wrong, kill your­self! If you ac­cept the many-wor­lds in­ter­pre­ta­tion of quan­tum me­chan­ics, then there’s cer­tainly some branch of the wave­func­tion where you guessed right, and that’s the only branch where you’re around to ask whether you guessed right! It’s a won­der more peo­ple don’t try this… Now, if you ac­cept the NP Hard­ness As­sump­tion (as I do), then you also be­lieve that what­ever else might be true of the An­thropic Prin­ci­ple, the pre­vi­ous two ex­am­ples must have been in­cor­rect uses of it. In other words, you now have a non­triv­ial con­straint on an­thropic the­o­riz­ing: No ap­pli­ca­tion of the An­thropic Prin­ci­ple can be valid, if its val­idity would give us a means to solve NP-com­plete prob­lems in polyno­mial time.
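A classical toy version of the scheme Aaronson describes (the subset-sum instance and all numbers are made up for illustration): guess a candidate solution at random in each "branch" and discard every branch that guessed wrong, the stand-in for the guesser not surviving. Any branch you find yourself alive in holds a correct solution.

```python
import random

def anthropic_subset_sum(nums, target, branches=10_000):
    """Guess a random subset in each 'branch'; branches that guessed
    wrong are discarded (the analogue of the guesser not surviving).
    Every surviving branch holds a subset summing to target."""
    survivors = []
    for _ in range(branches):
        guess = [x for x in nums if random.random() < 0.5]
        if sum(guess) == target:   # only correct guessers "survive"
            survivors.append(guess)
    return survivors

sols = anthropic_subset_sum([3, 34, 4, 12, 5, 2], 9)
# From inside a surviving branch, the "hard" problem looks solved for free;
# from outside, almost all branches are gone.
print(sols[0] if sols else "no surviving branch")
```

Classically, of course, this is just rejection sampling with exponentially small acceptance probability; the anthropic move is to refuse to count the discarded branches.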
• mtraven, Why are we “bothering to be rational or to do anything at all” (rather than being nihilists), if nihilism seems likely to be valid? Well, as long as there is any chance, say, only a .0000000000000001 chance, that nihilism is invalid, there is nothing to lose and possibly something to gain from assuming that nihilism is invalid. This refutes nihilism completely as a serious alternative.

I think ba­si­cally the same is true about Yud­kowsky’s fear that there are in­finitely many copies of each per­son. Even if there is only a .0000000000000001 chance that there are only finitely many copies of each of us, we should as­sume that that is the case, since that is the only type of sce­nario where there can be any­thing to gain or lose, and thus the only pos­si­ble type of sce­nario that might be a good idea to as­sume to be the case.

That is, given the assumption that one cannot affect infinite amounts by adding, no matter how much one adds. Toward this assumption I am an agnostic, if not an atheist. For example, adding an infinite amount A to an infinite amount A can, I think, make 2A rather than 1A. Ask yourself which you would prefer: 1) being happy one day per year and suffering the rest of each year, for an infinite number of years, or 2) the other way around? Would you really not care which of these two happened?

You would. Note that this is the case even when you realize that a year is only finitely more than a day, meaning that each of alternatives 1 and 2 would give you infinitely much happiness and infinitely much suffering. This strongly suggests that adding an infinite amount A to an infinite amount A produces more than A. Then why wouldn’t adding a finite amount B to an infinite amount A also produce more than A? I would actually suggest that, even given classical utilitarianism, my life would not be worthless just because there is infinitely much happiness and infinitely much suffering in the world with or without me. Each person’s finite amount of happiness must be of some value regardless of the existence of infinite amounts of happiness elsewhere. I find this plausible because, were it not for precisely the individual finite beings with their finite amounts of happiness each, there would be no infinite sums of happiness in the universe. If the happiness of every single one of the universe’s infinitely many, each finitely happy, beings were worthless, the infinite sum of their happiness would have to be worthless too. And that an infinite sum of happiness would be worthless is simply too ridiculous a thought to be taken seriously—given that anything at all is to be regarded as valuable, an assumption I concluded was valid at the beginning of this post.

• Eliezer, I don’t think your re­al­ity fluid is the same thing as my con­tin­u­ous di­als, which were in­tended as an al­ter­na­tive to your bi­nary check marks. I think we can use al­gorith­mic com­plex­ity the­ory to an­swer the ques­tion “to what de­gree is a struc­ture (e.g. a mind-his­tory) im­ple­mented in the uni­verse” and then just make sure valuable struc­tures are im­ple­mented to a high de­gree and dis­valuable struc­tures are im­ple­mented to a low de­gree. The rea­son most minds should ex­pect to see or­dered uni­verses is be­cause it’s much eas­ier to spec­ify an or­dered uni­verse and then lo­cate a mind within it, than it is to spec­ify a mind from scratch. If this com­mits me to be­liev­ing funny stuff like peo­ple with ar­rows point­ing at them are more al­ive than peo­ple not with ar­rows point­ing at them, I’m in­clined to say “so be it”.

• Eliezer, our data only show that the universe looks pretty flat, not that it is exactly flat. And it could be finite and exactly flat with a non-trivial topology. On whether all babies are duplicated in MWI, it seems to depend on exactly what part of the local physical state is required to be the same.

• I’m wor­ried this is just an elab­o­rate jus­tifi­ca­tion to not have as many chil­dren as pos­si­ble. But I’m not con­vinced that I’m obli­gated to help all other ‘be­ings’, of any class or cat­e­gory, in­stead of merely not harm­ing (most of) them.

• 20 Jun 2012 20:02 UTC

… But there’s no sense cry­ing over ev­ery mis­take, you just keep on try­ing till you run out of ne­gen­tropy.

• Do you have any pointers on why you believe so firmly in an infinite universe? Reading books on physics (from mainstream authors like Stephen Hawking or Christian Magnan, or from less conventional books like Julian Barbour’s The End of Time), I got the impression that the current consensus is that the universe is finite, expanding, but currently finite. There may be no limit to its size if, as it seems now, the expansion rate is growing—but right now it has a finite size.

And from a purely theoretical point of view, infinity doesn’t seem very coherent to me. Infinity doesn’t, well, exist; it’s only the limit of a finite process. Saying “the universe is infinite” doesn’t mean much. Your reasoning seems like it is, to quote your own words, “assuming an infinity that has not been obtained as the limit of a finite calculation”, which is an illegal operation in maths.

• Try this or this or this. Pop­u­lar physics books are re­ally bad about these things.

• I no­ticed you changed units be­tween the av­er­age dis­tance of an­other you and the av­er­age dis­tance of an­other iden­ti­cal uni­verse. That seems rather pointless. A lightyear is only 16 or­ders of mag­ni­tude larger than a me­ter, and is lost in round­ing com­pared to 10^115 or­ders of mag­ni­tude.

You mentioned a portion of people. I don’t think there’s any reason to believe that the universe is this big but still finite, and if it is infinite, there’s no way to measure a fraction of people. There are infinitely many people whose lives are worth living and infinitely many whose lives are not. If you add it all together, the result depends on what order you add them in. Dividing is similarly nonsensical. You can’t change the proportion of people who are happy, because there is no proportion of people who are happy.
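The order-dependence here is the Riemann rearrangement theorem: a conditionally convergent series can be made to sum to different values just by reordering its terms. A quick illustration with the alternating harmonic series (nothing here is specific to the population case; it just shows why "add it together" is ill-defined):

```python
import math

def partial_sum(terms, n):
    """Sum the first n terms of an infinite series given as a generator."""
    total = 0.0
    for _ in range(n):
        total += next(terms)
    return total

def natural_order():
    # 1 - 1/2 + 1/3 - 1/4 + ...  converges to ln(2)
    k = 1
    while True:
        yield (-1) ** (k + 1) / k
        k += 1

def rearranged():
    # Same terms, but two positives for every negative; this ordering
    # converges to (3/2)*ln(2) instead.
    pos, neg = 1, 2
    while True:
        yield 1 / pos; pos += 2
        yield 1 / pos; pos += 2
        yield -1 / neg; neg += 2

print(partial_sum(natural_order(), 10**6))   # ~ 0.693  (ln 2)
print(partial_sum(rearranged(), 10**6))      # ~ 1.040  ((3/2) ln 2)
```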

“But on the whole, it looks to me like if we de­cide to im­ple­ment a policy of rou­tinely kil­ling off cit­i­zens to re­place them with hap­pier ba­bies … We’re just set­ting up the uni­verse to con­tain the same ba­bies, born pre­dom­i­nantly into re­gions where they lead short lifes­pans not con­tain­ing much hap­piness.”

That would mean more hap­piness. Also, I don’t see the prob­lem with short lifes­pans. My in­stinct is that you think con­scious­ness end­ing is bad, but that hap­pens ev­ery time you go to sleep, and I don’t see you com­plain­ing about that.

• Ever since I re­al­ized that physics seems to tell us straight out that we live in a Big World, I’ve be­come much less fo­cused on cre­at­ing lots of peo­ple, and much more fo­cused on en­sur­ing the welfare of peo­ple who are already al­ive.

I don’t like that rea­son­ing. If you cre­ate an in­ter­est­ing per­son here, in our hub­ble vol­ume, their in­ter­est­ing­ness can re­flect back to you. The other “copies” 10^(10^50) or so light years away will never have any­thing to do with you.

• Eliezer, when­ever you start think­ing about peo­ple who are com­pletely causally un­con­nected with us as morally rele­vant, alarm bells should go off.

What’s worse, though, is that if your opinion on this is driven by a desire to justify not agreeing with the “repugnant conclusion”, it may signify problems with your morality that could annihilate humanity if you give your morality to an AI. The repugnant conclusion requires valuing the bringing into existence of hypothetical people with total utility x as much as reducing the utility of existing people by x, or annihilating people with utility x. Give that morality to a fast-takeoff AI and it will quickly replace all humans with entities with greater capacity for utility. If the AI is programmed to believe the problem with the “repugnant conclusion” is what you claim, the AI will instead create randomized (for high uniqueness) minds with high capacity for utility, still annihilating humans.

• I think many value the qual­ity of life of their friends and loved ones more than they value hy­po­thet­i­cal far-fu­ture ab­strac­tions. This has to do with evolu­tion’s im­pact on psy­chol­ogy—and doesn’t have much to do with how big the uni­verse is.

• in a Big World, I don’t have to worry as much about cre­at­ing di­ver­sity or giv­ing pos­si­bil­ities a chance to ex­ist, rel­a­tive to how much I worry about av­er­age qual­ity of life for sen­tients.

Can’t say fairer than that.

Eliezer, given the pro­por­tion of your selves that get run over ev­ery day, have you stopped cross­ing the road? Leav­ing the house?

Or do you just make sure that you im­prove the stan­dard of liv­ing for ev­ery­one in your Hub­ble Sphere by a cer­tain num­ber of utilons and call it a good day on av­er­age?

• You might be in­ter­ested in the last sec­tion of Mo­tion Moun­tain, the free on­line physics text­book. It pre­sents ab­solute limits for var­i­ous mea­sures of the uni­verse, de­rived from quan­tum me­chan­ics and gen­eral rel­a­tivity. It ap­pears that we live in a finite uni­verse, though all of this stuff is pretty spec­u­la­tive.

• Good lives ver­sus many life­forms? Yes please.

• Eliezer:

Vladimir, many of these an­thropic-sound­ing ques­tions can also trans­late di­rectly into “What should I ex­pect to see hap­pen to me, in situ­a­tions where there are a billion X-po­ten­tially-mes and one Y-po­ten­tially-mes?” If X is a kind of me, I should al­most cer­tainly ex­pect to see X; if not, I should ex­pect to see Y. I can­not quite man­age to bring my­self to dis­pense with the ques­tion “What should I ex­pect to see hap­pen next?” or, even worse, “Why am I see­ing some­thing so or­derly rather than chaotic?” For ex­am­ple, say­ing “I only care about peo­ple in or­derly situ­a­tions” does not cut it as an ex­pla­na­tion—it doesn’t seem like a ques­tion that I could an­swer with a util­ity func­tion.
I currently think a subjective point of view should be assumed only for a single decision, with all the semantics preconfigured in the utility maximizer that makes the decision. No continuity of experience enters this picture; if the agent operates continuously, it’s just a sequence of utility-maximizer configurations, which are to be determined from each of the decision points to hold the best beliefs, and generally the best cognitive features (if it’s a sequence, then certain kinds of cognitive rituals become efficient). So there is no future “me”; the future “me” is a decision point that needs to be determined according to the preferences of the current decision, and it might be that there is no future “me” planned at all. This reduces expectation to both utility and probability, as you have both uncertain knowledge about your future version and value associated with its possible states. So, you don’t plan to see something chaotic because you don’t predict something chaotic to happen.

I have not been able to dis­solve “the amount of re­al­ity-fluid” with­out also dis­solv­ing my be­lief that most peo­ple-weight is in or­dered uni­verses and that most of my fu­tures are in or­dered uni­verses, with­out which I have no ex­pla­na­tion for why I find my­self in an or­dered uni­verse and no ex­pec­ta­tion of a fu­ture that is or­dered as well.
You pre­dict the fu­ture to be or­dered, and you are con­figured to know the en­vi­ron­ment to be or­dered. An Oc­cam’s ra­zor-like prior is ex­pected to con­verge on a true dis­tri­bu­tion, what­ever that is, and so, be­ing a gen­eral pre­dic­tor, you weight pos­si­bil­ities this way.

In par­tic­u­lar, I have not been able to dis­solve re­al­ity-fluid into my util­ity func­tion with­out con­clud­ing that, by virtue of car­ing only about copies of me who win the lot­tery, I could ex­pect to win the lot­tery and ac­tu­ally see that as a re­sult.
You can’t actually see that result; you may only expect your future state to see that result. If there is a point in preparing for winning/​losing the lottery, and you only care about winning (that is, in case you don’t win, anything you’ve done won’t matter), you’ll make preparations for the winning option regardless of your chances; that is, you’ll act as if you expect to win. If you include your thoughts (probability distribution and utility) in the domain of decisions, you might as well reconfigure yourself to believe that you’ll most certainly win. Not a realistically plausible situation, and it changes the semantics of truth in representation, and is hence counterintuitive, but it delivers the same win.

• Eliezer: I’m not sure you’d re­ally get much in­terfer­ence effects be­tween in­dis­t­in­guish­able hub­ble vol­umes.

What I mean is you’d need some event that has in its causal his­tory stuff from two “equiv­a­lent” hub­ble vol­umes, right?

Other­wise, well, how would any non­triv­ial in­terfer­ence effects re­lated to the in­dis­t­in­guisha­bil­ity be­tween mul­ti­ple hub­ble vol­umes man­i­fest? Con­figu­ra­tion space isn’t over the hub­ble vol­umes but over the en­tirety of the uni­verse, right?

• and where I just said “uni­verse” I meant a 4D thing, with the di­als each refer­ring to a 4D struc­ture and time never en­ter­ing into the pic­ture.

• the most im­por­tant adap­ta­tion an ide­ol­ogy can make to im­prove its in­clu­sive fit­ness for con­sump­tion by the hu­man brain is to

1. re­frain from mak­ing falsifi­able claims

2. con­vince its fol­low­ers to ag­gres­sively expand

1 is accomplished by making the ideology rest on a priori claims. Everything that rests on top of that claim can be perfectly logical given the premise. Since most people don’t examine their beliefs axiomatically, few will question the premise as long as they are provided the bare minimum of comfort. 2 is accomplished by activating the “morally righteous” centers of the brain. We’re not aggressively expanding, we’re bringing democracy/​communism/​god/​whatever to the heathens.

Hav­ing a high stan­dard of liv­ing seems in­com­pat­i­ble with nat­u­ral se­lec­tion. Like sad­ness and pain lead­ing to greater in­clu­sive fit­ness in an in­di­vi­d­ual, de­vot­ing more re­sources to ex­pan­sion in­creases the in­clu­sive fit­ness of any so­cial sys­tem. Those who don’t ex­pand are swal­lowed by those who do. It only takes one ag­gres­sively ex­pan­sion­ist civ­i­liza­tion per hub­ble vol­ume to wipe out all other forms of civ­i­liza­tion.

• Also “stan­dard model” doesn’t mean what you think it means and “un­pleas­ant pos­si­bil­ity” isn’t an ar­gu­ment.

• You shouldn’t waste your time figur­ing out how to act in an ex­pand­ing mul­ti­verse, as op­posed to a sim­ple, sin­gle and uni­tary world. The prob­lem of how to act and live even in the lat­ter case is tough enough. Con­di­tion­ing your choices on the former per­spec­tive is try­ing to think a god, when you’re in fact an an­i­mal.

• Eliezer, you know perfectly well that the the­ory you are sug­gest­ing here leads to cir­cu­lar prefer­ences. On an­other oc­ca­sion when this came up, I started to in­di­cate the path that would show this, and you did not re­spond. If cir­cu­lar prefer­ences are jus­tified on the grounds that you are con­fused, then you are jus­tify­ing those who said that dust specks are prefer­able to tor­ture.

• I don’t buy the idea of Everett branching, for at least this reason:

Let’s say that, in an experiment, a parallel Universe is created with probability 1/2. In some Universes this experiment will be continued and parallel Universes will be created; in some it will not.

Ques­tion. Is the par­allel Uni­verse of the par­allel Uni­verse our par­allel Uni­verse? Some­times not.

So, we have the par­allel and the semi-par­allel wor­lds. And so on.