Solving the Doomsday argument

The Doomsday argument gives an anthropic argument for why we might expect doom to come reasonably soon. It's known that the Doomsday argument works under SSA, but not under SIA.

OK, but since different anthropic probability theories are correct answers to different questions, what are the question versions of the Doomsday argument, and is the original claim correct?

No Doomsday on birth rank

Simplify the model by assuming there is either a large universe (no Doomsday any time soon) with many, many future humans, or a small one (a Doomsday reasonably soon, within the next 200 billion people, say), with equal probability. In order to think in terms of frequencies, which come more naturally to humans, we can imagine running the universe many, many times, each run with the same chance of Doomsday.

There are roughly 108.5 billion humans who have ever lived. So, asking:

• What proportion of people with birth rank 108.5 billion live in a small universe (with a Doomsday reasonably soon)?

The answer to that question converges to 1/2, the SIA probability. Half of the people with that birth rank live in small universes, half in large universes.
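This convergence can be checked with a quick Monte Carlo sketch. The population sizes below are toy assumptions; any small and large size above the given birth rank behave the same way:

```python
import random

SMALL, LARGE = int(200e9), int(200e12)  # toy population sizes (assumptions)
RANK = int(108.5e9)                     # birth rank of interest

def sia_proportion(runs=100_000):
    # Pool every observer with birth rank RANK across many runs of the
    # universe, then ask what fraction of them live in a small universe.
    in_small = total = 0
    for _ in range(runs):
        pop = SMALL if random.random() < 0.5 else LARGE
        if pop > RANK:  # this run contains exactly one observer with rank RANK
            total += 1
            in_small += (pop == SMALL)
    return in_small / total

print(sia_proportion())  # ≈ 0.5
```

Since both universe sizes exceed the given birth rank, every run contributes exactly one observer with that rank, so the pooled proportion just tracks the 50:50 chance of Doomsday.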

Doomsday for time travellers

To get an SSA version of the problem, we can ask:

• What proportion of universes, where a randomly selected human has a birth rank of 108.5 billion, will be small (with a Doomsday reasonably soon)?

This will give an answer close to 1, as it converges on the SSA probability.
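Monte Carlo is hopeless here (hitting one exact birth rank is a one-in-200-billion event), but the proportion follows directly from Bayes' rule, using the same toy population sizes as before:

```python
# Probability that a randomly selected human has any particular birth rank
# is 1/population, so with a 50:50 prior over universe sizes:
SMALL, LARGE = 200e9, 200e12  # toy population sizes (assumptions)

like_small = 1 / SMALL        # P(selected human has this rank | small universe)
like_large = 1 / LARGE        # P(selected human has this rank | large universe)
posterior_small = like_small / (like_small + like_large)
print(posterior_small)        # 1000/1001 ≈ 0.999
```

Almost all universes in which the randomly selected human turns out to have that birth rank are small ones, because a given rank is far more likely to be drawn from a small population.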

But note that this is generally not the question that the Doomsday argument is posing. If there is a time traveller who is choosing people at random from amongst all of space and time, then if they happen to choose you, that is a bad sign for the future (and yet another reason you should go with them). Note that this is consistent with conservation of expected evidence: if the time traveller is out there but doesn't choose you, then this is a (very mild) update towards no Doomsday.

But for the classical non-time-travel situation, the Doomsday argument fails.

• Hi Stuart. It's a while since I've posted.

Here's one way of asking the question which does lead naturally to the Doomsday answer.

Consider two universes. They're both infinite (or, if you don't like actual infinities, very very large, so that they both have a really huge number of civilisations).

In universe 1, almost all the civilisations die off before spreading through space, so that the average population of a civilisation through time is less than a trillion.

In universe 2, a fair proportion of the civilisations survive and grow to galaxy-size or bigger, so that the average population of a civilisation through time is much more than a trillion trillion.

Now consider two more universes. Universe 3 is like Universe 1, except that the microwave background radiation 14 billion years after the Big Bang is 30K rather than 3K. Universe 4 is like Universe 2, again except for the difference in microwave background radiation. Both Universe 3 and Universe 4 are so big (or infinite) that they contain civilisations that believe the background radiation has temperature 3K, because every measurement they've ever made of it has accidentally given the same wrong answer.

Here’s the question to think about.

Is there a sensible way of doing anthropics (or indeed science in general) that would lead us to conclude we are probably in Universe 1 or 2 (rather than Universe 3 or 4), without also concluding that we are probably in Universe 1 (rather than Universe 2)?

• “How many copies of people like me are there in each universe?”

Then as long as your copies know that 3K has been observed, and excluding simulations and such, the answers are “(a lot, a lot, not many, not many)” in the four universes (I’m interpreting “die off before spreading through space” as “die off just before spreading through space”).

This is the SIA answer, since I asked the SIA question.

• Thanks Stuart.

The difficulty is that, by construction, there are infinitely many copies of me in each universe (if the universes are all infinite), or there are a colossally huge number of copies of me in each universe, so big that it saturates my utility bounds (assuming that my utilities are finite and bounded, because if they’re not, the decision theory leads to chaotic results anyway).

So SIA is not an approach to anthropics (or science in general) which allows us to conclude we are probably in universe 1 or 2 (rather than 3 or 4). All SIA really says is “You are in some sort of really big or infinite universe, but beyond that I can’t help you work out which”. That’s not helpful for decision making, and doesn’t allow for science in general to work.

Incidentally, when you say there are “not many” copies of me in universes 3 and 4, you presumably mean “not a high proportion, compared to the vast total of observers”. That’s implicitly SSA reasoning being used to discriminate against universes 3 and 4… but then of course it also discriminates against universe 2.

I’ve worked through pretty much all the anthropic approaches over the years, and they all seem to stumble on this question. All the approaches which confidently separate universes 3 and 4 also separate 1 from 2.

• If we set aside infinity, which I don’t know how to deal with, then the SIA answer does not depend on utility bounds, unlike my anthropic decision theory post.

Q1: “How many copies of people (currently) like me are there in each universe?” is well-defined in all finite settings, even huge ones.

Incidentally, when you say there are “not many” copies of me in universes 3 and 4, then you presumably mean “not a high proportion, compared to the vast total of observers”

No, I mean not many, as compared with how many there are in universes 1 and 2. Other observers are not relevant to Q1.

I’ll reiterate my claim that different anthropic probability theories are “correct answers to different questions”: https://www.lesswrong.com/posts/nxRjC93AmsFkfDYQj/anthropic-probabilities-answering-different-questions

• I get that this is a consistent way of asking and answering questions, but I’m not sure it is actually helpful for doing science.

If, say, universes 1 and 2 contain TREE(3) copies of me while universes 3 and 4 contain BusyBeaver(1000), then I still don’t know which I’m more likely to be in, unless I can somehow work out which of these vast numbers is vaster. Regular scientific inference is just going to completely ignore questions as odd as this, because it simply has to. It’s going to tell me that if measurements of background radiation keep coming out at 3K, then that’s what I should assume the temperature actually is. And I don’t need to know anything about the universe’s size to conclude that.

Returning to SIA: to conclude there are more copies of me in universes 1 and 2 (versus 3 or 4), SIA will have to know their relative sizes. The larger, the better, but not infinite please. And this is a major problem, because then SIA’s conclusion is dominated by how the finite truncation is applied to avoid the infinite case.

Suppose we truncate all universes at the same large physical volume (or 4d volume); then there are strictly more copies of me in universes 1 and 2 than in 3 and 4 (but about the same number in universes 1 and 2). That works so far: it is in line with what we probably wanted. But unfortunately this volume-based truncation also favours universe 5-1:

5-1. Physics is nothing like it appears. Rather, the universe is full of an extremely dense solid performing a colossal number of really fast computations, a high fraction of which simulate observers in universe 1.

It’s not difficult to see that 5-1 is more favoured than 5-2, 5-3 or 5-4 (since the density of observers like me is highest in 5-1).

If we instead truncate universes at the same large total number of observers (or the same large total utility), then universe 1 now has more copies of me (because it has more civilisations in total). Universe 1 is favoured.

Or, if I truncate universes at the same large total number of copies of me (because perhaps I don’t care very much about people who aren’t copies of me), then I can no longer distinguish between universes 1 to 4, or indeed 5-1 to 5-4.

So either way we’re back to the same depressing conclusion. However the truncation is done, universe 1 is going to end up preferred over the others (or perhaps universe 5-1 is preferred over the others), or there is no preference among any of the universes.

• These are valid points, but we have wandered a bit away from the initial argument, and we’re now talking about numbers that can’t be compared (my money is on TREE(3) being smaller in this example, but that’s irrelevant to your general point), or ways of truncating in the infinite case.

But we seem to have solved the finite-and-comparable case.

Now, back to the infinite case. First of all, there may be a correct decision even if probabilities cannot be computed.

If we have a suitable utility function, we may decide simply not to care about what happens in universes of type 5, which would rule them out completely.

Or maybe the truncation can be improved slightly. For example, we could give each observer a bubble of radius 20 mega-light-years, defined according to their own subjective experience: how many individuals would they expect to encounter within that radius, if they were made immortal and allowed to explore it fully?

Then we truncate by this subjective bubble, or something similar.

But yeah, in general, the infinite case is not solved.

• Thanks again for the useful response.

My initial argument was really a question: “Is there any approach to anthropic reasoning that allows us to do basic scientific inference, but does not lead to Doomsday conclusions?” So far I’m skeptical.

The best response you’ve got is, I think, twofold.

1. Use SIA, but please ignore the infinite case (even though the internal logic of SIA forces the infinite case) because we don’t know how to handle it. When applying SIA to large finite cases, truncate universes by a large volume cutoff (4d volume) rather than by a large population cutoff or large utility cutoff. Oh, and ignore simulations, because if you take those into account it leads to odd conclusions as well.

That might perhaps work, but it does look horribly convoluted. To me it does seem like determining the conclusion in advance (you want SIA to favour universes 1 and 2 over 3 and 4, but not favour 1 over 2) and then hacking around with SIA until it gives that result.

Incidentally, I think you’re still not out of the woods with a volume cutoff. If it is very large in the time dimension, then SIA is going to start favouring universes which have Boltzmann Brains in the very far future over universes whose physics don’t ever allow Boltzmann Brains. And then SIA is going to suggest that not only are we probably in a universe with lots of BBs, we most likely are BBs ourselves (because almost all observers with exactly our experiences are BBs). So SIA calls for further surgery, either to remove BBs from consideration or to apply the 4d-volume cutoff in a way that doesn’t lead to lots of Boltzmann Brains.

2. Forget about both SIA and SSA and revert to an underlying decision theory: viz. your ADT. Let the utility function take the strain.

The problem with this is that ADT with unbounded utility functions doesn’t lead to stable conclusions. So you have to bound or truncate the utility function.

But then ADT is going to pay the most attention to universes whose utility is close to the cutoff … namely versions of universes 1, 2, 3 and 4 which have utility at or near the maximum. For the reasons I’ve already discussed above, that’s not in general going to give the same results as applying a volume cutoff. If the utility scales with the total number of observers (or observers like me), then ADT is not going to say “Make decisions as if you were in universe 1 or 2 … but with no preference between these … rather than as if you were in universe 3 or 4”.

I think the most workable utility function you’ve come up with is the one based on subjective bubbles of order galactic volume or thereabouts, i.e. the utility function scales roughly linearly with the number of observers in the volume surrounding you, but doesn’t care about what happens outside that region (or in any simulations, if they are of different regions). Using that is roughly equivalent to applying a volume truncation using regular astronomical volumes (rather than much larger volumes).

However, the hack to avoid simulations looks a bit unnatural to me (why wouldn’t I care about simulations which happen to be in the same local volume?). Also, I think this utility function might then tend to favour “zoo” hypotheses or “planetarium” hypotheses (i.e. decisions are made as if in a universe densely packed with planetaria containing human-level civilisations, rather than simulations of said civilisations).

More worryingly, I doubt if anyone really has a utility function that looks like this, i.e. one that cares about observers 1 million light years away just as much as it cares about observers here on Earth, but then stops caring if they happen to be 1 trillion light years away...

So again, I think this looks rather like assuming the right answer, and then hacking around with ADT until it gives the answer you were looking for.

• The Doomsday argument is utter BS, because one cannot reliably evaluate probabilities without fixing a probability distribution first. Without knowing more than just the number of humans existing so far, the argument devolves into arguing over which probability distribution to pick out of an uncountable number of possibilities. An honest attempt to address this question would start with modeling human population fluctuations, including various extinction events. In such a model there are multiple free parameters, such as the rate of growth, the distribution of odds of various extinction-level events, the distribution of odds of surviving each type of event, event clustering and so on. The minimum number of humans does not constrain the models in any interesting way, i.e. to privilege a certain class of models over others, or a certain set of free parameters over others, to the degree where we could evaluate a model-independent upper bound for the total number of humans with any degree of confidence.

If you want to productively talk about Doomsday, you have to get your hands dirty and deal with specific x-risks and their effects, not armchair-theorize based on a single number and a few so-called selection/indication principles that have nothing to do with the actual human population dynamics.

• The DA, in its SSA form (where it is rigorous), comes as a posterior adjustment to all probabilities computed in the way above. It’s not an argument that doom is likely, just that doom is more likely than objective odds would imply, in a precise way that depends on future (and past) population size.

However, my post shows that the SSA form does not apply to the question that people generally ask, so the DA is wrong.
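The "posterior adjustment" reading of the SSA form can be made concrete: whatever priors a population model produces, the DA reweights each total-population hypothesis by the likelihood of your birth rank. A minimal sketch, with assumed toy numbers:

```python
def ssa_shift(priors, totals, rank):
    """Multiply each total-population hypothesis by the SSA likelihood 1/N
    of a random observer having the given birth rank (zero if the rank
    never occurs in that hypothesis), then renormalise."""
    weights = {h: priors[h] * (1 / totals[h] if totals[h] >= rank else 0.0)
               for h in priors}
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

# Toy numbers (assumptions): "soon" means 200e9 humans ever, "late" 200e12.
post = ssa_shift({"soon": 0.5, "late": 0.5},
                 {"soon": 200e9, "late": 200e12},
                 rank=108.5e9)
print(post)  # the "soon" hypothesis shifts from 0.5 to ≈ 0.999
```

The shift is relative: it scales whatever prior the underlying model assigns, rather than asserting doom outright.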

• Interesting post. Could the same argument not be used against the Simulation argument?

Simplify the model into assuming there is a universe in which I, the observer, am one of many, many observers in an ancestor simulation run by some future civilization, and a universe in which I am a biological human naturally created by evolution on Earth, with equal probability. Again, we can imagine running the universe many, many times. But no matter how many people are in the considered universe, I can only have the experience of being one at a time. So, asking:

• What proportion of people whose experiences I have live in a simulated world?

The answer to that question converges to 1/2 as well. But if every observer reasoned like this when asked whether they are in a simulation, most would get the wrong answer (assuming there are more simulated than real observers)! How can we deal with this apparent inconsistency? Of course, as you say, different answers to different questions. But which one should we consider to be valid, when both seem intuitively to make sense?

• Let’s say you do not know your birth rank at first. Then someone asks you to guess whether the universe contains around 200 billion people or some very large number. Without any additional data, you should estimate 50% for either one. Then you get to know that your birth rank is around 100 billion. Do you not then update, giving the smaller universe a bigger chance than the 50% estimated previously?

• Again, we have to be clear about the question. But if it’s “what proportion of versions of me are likely to be in a large universe”, then the answer is close to 1 (which is the SIA odds). Then you update on your birth rank, notice, to your great surprise, that it is sufficiently low to exist in both large and small universes, and so update towards small and end up at 50:50.
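The two-stage update described here can be written out explicitly; with assumed toy sizes, the SIA weighting and the birth-rank likelihood cancel exactly:

```python
SMALL, LARGE = 200e9, 200e12  # toy population sizes (assumptions)

# Stage 1 (SIA): before learning your birth rank, weight each universe by
# how many versions of you it contains -- proportional to its population.
w_small, w_large = SMALL, LARGE
prior_large = w_large / (w_small + w_large)      # ≈ 0.999: almost surely large

# Stage 2: learn your birth rank (~100 billion). The chance that a given
# version of you has that particular rank is 1/population, which exactly
# cancels the stage-1 weighting:
w_small *= 1 / SMALL
w_large *= 1 / LARGE
posterior_large = w_large / (w_small + w_large)  # back to 1/2
print(prior_large, posterior_large)
```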

• So what you are saying is: before one knows one’s birth rank, one should assume an infinite universe? This actually corresponds to evidence about universe size, but not about human population size.

• Again, it depends on what question you’re asking. “If a copy of me existed, would it be more likely to exist in a small universe or in an infinite one?” has a pretty clear answer :-)

• It is probably wrong to interpret the DA as “doom is imminent”. The DA just says that we are likely in the middle of the total population of all humans (or other relevant observers) ever born.

For some emotional reason we are not satisfied to be in the middle and interpret it as “doom”, but there are 100 billion more people in the future according to the DA. It starts to look like doom if we account for expected population growth, as in that case the next 100 billion people will appear in a few hundred years.

Moreover, the DA tells us that doom very soon is very unlikely, which I call the “reverse DA”.

• There are two versions of the DA; the first is “we should roughly be in the middle”, and the second is “our birth rank is less likely if there were many more humans in the future”.

I was more thinking of the second case, but I’ve changed the post slightly to make it more compatible with the first.