Anthropic paradoxes transposed into Anthropic Decision Theory

Anthropic Decision Theory (ADT) replaces anthropic probabilities (SIA and SSA) with a decision theory that doesn't need anthropic probabilities to function. And, roughly speaking, ADT shows that total utilitarians will behave as if they were using SIA, while average utilitarians behave as if they were using SSA.

That means that the various paradoxes of SIA and SSA can be translated into ADT format. This post will do that, and show how the paradoxes feel a lot less counter-intuitive under ADT. Some of these have been presented before, but I wanted to gather them in one location. The paradoxes examined are:

  1. The Doomsday Argument.

  2. The Adam and Eve problem.

  3. The UN++ problem.

  4. The Presumptuous Philosopher.

  5. Katja Grace's SIA doomsday argument.

The first three are paradoxes of SSA (which increases the probability of "small" universes with few observers), while the last two are paradoxes of SIA (which increases the probability of "large" universes with many observers).

No Doomsday, just a different weighting of rewards

The famous Doomsday Argument claims that, because of SSA's preference for small numbers of observers, the end of the human species is closer than we might otherwise think.

How can we translate that into ADT? I've found it's generally harder to translate SSA paradoxes into ADT than SIA ones, because average utilitarianism is a bit more finicky to work with.

But here is a possible formulation: a disaster may happen 10 years from now, with 50% probability, and will end humanity with a total of p humans. If humans survive the disaster, there will be q humans total, with q much larger than p.

The agent has the option of consuming X resources now, or consuming Y resources in 20 years time. If this were a narrow-minded selfish agent, then it would consume early if X > Y/2, and late if X < Y/2.

However, if the agent is an average utilitarian, the expected utility they derive from consuming early is (X/p + X/q)/2 (the expected average utility of X, averaged over doom and survival), while the expected utility for consuming late is Y/(2q) (since consuming late means survival).

This means that the breakeven point for the ADT average utilitarian is when:

  • Y = X(q/p + 1).

If q is much larger than p, then the ADT agent will only delay consumption if Y is similarly larger than X.
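The trade-off can be checked numerically. Here is a minimal sketch; all the specific numbers (populations p and q, rewards X and Y) are illustrative assumptions, not taken from the argument above:

```python
# Expected average utility in the ADT Doomsday setup.
# Illustrative assumptions: doom leaves p humans total, survival means q.
p, q = 100e9, 100e12   # total humans under doom vs survival (assumed)
X, Y = 1.0, 500.0      # reward for early vs late consumption (assumed)

# Early consumption pays off in both branches, averaged over each population.
early = 0.5 * (X / p) + 0.5 * (X / q)

# Late consumption only pays off if humanity survives the disaster.
late = 0.5 * (Y / q)

# Breakeven is Y = X*(q/p + 1); with q/p = 1000, late consumption
# needs Y to be roughly a thousand times X before it wins.
print(early > late)
```

With these numbers the agent consumes early, mimicking a selfish agent who is nearly certain of doom.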

This looks like a narrow-minded selfish agent that is convinced that doom is almost certain. But it's only because of the weird features of average utilitarianism.

Adam and Eve and differentially pleasurable sex and pregnancy

In the Adam and Eve thought experiment, the pair of humans want to sleep together, but don't want to get pregnant. The snake reassures them that, because a pregnancy would lead to billions of descendants, SSA's preference for small universes means that this is almost impossibly unlikely; so, time to get frisky.

There are two utilities to compare here: the positive utility of sex (S), and the negative utility of pregnancy (-P). Assume a probability x of pregnancy from having sex, and a subsequent D descendants.

Given an average utilitarian ADT couple, the utility derived from sex is (1-x)S/2 + xS/(D+2), while the disutility from pregnancy is -xP/(D+2). For large enough D, those terms will be approximately (1-x)S/2 and 0.

So the disutility of pregnancy is buried in the much larger population.
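The dilution effect is easy to see numerically. A minimal sketch, where S, P, x, and D are all assumed illustrative values:

```python
# Expected average utilities in the ADT Adam and Eve problem.
# Illustrative assumptions: S = utility of sex, P = disutility of pregnancy,
# x = chance of pregnancy, D = descendants if pregnant.
S, P, x, D = 10.0, 1000.0, 0.5, 1_000_000_000

# Without pregnancy the population is 2; with pregnancy it is D + 2.
utility_sex = (1 - x) * S / 2 + x * S / (D + 2)
disutility_pregnancy = -x * P / (D + 2)

# For large D these approach (1-x)*S/2 and 0: the pregnancy term is
# averaged over the huge descendant population and vanishes.
print(utility_sex + disutility_pregnancy > 0)
```

Even a pregnancy a hundred times worse than sex is good barely registers once it is averaged over a billion descendants.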

There are more extreme versions of the Adam and Eve problem, but they are closely related to the next paradox.

UN++: more people to dilute the sorrow

In the UN++ thought experiment, a future world government seeks to prevent damaging but non-fatal gamma ray bursts by committing to creating many, many more humans if the bursts happen. The paradox is that SSA implies that this should lower the probability of the bursts.

In ADT, this behaviour is perfectly rational: if we assume that the gamma ray bursts will cause pain to the current population, then creating a lot of new humans (of the same baseline happiness) will dilute this pain, by averaging it out over a larger population.

So in ADT, the SSA paradoxes just seem to be artefacts of the weirdness of average utilitarianism.

Philosopher: not presumptuous, but gambling for high rewards

We turn now to SIA, replacing our average utilitarian ADT agent with a total utilitarian one.

In the Presumptuous Philosopher thought experiment, there are only two possible theories about the universe: T1 and T2. Both posit large universes, but T2 posits a much larger universe than T1, with trillions of times more observers.

Physicists are about to do an experiment to see which theory is true, but the SIA-using Presumptuous Philosopher (PP) interrupts them, saying that T2 is almost certain because of SIA. Indeed, they are willing to bet on T2 at odds of up to a trillion-to-one.

With that betting idea, the problem is quite easy to formulate in ADT. Assume that all PPs are total utilitarians towards each other, and will all reach the same decision. Then there are a trillion times more PPs in T2 than in T1. Which means that winning a bet in T2 is a trillion times more valuable than winning it in T1.

Thus, under ADT, the Presumptuous Philosopher will indeed bet on T2 at odds of up to a trillion to one, but the behaviour is simple to explain: they are simply going for a low-probability, high-utility bet with higher expected utility than the opposite. There does not seem to be any paradox remaining.
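The expected-utility calculation behind the bet can be sketched as follows; the population counts and the betting odds are illustrative assumptions:

```python
# Expected total utility of betting on T2 at long odds.
# Illustrative assumptions: n_t1 philosophers under T1, a trillion
# times more under T2, and 50% credence in each theory.
n_t1 = 1
n_t2 = 10**12 * n_t1
odds = 10**11          # bet pays 1 per PP if T2 is true, costs `odds` if T1 is true

# All PPs reach the same decision, and a total utilitarian counts
# the winnings (and losses) of every one of them.
ev_bet = 0.5 * n_t2 * 1 - 0.5 * n_t1 * odds

# The bet has positive expected total utility at any odds below
# a trillion to one.
print(ev_bet > 0)
```

At odds of exactly a trillion to one the expected utility is zero, which is precisely the PP's stated betting threshold.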

SIA Doomsday: care more about mosquito nets in large universes

Staying with SIA: the SIA Doomsday Argument, somewhat simplified, is this: since SIA means that we should expect there to be a lot of observers like ourselves, it is more likely that the Fermi paradox is explained by a late Great Filter (which kills civilizations that are more advanced than us) than by an early Great Filter (which kills life at an earlier stage, or stops it from evolving in the first place). The reason for this is that, obviously, there are more observers like us under a late Great Filter than under an early one.

To analyse this in decision theory, use the same setup as for the standard Doomsday Argument: choosing between consuming now (or donating to AMF, or similar), or in twenty years, with a risk of human extinction in ten years.

To complete the model, assume that if the Great Filter is early, there will be no human extinction, while if it is late, there is a 50% chance of extinction. If the Great Filter is late, there are q advanced civilizations across the universe, while if it is early, there are only p. Assume that the agent currently estimates late-vs-early Great Filters as 50-50.

With the usual ADT agent assuming that all their almost-copies reach the same decision in every civilization, the utility from early consumption of X is X(q + p)/2 (total utility averaged over late vs early Great Filters), while the utility from late consumption of Y is Y(q/2 + p)/2 (since a late Great Filter wipes out half the civilizations first).

For large q, these approximate to Xq/2 and Yq/4.

So a total utilitarian ADT agent will be more likely to go for early consumption than the objective odds would imply. And the more devastating the late Great Filter, the stronger this effect.
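This can be checked with a short calculation; the civilization counts and rewards are illustrative assumptions:

```python
# Expected total utility for early vs late consumption under the
# Great Filter setup. Illustrative assumptions throughout.
p, q = 10, 10_000_000   # civilizations under early vs late Great Filter
X, Y = 1.0, 1.5         # reward per civilization: early vs late consumption

# Early consumption pays in every civilization under either filter.
early = 0.5 * (q * X) + 0.5 * (p * X)

# Late consumption: a late filter first wipes out half the civilizations.
late = 0.5 * (0.5 * q * Y) + 0.5 * (p * Y)

# For large q: early ~ X*q/2 and late ~ Y*q/4, so delaying only wins
# if Y > 2X, even though the objective extinction risk is just 25%.
print(early > late)
```

A selfish agent facing the objective 25% extinction risk would happily wait for a 1.5x reward; the total utilitarian ADT agent consumes early, which is exactly the SIA-doomsday-flavoured behaviour.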