A possible solution to the Fermi Paradox

[tl;dr: I argue that, if Many-Worlds is true, survival bias might explain the lack of observed life in the universe, under the assumption that there are no humans in most worlds where life on a reachable planet has reached technological maturity.]

[Content Warning: possibly unsettling]

Suppose you are offered a deal: you'll be put to sleep, cloned 99 times, and each of the 100 versions of you will be sent to an identical-looking room, where they'll be woken up. Suppose that you all share the same consciousness: the lights are on for the same entity in all 100 copies (but they have no way of communicating with each other). A minute after waking up, one randomly chosen clone will receive five million dollars, while the other 99 will die a death so quick that they will neither see it coming nor feel any pain.

Whether you would take this deal is not relevant to the plausibility of the argument (though it might have other implications). For now, suppose you take it. You're put to sleep, and the next thing you know, you wake up in a room. After you wait for a minute, the experimenter enters the room, hands you your five million dollars, and politely thanks you for your participation.

Should you be surprised? I'd say no. Nothing surprising has happened; in fact, there was only one way this could have gone all along. On the other hand, if the 99 unlucky copies were not killed but put in prison, I would argue that surprise is warranted. Survival bias is real, but it only applies when an experiment is run on a set of people and some subset of them won't be around to tell afterwards.

Now consider how many times we narrowly avoided nuclear war. The default explanation for why this happened is that we got lucky. But if nuclear war would always result in your death, and if Many-Worlds is true, then our still being alive isn't surprising at all; rather, it's the only observation possible.

Okay, so let's examine the Fermi Problem. Let p be the odds that primitive life on some planet results in a species inventing space travel, and let N be the number of other planets in reach with primitive life on them. A classical explanation for our observations requires either that species who reach Earth generally choose to leave us undisturbed, or that (1 − p)^N be sufficiently large (that's the probability that no alien species in reach makes it to space travel). One way this could be the case is if the first step towards intelligent life is extremely hard, so that p is actually fairly small.
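As a quick numerical sketch of how (1 − p)^N behaves (the particular values of p and N below are purely illustrative assumptions, not estimates):

```python
def prob_no_spacefarers(p: float, n: int) -> float:
    """Return (1 - p)**n: the probability that none of n planets with
    primitive life produces a space-faring species, where p is the
    per-planet chance of reaching space travel."""
    return (1 - p) ** n

# If p is tiny, (1 - p)^N stays near 1 even for a huge N of planets:
print(prob_no_spacefarers(1e-10, 10**6))  # ≈ 0.9999

# If p is merely modest, (1 - p)^N collapses to ~0: someone almost
# surely makes it to space travel.
print(prob_no_spacefarers(1e-3, 10**6))
```

The point of the sketch is just that the classical explanation needs p to be extremely small relative to 1/N for the "no one makes it" outcome to be likely.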

The Many-Worlds view of the survival of our own species helps the classical explanation out by making it more plausible that p is very small. A mixture of both might also be true: p fairly small and N small enough that (1 − p)^N remains non-negligible.

What I'm arguing in this post is that we should consider a different explanation, one that works through survival bias. Suppose that, if a space-traveling species reaches another planet, they don't generally leave it undisturbed; rather, they almost always end life there. Then the only possible observation we could have is the current one, regardless of the values of p and N. Put plainly: there are lots of technologically mature species out there, they do travel to other planets, and in a large majority of worlds, they've reached Earth and humanity doesn't exist. But because of quantum physics, there are still worlds where the unlikely outcome has come true, and this is one such world.
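The survivorship logic above can be made concrete with a toy simulation. The sterilization probability below is a hypothetical parameter, chosen only to make the point: even if observers survive in almost no worlds, every observer that does exist sees an undisturbed Earth.

```python
import random

random.seed(0)

# Hypothetical: in 99.9% of worlds, a space-faring species has already
# reached Earth (and, by the post's assumption, ended life here).
P_REACHED = 0.999

worlds = []
for _ in range(100_000):
    aliens_arrived = random.random() < P_REACHED
    observers_exist = not aliens_arrived  # arrival ends life, so no observers
    worlds.append((aliens_arrived, observers_exist))

# Unconditionally, worlds with surviving observers are rare:
survivors = [w for w in worlds if w[1]]
print(len(survivors) / len(worlds))  # roughly P_REACHED away from 1, ~0.001

# But conditioned on someone being around to look, no observer ever
# sees evidence of an alien arrival:
print(all(not aliens for aliens, _ in survivors))  # True
```

The conditional observation is the same no matter how close P_REACHED is to 1, which is exactly why our current observation carries no evidence against large p and N under this assumption.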

But is that assumption plausible? Many might disagree, but I would say yes. A paperclipper scenario on a reachable planet would certainly lead to the extinction of life on Earth, but even a species with an aligned AI would probably find more effective ways to use this planet than to allow life and suffering to continue here, especially considering that, in a vast majority of cases, life on Earth would be incredibly primitive at the time of their arrival. The question seems to depend primarily on how one imagines the morality of a technologically mature civilization.

[Footnote #1]: I don't know how wrong this assumption is. If someone feels qualified to estimate the probability of personal survival in the case that any one of the incidents listed on Wikipedia had gone wrong, please feel free to do so.