Exterminating life is rational

Followup to: This Failing Earth, Our society lacks good self-preservation mechanisms, Is short term planning in humans due to a short life or due to bias?

I don’t mean that deciding to exterminate life is rational. But if, as a society of rational agents, we each maximize our expected utility, this may inevitably lead to our exterminating life, or at least intelligent life.

Ed Regis reports on p. 216 of “Great Mambo Chicken and the Transhuman Condition” (Penguin Books, London, 1992):

Edward Teller had thought about it, the chance that the atomic explosion would light up the surrounding air and that this conflagration would then propagate itself around the world. Some of the bomb makers had even calculated the numerical odds of this actually happening, coming up with the figure of three chances in a million they’d incinerate the Earth. Nevertheless, they went ahead and exploded the bomb.

Was this a bad decision? Well, consider the expected value to the people involved. Without the bomb, there was a much, much greater than 3/1,000,000 chance that either a) they would be killed in the war, or b) they would be ruled by the Nazis or the Japanese. The loss to them if they ignited the atmosphere would be another 30 or so years of life. The loss to them if they lost the war and/or were killed by their enemies would also be another 30 or so years of life. The loss in being conquered would also be large. Easy decision, really.

Suppose that, once a century, some party in a conflict chooses to use some technique to help win the conflict that has a p = 3/1,000,000 chance of eliminating life as we know it. Then our expected survival time is 100 times the sum from n=1 to infinity of n·p·(1−p)^(n−1). That sum is the mean of a geometric distribution, 1/p, so the expected survival time is 100/p ≈ 33,333,000 years.
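
A minimal sketch of that arithmetic, assuming the once-per-century model above (the truncation point of the series is arbitrary):

```python
# Expected survival time if, once per century, someone accepts a
# p = 3/1,000,000 chance of ending all life.  The century in which the
# first disaster occurs is geometrically distributed, with mean 1/p.
p = 3 / 1_000_000

# Closed form: the sum over n >= 1 of n * p * (1 - p)**(n - 1) is 1/p centuries.
print(100 / p)  # ~33,333,333 years

# Truncated-series check (the tail beyond 5 million centuries is negligible).
approx = sum(n * p * (1 - p) ** (n - 1) for n in range(1, 5_000_000))
print(100 * approx)  # approaches the same figure
```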

This supposition seems reasonable to me. There is a balance between offensive and defensive capability that shifts as technology develops. If technology keeps changing, it is inevitable that, much of the time, a technology will provide the ability to destroy all life before the counter-technology to defend against it has been developed. In the near future, biological weapons will be more able to wipe out life than we are able to defend against them. We may then develop the ability to defend against biological attacks; we may then be safe until the next dangerous technology.

If you believe in accelerating change, then the number of important events in a given time interval increases exponentially, or, equivalently, the time intervals that should be considered equivalent opportunities for important events shorten exponentially. The ~33 million years remaining to life is then in subjective time, and must be mapped into realtime. If we suppose the subjective/real time ratio doubles every 100 years, this gives life an expected survival time of about 2,000 more realtime years. If we instead use Ray Kurzweil’s doubling time of about 2 years, this gives life about 40 remaining realtime years. (I don’t recommend Ray’s figure. I’m just giving it for those who do.)
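
A sketch of that mapping, under the stated assumption that the subjective/real time ratio doubles every D years (the function name is my own):

```python
import math

# Subjective time accrued in T realtime years, if the subjective/real time
# ratio doubles every D years, is the integral of 2**(t/D) from 0 to T,
# i.e. (D / ln 2) * (2**(T/D) - 1).  Inverting gives the realtime horizon
# that exhausts a budget of S subjective years.
def realtime_horizon(subjective_years, doubling_years):
    return doubling_years * math.log2(
        1 + subjective_years * math.log(2) / doubling_years)

S = 33_333_333                    # expected subjective years from the model above
print(realtime_horizon(S, 100))   # ~1,800 realtime years (the "about 2,000" figure)
print(realtime_horizon(S, 2))     # ~47 realtime years (the "about 40" figure)
```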

Please understand that I am not yet another “prophet” bemoaning the foolishness of humanity. Just the opposite: I’m saying this is not something we will outgrow. If anything, becoming more rational only makes our doom more certain. For the agents who must actually make these decisions, it would be irrational not to take these risks. The fact that this level of risk-tolerance will inevitably lead to the snuffing out of all life does not make the expected utility of these risks negative for the agents involved.

I can think of only a few ways that rationality might not inevitably exterminate all life in the cosmologically (even geologically) near future:

  • We can outrun the danger: We can spread life to other planets, and to other solar systems, and to other galaxies, faster than we can spread destruction.

  • Technology will not continue to develop, but will stabilize in a state in which all defensive technologies provide absolute, 100%, fail-safe protection against all offensive technologies.

  • People will stop having conflicts.

  • Rational agents incorporate the benefits to others into their utility functions.

  • Rational agents with long lifespans will protect the future for themselves.

  • Utility functions will change so that it is no longer rational for decision-makers to take tiny chances of destroying life for any amount of utility gains.

  • Independent agents will cease to exist, or to be free (the Singleton scenario).

Let’s look at these one by one:

We can outrun the danger.

We will colonize other planets; but we may also figure out how to make the Sun go nova on demand. We will colonize other star systems; but we may also figure out how to liberate much of the energy in the black hole at the center of our galaxy in a giant explosion that will move outward at near the speed of light.

One problem with this idea is that apocalypses are correlated; one may trigger another. A disease may spread to another planet. The choice to use a planet-busting bomb on one planet may lead to its retaliatory use on another planet. It’s not clear whether spreading out and increasing in population actually makes life safer. If you think in the other direction, a smaller human population (say ten million) stuck here on Earth would be safer from human-instigated disasters.

But neither of those is my final objection. More important is that our compression of subjective time can be exponential, while our ability to escape from ever-broader swaths of destruction is limited by lightspeed.

Technology will stabilize in a safe state.

Maybe technology will stabilize, and we’ll run out of things to discover. If that were to happen, I would expect conflicts to increase, because people would get bored. As I mentioned in another thread, one good explanation for the incessant and counterproductive wars in the Middle Ages—a reason some of the actors themselves gave in their writings—is that the nobility were bored. They did not have the concept of progress; they were just looking for something to give them purpose while waiting for Jesus to return.

But that’s not my final rejection. The big problem is that by “safe”, I mean really, really safe. We’re talking about bringing existential threats down to chances of less than 1 in a million per century. I don’t know of any defensive technology that can guarantee a less-than-1-in-a-million failure rate.

People will stop having conflicts.

That’s a nice thought. A lot of people—maybe the majority of people—believe that we are inevitably progressing along a path to less violence and greater peace.

They thought that just before World War I. But that’s not my final rejection. Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts: those who avoid conflict will be out-competed by those who do not.

But that’s not my final rejection either. The bigger problem is that this isn’t something that arises only in conflicts. All we need are desires. We’re willing to tolerate risk to increase our utility. For instance, we’re willing to take some unknown, but clearly greater than one-in-a-million, chance of the collapse of much of civilization due to climate warming. In return for this risk, we can enjoy a better lifestyle now.

Also, we haven’t burned all physics textbooks along with all physicists. Yet I’m confident there is at least a one in a million chance that, in the next 100 years, some physicist will figure out a way to reduce the Earth to powder, if not to crack spacetime itself and undo the entire universe. (In fact, I’d guess the chance is nearer to 1 in 10.)1 We take this existential risk in return for a continued flow of benefits such as better graphics in Halo 3 and smaller iPods. And it’s reasonable for us to do this, because an improvement in utility of 1% over an agent’s lifespan is, to that agent, exactly balanced by a 1% chance of destroying the universe.

The Wikipedia entry on Large Hadron Collider risk says, “In the book Our Final Century: Will the Human Race Survive the Twenty-first Century?, English cosmologist and astrophysicist Martin Rees calculated an upper limit of 1 in 50 million for the probability that the Large Hadron Collider will produce a global catastrophe or black hole.” The more authoritative “Review of the Safety of LHC Collisions” by the LHC Safety Assessment Group concluded that there was at most a 1 in 10³¹ chance of destroying the Earth.

The LHC estimates are criminally low. Their evidence was this: “Nature has already conducted the LHC experimental programme about one billion times via the collisions of cosmic rays with the Sun—and the Sun still exists.” There followed a couple of sentences of handwaving to the effect that if any other stars had turned into black holes due to collisions with cosmic rays, we would know it—apparently due to our flawless ability to detect black holes and ascertain what caused them—and that therefore we can multiply this figure by the number of stars in the universe.

I believe there is much more than a one-in-a-billion chance that our understanding of one of the steps used in arriving at these figures is incorrect. Based on my experience with peer-reviewed papers, there’s at least a one-in-ten chance that there’s a basic arithmetic error in their paper that no one has noticed yet. My own estimate is more like one in a million, once you correct for the anthropic principle and for the chance that there is a mistake in the argument. (That’s based on a belief that the prior for anything likely enough that smart people even thought of the possibility should be larger than one in a billion, unless they were specifically trying to think of examples of low-probability possibilities, such as all of the air molecules in the room moving to one side.)
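
Here is a sketch of why the argument’s own reliability dominates the quoted figure. The probabilities below are illustrative assumptions of mine, not measured values:

```python
# If the safety argument is sound, use the quoted 1-in-10^31 figure; if it is
# flawed, assume (illustratively) that the risk reverts to something "ordinary".
p_argument_flawed = 1e-3      # assumed: even careful chains of reasoning fail this often
p_disaster_if_sound = 1e-31   # the Safety Assessment Group's figure
p_disaster_if_flawed = 1e-3   # assumed risk when the reasoning does not apply

p_disaster = ((1 - p_argument_flawed) * p_disaster_if_sound
              + p_argument_flawed * p_disaster_if_flawed)
print(p_disaster)             # ~1e-6: the 1-in-10^31 term contributes essentially nothing
```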

The Trinity test was done for the sake of winning World War II. But the LHC was turned on for… well, no practical advantage that I’ve heard of yet. It seems that we are willing to tolerate one-in-a-million chances of destroying the Earth for very little benefit. And this is rational, since the LHC will probably improve our lives by more than one part in a million.

Rational agents incorporate the benefits to others into their utility functions.

“But,” you say, “I wouldn’t risk a 1% chance of destroying the universe for a 1% increase in my utility!”

Well… yes, you would, if you’re a rational expected-utility maximizer. It’s possible that you would take a much higher risk, if your utility is at risk of going negative; it’s not possible that you would refuse a 0.999% risk, unless you are not maximizing expected value, or you assign negative utility to the null state after universe-destruction. (This seems difficult, but is worth exploring.) If you still think that you wouldn’t, it’s probably because you’re thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience. It doesn’t. It’s a 1% increase in your utility. If you factor the rest of your universe into your utility function, then it’s already in there.
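
A minimal sketch of that expected-value comparison, under a simple framing of my own: the gamble adds a fraction g of your lifetime utility U, but with probability q the universe-destroying outcome costs you all of it.

```python
# Expected change in utility from accepting the gamble: a gain of g*U, minus an
# expected loss of q*U from the chance of losing everything.
def expected_gain(g, q, U=1.0):
    return g * U - q * U  # positive => a risk-neutral EU maximizer accepts

print(expected_gain(0.01, 0.00999) > 0)  # True: a 0.999% risk is accepted for a 1% gain
print(expected_gain(0.01, 0.01))         # 0.0: a 1% risk exactly balances a 1% gain
```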

The US national debt should be enough to convince you that people act in their self-interest. Even the most moral people—in fact, especially the “most moral” people—do not incorporate the benefits to others, especially future others, into their utility functions. If we did that, we would engage in massive eugenics programs. But eugenics is considered the greatest immorality.

But maybe they’re just not as rational as you. Maybe you really are a rational saint who considers your own pleasure no more important than the pleasure of everyone else on Earth. Maybe you have never, ever bought anything for yourself that did not bring you as much benefit as the same amount of money would if spent to repair cleft palates or distribute vaccines or mosquito nets or water pumps in Africa. Maybe it’s really true that, if you met the girl of your dreams and she loved you, and you won the lottery, put out an album that went platinum, and got published in Science, all in the same week, it would make an imperceptible change in your utility versus if everyone you knew died, Bernie Madoff spent all your money, and you were unfairly convicted of murder and diagnosed with cancer.

It doesn’t matter. Because you would be adding up everyone else’s utility, and everyone else is getting that 1% extra utility from the better graphics cards and the smaller iPods.

But that will stop you from risking atmospheric ignition to defeat the Nazis, right? Because you’ll incorporate them into your utility function? Well, that is a subset of the claim “People will stop having conflicts.” See above.

And even if you somehow worked around all these arguments, evolution, again, thwarts you.2 Even if you don’t agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents. The claim that rational agents are not selfish implies that rational agents are unfit.

Rational agents with long lifespans will protect the future for themselves.

The most familiar idea here is that, if people expect to live for millions of years, they will be “wiser” and take fewer risks with that time. But the flip side is that they also have more time to lose. If they’re deciding whether to risk igniting the atmosphere in order to lower the risk of being killed by Nazis, lifespan cancels out of the equation.

Also, if they live a million times longer than us, they’re going to get a million times the benefit of those nicer iPods. They may be less willing to take an existential risk for something that will benefit them only temporarily. But benefits have a way of increasing, not decreasing, over time. The discoveries of the law of gravity and of the invisible hand benefit us in the 21st century more than they did the people of the 17th century.

But that’s not my final rejection. More important is time-discounting. Agents will time-discount, probably exponentially, due to uncertainty. If you considered benefits to the future without exponential time-discounting, the benefits to others and to future generations would outweigh any benefits to yourself so much that in many cases you wouldn’t even waste time trying to figure out what you wanted. And, since future generations will be able to get more utility out of the same resources, we’d all be obliged to kill ourselves, unless we reasonably think that we are contributing to the development of that capability.

Time discounting is always (so far) exponential, because non-asymptotic functions don’t make sense. I suppose you could use a trigonometric function instead for time discounting, but I don’t think it would help.

Could a continued exponential population explosion outweigh exponential time-discounting? Well, you can’t have a continued exponential population explosion, because of the speed of light and the Planck constant. (I leave the details as an exercise to the reader.)
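
One way to see the point, as a sketch under assumed numbers (the 1%-per-year discount rate is mine): the reachable population is bounded by a light-sphere whose volume grows only like t³, while an exponential discount factor shrinks like (1 − r)^t, so the discounted weight of the far future eventually collapses no matter how fast we colonize.

```python
# Polynomial (light-speed-limited) growth in beneficiaries versus exponential
# time-discounting.  The discount rate is an illustrative assumption.
r = 0.01                                  # assumed discount rate per year
for t in (10, 100, 300, 1_000, 10_000, 100_000):
    beneficiaries = t ** 3                # bounded by the volume of a light-sphere
    weight = (1 - r) ** t                 # exponential discount factor
    print(t, beneficiaries * weight)      # rises at first, then collapses toward zero
```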

Also, even if you had no time-discounting, I think that a rational agent must do identity-discounting. You can’t stay you forever. If you change, the future you will be less like you, and weigh less strongly in your utility function. Objections to this generally assume that it makes sense to trace your identity by following your physical body. Physical bodies will not have a one-to-one correspondence with personalities for more than another century or two, so just forget that idea. And if you don’t change, well, what’s the point of living?

Evolutionary arguments may help us with identity-discounting. Evolutionary forces encourage agents to emphasize continuity or ancestry over resemblance in an agent’s selfness function. The major variable is reproduction rate over lifespan. This applies to genes or memes. But they can’t help us with time-discounting.

I think there may be a way to make this one work. I just haven’t thought of it yet.

A benevolent singleton will save us all.

This case takes more analysis than I am willing to do right now. My short answer is that I place a very low expected utility on singleton scenarios. I would almost rather have the universe eat, drink, and be merry for 33 million years, and then die.

I’m not ready to place my faith in a singleton. I want to work out what is wrong with the rest of this argument, and how we can survive without a singleton.

(Please don’t conclude from my arguments that you should go out and create a singleton. A singleton, once created, would be hard to undo. It should be deferred for nearly as long as possible. Maybe we don’t have 33 million years, but this essay doesn’t give you any reason not to wait a few thousand years at least.)

In conclusion

I think that the figures I’ve given here are conservative. I expect existential risk to be much greater than 3/1,000,000 per century. I expect there will continue to be externalities that cause suboptimal behavior, so that the actual risk will be greater even than the already-sufficient risk that rational agents would choose. I expect population and technology to continue to increase, and existential risk to be proportional to population times technology. Existential risk will very possibly increase exponentially, on top of the subjective-time exponential.

Our greatest chance for survival is that there’s some other possibility I haven’t thought of yet. Perhaps some of you will.

1 If you argue that the laws of physics may turn out to make this impossible, you don’t understand what “probability” means.

2 Evolutionary dynamics, the speed of light, and the Planck constant are the three great enablers and preventers of possible futures, which enable us to make predictions farther into the future and with greater confidence than seems intuitively reasonable.