Exterminating life is rational

Followup to: This Failing Earth; Our society lacks good self-preservation mechanisms; Is short term planning in humans due to a short life or due to bias?

I don’t mean that deciding to exterminate life is rational. But if, as a society of rational agents, we each maximize our expected utility, this may inevitably lead to our exterminating life, or at least intelligent life.

Ed Regis reports on p. 216 of "Great Mambo Chicken and the Transhuman Condition" (Penguin Books, London, 1992):

Edward Teller had thought about it, the chance that the atomic explosion would light up the surrounding air and that this conflagration would then propagate itself around the world. Some of the bomb makers had even calculated the numerical odds of this actually happening, coming up with the figure of three chances in a million they’d incinerate the Earth. Nevertheless, they went ahead and exploded the bomb.

Was this a bad decision? Well, consider the expected value to the people involved. Without the bomb, there was a much, much greater than 3/1,000,000 chance that either a) they would be killed in the war, or b) they would be ruled by Nazis or the Japanese. The loss to them if they ignited the atmosphere would be another 30 or so years of life. The loss to them if they lost the war and/or were killed by their enemies would also be another 30 or so years of life. The loss in being conquered would also be large. Easy decision, really.
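
Here is a minimal sketch of that expected-value comparison in code. The 3-in-a-million ignition risk and the roughly 30-year stakes come from the passage above; the probability of losing the war without the bomb is a placeholder I have invented purely for illustration:

```python
# Rough expected-value comparison for the Trinity decision, from the
# decision-makers' point of view. The 3-in-a-million ignition risk and the
# ~30-year stakes are from the text; the chance of losing the war without
# the bomb is a placeholder.

p_ignite = 3 / 1_000_000     # estimated chance of igniting the atmosphere
p_lose_war = 0.5             # placeholder: chance of losing the war without the bomb
years_at_stake = 30          # roughly what either bad outcome costs the decision-makers

expected_loss_test = p_ignite * years_at_stake        # in years of life
expected_loss_no_test = p_lose_war * years_at_stake

print(f"expected loss if they test:  {expected_loss_test:.6f} years")
print(f"expected loss if they don't: {expected_loss_no_test:.1f} years")
# years_at_stake multiplies both sides, so the decision turns only on the
# probabilities -- the same reason lifespan cancels out of the equation later on.
```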

Suppose that, once a century, some party in a conflict chooses to use some technique to help win the conflict that has a p = 3/1,000,000 chance of eliminating life as we know it. Then our expected survival time is 100 times the sum from n = 1 to infinity of n·p·(1−p)^(n−1) years, which works out to 100/p ≈ 33,333,000 years.
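
For anyone who wants to check the arithmetic, here is a two-line sketch; the sum is just the mean of a geometric distribution, which is 1/p:

```python
# The century in which the fatal event occurs is geometrically distributed,
# so the expected number of centuries is sum_{n>=1} n*p*(1-p)**(n-1) = 1/p.

p = 3 / 1_000_000
expected_centuries = 1 / p                         # 333,333.3... centuries
print(f"{100 * expected_centuries:,.0f} years")    # ~33,333,333 years
```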

This supposition seems reasonable to me. There is a balance between offensive and defensive capability that shifts as technology develops. If technology keeps changing, it is inevitable that, much of the time, a technology will provide the ability to destroy all life before the counter-technology to defend against it has been developed. In the near future, biological weapons will be more able to wipe out life than we are able to defend against them. We may then develop the ability to defend against biological attacks; we may then be safe until the next dangerous technology.

If you believe in accelerating change, then the number of important events in a given time interval increases exponentially, or, equivalently, the time intervals that should be considered equivalent opportunities for important events shorten exponentially. The ~33 million years remaining to life are then subjective years, and must be mapped into realtime. If we suppose the subjective/real time ratio doubles every 100 years, this gives life an expected survival time of about 2000 more realtime years. If we instead use Ray Kurzweil's doubling time of about 2 years, this gives life about 40 remaining realtime years. (I don't recommend Ray's figure. I'm just giving it for those who do.)
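
Here is a rough sketch of one way to do that mapping, assuming the subjective/real time ratio grows as 2^(t/T) for a doubling period T. The continuous model and the helper function are my own, not anything from the original argument; only the doubling periods come from the text:

```python
import math

# Accumulated subjective time after t realtime years, if the subjective/real
# ratio is 2**(t/T), is (T / ln 2) * (2**(t/T) - 1). Invert that for t.

def realtime_years(subjective_years, doubling_period):
    return doubling_period * math.log2(
        1 + subjective_years * math.log(2) / doubling_period)

S = 33_333_333   # expected subjective years of survival, from the estimate above
for T in (100, 2):
    print(f"doubling every {T:>3} years -> ~{realtime_years(S, T):,.0f} realtime years")
# doubling every 100 years -> ~1,782 realtime years (roughly the 2000 in the text)
# doubling every   2 years -> ~47 realtime years (in the ballpark of the text's 40)
```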

Please understand that I am not yet another “prophet” bemoaning the foolishness of humanity. Just the opposite: I’m saying this is not something we will outgrow. If anything, becoming more rational only makes our doom more certain. For the agents who must actually make these decisions, it would be irrational not to take these risks. The fact that this level of risk-tolerance will inevitably lead to the snuffing out of all life does not make the expected utility of these risks negative for the agents involved.

I can think of only a few ways that rationality might not inevitably exterminate all life in the cosmologically (even geologically) near future:

  • We can outrun the danger: We can spread life to other planets, and to other solar systems, and to other galaxies, faster than we can spread destruction.

  • Technology will not continue to develop, but will stabilize in a state in which all defensive technologies provide absolute, 100%, fail-safe protection against all offensive technologies.

  • People will stop having conflicts.

  • Rational agents incorporate the benefits to others into their utility functions.

  • Rational agents with long lifespans will protect the future for themselves.

  • Utility functions will change so that it is no longer rational for decision-makers to take tiny chances of destroying life for any amount of utility gains.

  • Independent agents will cease to exist, or to be free (the Singleton scenario).

Let’s look at these one by one:

We can outrun the danger.

We will colonize other planets; but we may also figure out how to make the Sun go nova on demand. We will colonize other star systems; but we may also figure out how to liberate much of the energy in the black hole at the center of our galaxy in a giant explosion that will move outward at near the speed of light.

One problem with this idea is that apocalypses are correlated; one may trigger another. A disease may spread to another planet. The choice to use a planet-busting bomb on one planet may lead to its retaliatory use on another. It's not clear that spreading out and increasing in population actually makes life safer. Thinking in the other direction, a smaller human population (say, ten million) stuck here on Earth would be safer from human-instigated disasters.

But neither of those is my final objection. More important is that our compression of subjective time can be exponential, while our ability to escape from ever-broader swaths of destruction is limited by lightspeed.

Technology will stabilize in a safe state.

Maybe technology will stabilize, and we’ll run out of things to discover. If that were to happen, I would expect that conflicts would increase, because people would get bored. As I mentioned in another thread, one good explanation for the incessant and counterproductive wars in the Middle Ages—a reason some of the actors themselves gave in their writings—is that the nobility were bored. They did not have the concept of progress; they were just looking for something to give them purpose while waiting for Jesus to return.

But that’s not my final rejection. The big problem is that by “safe”, I mean really, really safe. We’re talking about bringing existential threats to chances less than 1 in a million per century. I don’t know of any defensive technology that can guarantee a less than 1 in a million failure rate.

People will stop having conflicts.

That’s a nice thought. A lot of people—maybe the majority of people—believe that we are inevitably progressing along a path to less violence and greater peace.

They thought that just before World War I. But that’s not my final rejection. Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.

But that's not my final rejection either. The bigger problem is that this isn't something that arises only in conflicts. All we need are desires. We're willing to tolerate risk to increase our utility. For instance, we're willing to take some unknown, but clearly greater than one-in-a-million, chance of the collapse of much of civilization due to global warming. In return for this risk, we get to enjoy a better lifestyle now.

Also, we haven't burned all physics textbooks along with all physicists. Yet I'm confident there is at least a one-in-a-million chance that, in the next 100 years, some physicist will figure out a way to reduce the Earth to powder, if not to crack spacetime itself and undo the entire universe. (In fact, I'd guess the chance is nearer to 1 in 10.)[1] We take this existential risk in return for a continued flow of benefits such as better graphics in Halo 3 and smaller iPods. And it's reasonable for us to do this, because an improvement in utility of 1% over an agent's lifespan is, to that agent, almost exactly balanced by a 1% chance of destroying the Universe.
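
Here is a quick sketch of why those two figures nearly balance, assuming (as the framing above suggests) that the post-destruction state has utility exactly zero; the function is mine, for illustration only:

```python
# Where is a rational expectation-maximizer indifferent between the status quo
# (lifetime utility U) and a gamble that multiplies U by (1 + g) but leaves
# utility 0 with probability p? Setting (1 - p) * (1 + g) * U = U and solving
# for p gives the largest acceptable risk.

def indifference_risk(gain):
    return 1 - 1 / (1 + gain)

g = 0.01
print(f"a {g:.0%} utility gain balances about a {indifference_risk(g):.3%} extinction risk")
# -> 0.990%, i.e. essentially the "1% for 1%" trade described above.
```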

The Wikipedia entry on Large Hadron Collider risk says, “In the book Our Final Century: Will the Human Race Survive the Twenty-first Century?, English cosmologist and astrophysicist Martin Rees calculated an upper limit of 1 in 50 million for the probability that the Large Hadron Collider will produce a global catastrophe or black hole.” The more authoritative “Review of the Safety of LHC Collisions” by the LHC Safety Assessment Group concluded that there was at most a 1 in 10^31 chance of destroying the Earth.

The LHC conclusions are criminally low. Their evidence was this: “Nature has already conducted the LHC experimental programme about one billion times via the collisions of cosmic rays with the Sun—and the Sun still exists.” There followed a couple of sentences of handwaving to the effect that if any other stars had turned to black holes due to collisions with cosmic rays, we would know it—apparently due to our flawless ability to detect black holes and ascertain what caused them—and therefore we can multiply this figure by the number of stars in the universe.

I believe there is much more than a one-in-a-billion chance that our understanding of one of the steps used in arriving at these figures is incorrect. Based on my experience with peer-reviewed papers, there's at least a one-in-ten chance that there's a basic arithmetic error in their paper that no one has noticed yet. My own estimate of the catastrophe risk is more like one in a million, once you correct for the anthropic principle and for the chance that there is a mistake in the argument. (That's based on a belief that the prior for anything likely enough that smart people even thought of the possibility should be larger than one in a billion, unless they were specifically trying to think of examples of low-probability possibilities, such as all of the air molecules in the room moving to one side.)
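
Here is a sketch of the structure of that correction. The chance that the argument is flawed echoes the guess above about unnoticed errors; the conditional risk given a flawed argument is a placeholder of my own, chosen only to show how the "argument may be wrong" term swamps the in-model bound:

```python
# Once you grant some probability that the safety argument itself is flawed,
# that term dominates the in-model bound.

p_flawed = 0.1                # chance the published safety argument contains a real mistake
p_doom_given_flawed = 1e-5    # placeholder: residual risk if the argument fails
p_doom_given_sound = 1e-31    # the in-model LSAG-style bound

p_doom = p_flawed * p_doom_given_flawed + (1 - p_flawed) * p_doom_given_sound
print(f"overall P(doom) ~ {p_doom:.0e}")
# ~1e-06: dominated entirely by the first term, and in the ballpark of the
# one-in-a-million figure suggested above.
```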

The Trinity test was done for the sake of winning World War II. But the LHC was turned on for… well, no practical advantage that I’ve heard of yet. It seems that we are willing to tolerate one-in-a-million chances of destroying the Earth for very little benefit. And this is rational, since the LHC will probably improve our lives by more than one part in a million.

Rational agents incorporate the benefits to others into their utility functions.

“But,” you say, “I wouldn’t risk a 1% chance of destroying the universe for a 1% increase in my utility!”

Well… yes, you would, if you're a rational expectation maximizer. You might take a much higher risk if your utility is in danger of going negative; but you cannot refuse a 0.99% risk unless you are not maximizing expected utility, or you assign negative utility to the null state after universe-destruction. (This seems difficult, but is worth exploring.) If you still think that you wouldn't, it's probably because you're thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience. It doesn't. It's a 1% increase in your utility. If you factor the rest of your universe into your utility function, then it's already in there.

The US national debt should be enough to convince you that people act in their self-interest. Even the most moral people—in fact, especially the “most moral” people—do not incorporate the benefits to others, especially future others, into their utility functions. If we did that, we would engage in massive eugenics programs. But eugenics is considered the greatest immorality.

But maybe they're just not as rational as you. Maybe you really are a rational saint who considers your own pleasure no more important than the pleasure of everyone else on Earth. Maybe you have never, ever bought anything for yourself that did not bring you as much benefit as the same amount of money would if spent to repair cleft palates or distribute vaccines or mosquito nets or water pumps in Africa. Maybe it's really true that, if you met the girl of your dreams and she loved you, and you won the lottery, put out an album that went platinum, and got published in Science, all in the same week, it would make an imperceptible change in your utility compared with a week in which everyone you knew died, Bernie Madoff spent all your money, and you were unfairly convicted of murder and diagnosed with cancer.

It doesn’t matter. Because you would be adding up everyone else’s utility, and everyone else is getting that 1% extra utility from the better graphics cards and the smaller iPods.

But that will stop you from risking atmospheric ignition to defeat the Nazis, right? Because you’ll incorporate them into your utility function? Well, that is a subset of the claim “People will stop having conflicts.” See above.

And even if you somehow worked around all these arguments, evolution, again, thwarts you.[2] Even if you don't agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents. The claim that rational agents are not selfish implies that rational agents are unfit.

Rational agents with long lifespans will protect the future for themselves.

The most familiar idea here is that, if people expect to live for millions of years, they will be "wiser" and take fewer risks with that time. But the flip side is that the dangers they are trying to avert also cost them that much more time: the long lifespan is at stake on both sides of the gamble. If they're deciding whether to risk igniting the atmosphere in order to lower the risk of being killed by Nazis, lifespan cancels out of the equation.

Also, if they live a million times longer than us, they're going to get a million times the benefit of those nicer iPods. They may be less willing to take an existential risk for something that will benefit them only temporarily. But benefits have a way of increasing, not decreasing, over time. The discoveries of the law of gravity and of the invisible hand benefit us in the 21st century more than they did the people of the 17th and 18th centuries.

But that’s not my final rejection. More important is time-discounting. Agents will time-discount, probably exponentially, due to uncertainty. If you considered benefits to the future without exponential time-discounting, the benefits to others and to future generations would outweigh any benefits to yourself so much that in many cases you wouldn’t even waste time trying to figure out what you wanted. And, since future generations will be able to get more utility out of the same resources, we’d all be obliged to kill ourselves, unless we reasonably think that we are contributing to the development of that capability.

Time discounting is always (so far) exponential, because discount functions that don't decay toward zero don't make sense over an unbounded future. I suppose you could use a trigonometric function instead for time discounting, but I don't think it would help.
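
A toy illustration of why exponential discounting keeps an unbounded future from dominating; the particular discount factor is an arbitrary choice of mine:

```python
# A constant benefit of u utils per period, discounted by a factor d < 1 each
# period, sums to the finite value u / (1 - d); the undiscounted sum diverges.

u, d = 1.0, 0.99
closed_form = u / (1 - d)                         # 100 utils in total
partial_sum = sum(u * d**t for t in range(1000))  # already ~99.996 utils
print(round(closed_form, 3), round(partial_sum, 3))
```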

Could a continued exponential population explosion outweigh exponential time-discounting? Well, you can't have a continued exponential population explosion, because of the speed of light and the Planck constant. (I leave the details as an exercise for the reader.)
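
For those who skip the exercise, here is a sketch under my own illustrative constants: resources reachable at light speed grow at best cubically with time, and any exponential eventually overtakes any cubic:

```python
# A population doubling every D years grows like 2**(t/D), while the resources
# reachable at light speed grow only like t**3. Whatever constant you put in
# front of t**3, the exponential eventually overtakes it.

D = 100                 # population doubling time, in years (placeholder)
lightcone_scale = 1e12  # placeholder scale factor on the t**3 light-cone bound

t = 1
while 2 ** (t / D) <= lightcone_scale * t ** 3:
    t += 1
print(f"with these constants, exponential growth hits the light-cone bound after ~{t:,} years")
```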

Also, even if you had no time-discounting, I think that a rational agent must do identity-discounting. You can’t stay you forever. If you change, the future you will be less like you, and weigh less strongly in your utility function. Objections to this generally assume that it makes sense to trace your identity by following your physical body. Physical bodies will not have a 1-1 correspondence with personalities for more than another century or two, so just forget that idea. And if you don’t change, well, what’s the point of living?

Evolutionary arguments may help us with self-discounting. Evolutionary forces encourage agents to weight continuity or ancestry more heavily than resemblance in their selfness functions; the major variable is reproduction rate relative to lifespan. This applies to genes or memes. But evolutionary arguments can't help us with time-discounting.

I think there may be a way to make this one work. I just haven’t thought of it yet.

A benevolent singleton will save us all.

This case takes more analysis than I am willing to do right now. My short answer is that I place a very low expected utility on singleton scenarios. I would almost rather have the universe eat, drink, and be merry for 33 million years, and then die.

I’m not ready to place my faith in a singleton. I want to work out what is wrong with the rest of this argument, and how we can survive without a singleton.

(Please don't conclude from my arguments that you should go out and create a singleton. A singleton, once created, is hard to undo. It should be deferred nearly as long as possible. Maybe we don't have 33 million years, but this essay doesn't give you any reason not to wait a few thousand years at least.)

In conclusion

I think that the figures I’ve given here are conservative. I expect existential risk to be much greater than 3/1,000,000 per century. I expect there will continue to be externalities that cause suboptimal behavior, so that the actual risk will be greater even than the already-sufficient risk that rational agents would choose. I expect population and technology to continue to increase, and existential risk to be proportional to population times technology. Existential risk will very possibly increase exponentially, on top of the subjective-time exponential.
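
To see how much that matters, here is a sketch with an exponentially growing per-century risk; the doubling rate is purely my own illustration:

```python
# What happens to the earlier ~33-million-year estimate if the per-century risk
# itself grows exponentially? Here it doubles each century from the
# 3-in-a-million baseline.

p0, growth = 3 / 1_000_000, 2.0

expected_centuries = 0.0
alive = 1.0      # probability of having survived all previous centuries
n = 0
while True:
    p_n = min(1.0, p0 * growth ** n)              # risk during century n
    expected_centuries += alive * p_n * (n + 1)   # dying in century n means living n+1 centuries
    alive *= 1 - p_n
    n += 1
    if p_n >= 1.0:   # the hazard has saturated; nothing survives beyond this point
        break

print(f"expected survival: ~{expected_centuries:,.0f} centuries")  # ~18 centuries, not ~333,000
```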

Our greatest chance for survival is that there’s some other possibility I haven’t thought of yet. Perhaps some of you will.

[1] If you argue that the laws of physics may turn out to make this impossible, you don’t understand what “probability” means.

[2] Evolutionary dynamics, the speed of light, and the Planck constant are the three great enablers and preventers of possible futures; they let us make predictions farther into the future, and with greater confidence, than seems intuitively reasonable.