Stable and Unstable Risks

Related: Existential Risk, 9/26 is Petrov Day

Existential risks, which Nick Bostrom defines as risks that would “either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential,” are a significant threat to the world as we know it. In fact, they may be one of the most pressing issues facing humanity today.

The likelihood of some risks may stay relatively constant over time. On a basic view of asteroid impact, for example, there is a certain probability each year that a “killer asteroid” hits the Earth, and this probability is more or less the same from year to year. This is what I refer to as a “stable risk.”

However, the likelihood of other existential risks seems to fluctuate, often quite dramatically. Many of these “unstable risks” are related to human activity.
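The distinction can be made concrete with a toy calculation. A minimal sketch, assuming independent annual probabilities (the specific numbers below are purely illustrative, not estimates of any real risk): a stable risk has the same small probability every year, while an unstable risk has a probability that spikes during a crisis period.

```python
def cumulative_risk(annual_probabilities):
    """Probability that the catastrophe occurs at least once,
    given a sequence of independent annual probabilities."""
    p_no_event = 1.0
    for p in annual_probabilities:
        p_no_event *= (1.0 - p)
    return 1.0 - p_no_event

# Stable risk: the same (illustrative) annual probability for a century.
stable = cumulative_risk([0.0001] * 100)

# Unstable risk: the same baseline, but with a decade-long crisis
# during which the annual probability is 100x higher.
unstable = cumulative_risk([0.0001] * 40 + [0.01] * 10 + [0.0001] * 50)

print(stable)    # about 1% over the century
print(unstable)  # dominated by the ten crisis years
```

Even though the crisis lasts only a tenth of the period, it accounts for most of the cumulative risk, which is why forecasting unstable risks depends so heavily on anticipating such periods.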

For instance, the likelihood of a nuclear war at sufficient scale to be an existential threat seems contingent on various geopolitical factors that are difficult to predict in advance. That said, the likelihood of this risk has clearly changed throughout recent history. Nuclear war was obviously not an existential risk before nuclear weapons were invented, and the risk was clearly greater during the Cuban Missile Crisis than it is today.

Many of these unstable, human-created risks seem based largely on advanced technology. Potential risks like gray goo rely on theorized technologies that have yet to be developed (and indeed may never be developed). While this is good news for the present day, it also means that we have to be vigilant for the emergence of potential new threats as human technology advances.

GiveWell’s recent conversation with Carl Shulman contains some arguments as to why the risk of human extinction may be decreasing over time. However, it strikes me as perhaps more likely that the risk of human extinction is increasing over time, or at the very least becoming less stable, as technology increases the amount of power available to individuals and civilizations.

After all, the very concept of human-created unstable existential risks is a recent one. Even if Julius Caesar, Genghis Khan, or Queen Victoria had for some reason decided to destroy human civilization, it seems almost certain that they would have failed, even given all the resources of their empires.

The same cannot be said for Kennedy or Khrushchev.