Existential risk is a single category

A lot of people speak in terms of “existential risk from artificial intelligence” or “existential risk from nuclear war.” While this is fine as an approximation, I rarely see it pointed out that this is not how risk actually works. Existential risk refers to the probability of a set of outcomes, and those outcomes are not defined in terms of their causes.

To illustrate why this is a problem, observe that there are numerous ways for two or more things-we-call-existential-risks to contribute jointly to a bad outcome. Imagine nuclear weapons causing a partial collapse of civilization, which then leads to an extremist group ending the world with an engineered virus. Do we attribute this to existential risk from nuclear weapons or from bioterrorism? That question is not well-defined, and it does not matter. All that matters is how much each factor contributes to [existential risk of any form].
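One way to make “contributes” precise, as a minimal sketch rather than a standard definition (the symbols here are illustrative): let $C$ be the event that an existential catastrophe of any kind occurs, and let $X$ be some factor, such as nuclear arsenals or climate change. Then the contribution of $X$ is roughly

$$\Delta_X \;=\; P(C) \;-\; P(C \mid X \text{ removed or fully mitigated}),$$

i.e., how much total existential risk would fall if $X$ went away. Nothing in this quantity asks whether $X$ was “the cause” of the final outcome.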

Thus, ask not “is climate change an existential risk,” but “does climate change contribute to existential risk?” Everything we care about is contained in the second question.