Existential Risk

Last edit: 3 Feb 2021 4:28 UTC by Rob Bensinger

An existential risk (or x-risk) is a risk that poses astronomically large negative consequences for humanity, such as human extinction or permanent global totalitarianism.

Nick Bostrom introduced the term “existential risk” in his 2002 paper “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards”.[1] In the paper, Bostrom defined an existential risk as:

One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

The Oxford Future of Humanity Institute (FHI) was founded by Bostrom in 2005 in part to study existential risks. Other institutions with a generalist focus on existential risk include the Centre for the Study of Existential Risk.

FHI’s FAQ notes regarding the definition of “existential risk”:

An existential risk is one that threatens the entire future of humanity. [...]

“Humanity”, in this context, does not mean “the biological species Homo sapiens”. If we humans were to evolve into another species, or merge or replace ourselves with intelligent machines, this would not necessarily mean that an existential catastrophe had occurred — although it might if the quality of life enjoyed by those new life forms turns out to be far inferior to that enjoyed by humans.

Classification of Existential Risks

Bostrom[2] proposes a series of classifications for existential risks:

The total negative result of an existential catastrophe could amount to the loss of all potential future lives. A rough and conservative calculation[3] gives a total of 10^54 potential future human lives – smarter, happier, and kinder than we are. Hence, almost no other task would have as much positive impact as existential risk reduction.
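The scale of this claim can be made concrete with a few lines of arithmetic (an illustrative sketch; the specific risk-reduction figure is invented purely to dramatize the orders of magnitude involved):

```python
# Illustrative expected-value arithmetic using the 10^54 figure above.
POTENTIAL_FUTURE_LIVES = 1e54
CURRENT_LIVES = 8e9  # everyone alive today, roughly

# Even a minuscule reduction in total existential risk, say one
# billionth of one billionth of a percentage point...
risk_reduction = 1e-9 * 1e-9 * 0.01  # = 1e-20
expected_lives_saved = risk_reduction * POTENTIAL_FUTURE_LIVES

# ...has an expected value vastly exceeding that of saving every
# person alive today:
ratio = expected_lives_saved / CURRENT_LIVES
```

On these (hypothetical) numbers the expected lives saved come to 10^34, some twenty-four orders of magnitude more than the present population.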

Existential risks also present a unique challenge because of their irreversible nature. By definition, we will never experience and survive an extinction event[4], and so cannot learn from our mistakes. Existential risks are also subject to strong observational selection effects[5]: one cannot estimate their future probability from the historical record because, in Bayesian terms, the conditional probability of a past existential catastrophe given our present existence is always 0, no matter how high the true risk. Instead, indirect estimates must be used, such as evidence of existential catastrophes happening elsewhere. A high probability of extinction could function as a Great Filter, explaining why we see no evidence of space colonization.
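The selection effect described above can be illustrated with a small simulation (a minimal sketch; the function name and parameters are invented for illustration):

```python
import random

def survivor_observed_frequency(p_catastrophe, centuries, worlds, seed=0):
    """Simulate `worlds` independent histories. A catastrophe in any
    century removes that world's observers; survivors therefore always
    look back on a catastrophe-free record, whatever the true risk."""
    rng = random.Random(seed)
    survivors = 0
    for _ in range(worlds):
        if all(rng.random() >= p_catastrophe for _ in range(centuries)):
            survivors += 1
    # Every surviving world has zero catastrophes in its history, so the
    # frequency survivors observe is 0 regardless of p_catastrophe.
    observed_freq = 0.0 if survivors else None
    return survivors, observed_freq

# Even with a 10% per-century risk, every survivor sees a clean record:
survivors, freq = survivor_observed_frequency(0.10, 10, 100_000)
```

The surviving observers' historical frequency is identically zero whether the per-century risk is 10% or 0.001%, which is why the paragraph above says the past gives no usable estimate.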

Another related idea is that of a suffering risk (or s-risk).


The focus on existential risk on LessWrong dates back to Bostrom’s 2003 paper Astronomical Waste: The Opportunity Cost of Delayed Technological Development, which argues that “the chief goal for utilitarians should be to reduce existential risk”. Bostrom writes:

If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.
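The break-even figure in the quoted passage follows from a one-line comparison (magnitudes only, taken from the quote itself; this is a sketch, not Bostrom’s exact model):

```python
# If total future value V is realized only upon successful colonization,
# a risk reduction of one percentage point gains 0.01 * V in expectation,
# while a delay of d years forfeits roughly (d / T) * V, where T is the
# usable lifespan of the accessible future ("billions of years").
GALAXY_LIFESPAN_YEARS = 1e9
RISK_REDUCTION = 0.01

# Setting the two equal gives the delay worth trading for the reduction:
break_even_delay_years = RISK_REDUCTION * GALAXY_LIFESPAN_YEARS
# roughly the "over 10 million years" figure in the quote
```

Because T is so large, any realistic delay (years or decades) is negligible next to even tiny changes in the probability of colonization, which is the quote’s point.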

The concept is expanded upon in his 2013 paper Existential Risk Prevention as Global Priority.

References

  1. Bostrom, Nick (2002). “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards”. Journal of Evolution and Technology, Vol. 9, March 2002.

  2. Bostrom, Nick (2012). “Existential Risk Reduction as the Most Important Task for Humanity”. Global Policy, forthcoming.

  3. Bostrom, Nick; Sandberg, Anders; Ćirković, Milan M. (2010). “Anthropic Shadow: Observation Selection Effects and Human Extinction Risks”. Risk Analysis, Vol. 30, No. 10: 1495–1506.

  4. Bostrom, Nick; Ćirković, Milan M., eds. (2008). Global Catastrophic Risks. Oxford University Press.

  5. Ćirković, Milan M. (2008). “Observation Selection Effects and Global Catastrophic Risks”. In Global Catastrophic Risks. Oxford University Press.

  6. Yudkowsky, Eliezer (2008). “Cognitive Biases Potentially Affecting Judgment of Global Risks”. In Global Catastrophic Risks. Oxford University Press.

  7. Posner, Richard A. (2004). Catastrophe: Risk and Response. Oxford University Press.

Highlighted Posts

Some AI research areas and their relevance to existential safety

Andrew_Critch · 19 Nov 2020 3:18 UTC
168 points
37 comments · 50 min read · LW link

[Question] Forecasting Thread: Existential Risk

Amandango · 22 Sep 2020 3:44 UTC
42 points
40 comments · 2 min read · LW link

Developmental Stages of GPTs

orthonormal · 26 Jul 2020 22:03 UTC
127 points
73 comments · 7 min read · LW link

Some cruxes on impactful alternatives to AI policy work

Richard_Ngo · 10 Oct 2018 13:35 UTC
149 points
13 comments · 12 min read · LW link

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

Andrew_Critch · 31 Mar 2021 23:50 UTC
128 points
48 comments · 22 min read · LW link

Second-Order Existential Risk

Ideopunk · 1 Jul 2020 18:46 UTC
2 points
1 comment · 3 min read · LW link

Comparing AI Alignment Approaches to Minimize False Positive Risk

G Gordon Worley III · 30 Jun 2020 19:34 UTC
5 points
0 comments · 9 min read · LW link

How can I reduce existential risk from AI?

lukeprog · 13 Nov 2012 21:56 UTC
60 points
92 comments · 8 min read · LW link

Bayesian Adjustment Does Not Defeat Existential Risk Charity

steven0461 · 17 Mar 2013 8:50 UTC
77 points
92 comments · 34 min read · LW link

Critch on career advice for junior AI-x-risk-concerned researchers

Rob Bensinger · 12 May 2018 2:13 UTC
110 points
25 comments · 4 min read · LW link

A model I use when making plans to reduce AI x-risk

Ben Pace · 19 Jan 2018 0:21 UTC
66 points
41 comments · 6 min read · LW link

[Question] What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause?

capybaralet · 20 Aug 2019 21:45 UTC
29 points
27 comments · 1 min read · LW link

What I talk about when I talk about AI x-risk: 3 core claims I want machine learning researchers to address.

capybaralet · 2 Dec 2019 18:20 UTC
27 points
13 comments · 3 min read · LW link

Coronavirus as a test-run for X-risks

SDM · 13 Jun 2020 21:00 UTC
66 points
10 comments · 18 min read · LW link

Interpersonal Approaches for X-Risk Education

TurnTrout · 24 Jan 2018 0:47 UTC
10 points
10 comments · 1 min read · LW link

[Question] Implications of the Doomsday Argument for x-risk reduction

maximkazhenkov · 2 Apr 2020 21:42 UTC
5 points
17 comments · 1 min read · LW link

Other Existential Risks

multifoliaterose · 17 Aug 2010 21:24 UTC
40 points
124 comments · 11 min read · LW link

Existential Risk and Existential Hope: Definitions

owencb · 10 Jan 2015 19:09 UTC
14 points
38 comments · 1 min read · LW link

Climate change: existential risk?

katydee · 6 May 2011 6:19 UTC
7 points
26 comments · 1 min read · LW link

A list of good heuristics that the case for AI x-risk fails

capybaralet · 2 Dec 2019 19:26 UTC
22 points
14 comments · 2 min read · LW link

“Taking AI Risk Seriously” (thoughts by Critch)

Raemon · 29 Jan 2018 9:27 UTC
109 points
68 comments · 13 min read · LW link

State Space of X-Risk Trajectories

David_Kristoffersson · 9 Feb 2020 13:56 UTC
8 points
0 comments · 9 min read · LW link

Mini advent calendar of Xrisks: nanotechnology

Stuart_Armstrong · 5 Dec 2012 11:02 UTC
6 points
25 comments · 1 min read · LW link

Mini advent calendar of Xrisks: Pandemics

Stuart_Armstrong · 6 Dec 2012 13:44 UTC
4 points
21 comments · 1 min read · LW link

Mini advent calendar of Xrisks: nuclear war

Stuart_Armstrong · 4 Dec 2012 11:13 UTC
8 points
35 comments · 1 min read · LW link

Mini advent calendar of Xrisks: synthetic biology

Stuart_Armstrong · 4 Dec 2012 11:15 UTC
8 points
26 comments · 1 min read · LW link

Mini advent calendar of Xrisks: Artificial Intelligence

Stuart_Armstrong · 7 Dec 2012 11:26 UTC
5 points
5 comments · 1 min read · LW link

Don’t Fear The Filter

Scott Alexander · 29 May 2014 0:45 UTC
7 points
17 comments · 6 min read · LW link

Risk of Mass Human Suffering / Extinction due to Climate Emergency

willfranks · 14 Mar 2019 18:32 UTC
4 points
3 comments · 1 min read · LW link

Agential Risks: A Topic that Almost No One is Talking About

philosophytorres · 15 Oct 2016 18:41 UTC
16 points
31 comments · 8 min read · LW link

Discussion: weighting inside view versus outside view on extinction events

Ilverin · 25 Feb 2016 5:18 UTC
5 points
4 comments · 1 min read · LW link

Sleepwalk bias, self-defeating predictions and existential risk

Stefan_Schubert · 22 Apr 2016 18:31 UTC
19 points
11 comments · 3 min read · LW link

Existential risks open thread

John_Maxwell · 31 Mar 2013 0:52 UTC
16 points
47 comments · 1 min read · LW link

Existential Risk is a single category

Rafael Harth · 9 Aug 2020 17:47 UTC
24 points
7 comments · 1 min read · LW link

Evolution, bias and global risk

Giles · 23 May 2011 0:32 UTC
5 points
10 comments · 5 min read · LW link

Nuclear war is unlikely to cause human extinction

landfish · 7 Nov 2020 5:42 UTC
82 points
36 comments · 11 min read · LW link

[Ceremony Intro + ] Darkness

Ruby · 21 Feb 2021 18:06 UTC
23 points
0 comments · 4 min read · LW link

FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

Palus Astra · 1 Apr 2020 1:02 UTC
7 points
1 comment · 46 min read · LW link

Don’t Condition on no Catastrophes

Scott Garrabrant · 21 Feb 2018 21:50 UTC
31 points
8 comments · 2 min read · LW link

Jaan Tallinn’s Philanthropic Pledge

jaan · 22 Feb 2020 10:03 UTC
73 points
1 comment · 1 min read · LW link

Spiracular · 17 Sep 2019 2:41 UTC
76 points
15 comments · 18 min read · LW link · 2 nominations · 2 reviews

Should ethicists be inside or outside a profession?

Eliezer Yudkowsky · 12 Dec 2018 1:40 UTC
77 points
6 comments · 9 min read · LW link

Global insect declines: Why aren’t we all dead yet?

eukaryote · 1 Apr 2018 20:38 UTC
28 points
26 comments · 1 min read · LW link

New organization - Future of Life Institute (FLI)

Vika · 14 Jun 2014 23:00 UTC
69 points
36 comments · 1 min read · LW link

The Vulnerable World Hypothesis (by Bostrom)

Ben Pace · 6 Nov 2018 20:05 UTC
50 points
17 comments · 4 min read · LW link

Russian x-risks newsletter, summer 2019

avturchin · 7 Sep 2019 9:50 UTC
39 points
5 comments · 4 min read · LW link

Update on establishment of Cambridge’s Centre for Study of Existential Risk

Sean_o_h · 12 Aug 2013 16:11 UTC
60 points
15 comments · 3 min read · LW link

Being Half-Rational About Pascal’s Wager is Even Worse

Eliezer Yudkowsky · 18 Apr 2013 5:20 UTC
41 points
166 comments · 9 min read · LW link

Attending to Now

ialdabaoth · 8 Nov 2017 16:53 UTC
27 points
2 comments · 3 min read · LW link

Russian x-risks newsletter spring 2020

avturchin · 4 Jun 2020 14:27 UTC
16 points
4 comments · 1 min read · LW link

[AN #93]: The Precipice we’re standing at, and how we can back away from it

rohinmshah · 1 Apr 2020 17:10 UTC
24 points
0 comments · 7 min read · LW link

“Can We Survive Technology” by von Neumann

Ben Pace · 18 Aug 2019 18:58 UTC
32 points
2 comments · 1 min read · LW link

LA-602 vs. RHIC Review

Eliezer Yudkowsky · 19 Jun 2008 10:00 UTC
45 points
62 comments · 6 min read · LW link

Allegory On AI Risk, Game Theory, and Mithril

James_Miller · 13 Feb 2017 20:41 UTC
41 points
57 comments · 3 min read · LW link

A Proposed Adjustment to the Astronomical Waste Argument

Nick_Beckstead · 27 May 2013 3:39 UTC
34 points
38 comments · 12 min read · LW link

Q&A with experts on risks from AI #1

XiXiDu · 8 Jan 2012 11:46 UTC
45 points
67 comments · 9 min read · LW link

Existential Risk

lukeprog · 15 Nov 2011 14:23 UTC
34 points
108 comments · 4 min read · LW link

A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 1)

philosophytorres · 12 May 2018 13:34 UTC
18 points
1 comment · 17 min read · LW link

against “AI risk”

Wei_Dai · 11 Apr 2012 22:46 UTC
35 points
91 comments · 1 min read · LW link

A Parable of Elites and Takeoffs

gwern · 30 Jun 2014 23:04 UTC
39 points
98 comments · 5 min read · LW link

Absent coordination, future technology will cause human extinction

landfish · 3 Feb 2020 21:52 UTC
21 points
12 comments · 5 min read · LW link

[Speech] Worlds That Never Were

mingyuan · 12 Jan 2019 19:53 UTC
23 points
0 comments · 3 min read · LW link

Should We Ban Physics?

Eliezer Yudkowsky · 21 Jul 2008 8:12 UTC
14 points
22 comments · 2 min read · LW link

[Question] What risks concern you which don’t seem to have been seriously considered by the community?

plex · 28 Oct 2020 18:27 UTC
5 points
34 comments · 1 min read · LW link

Techniques for optimizing worst-case performance

paulfchristiano · 28 Jan 2019 21:29 UTC
23 points
12 comments · 8 min read · LW link

A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 2)

philosophytorres · 13 May 2018 19:41 UTC
12 points
1 comment · 17 min read · LW link

Selection Effects in estimates of Global Catastrophic Risk

bentarm · 4 Nov 2011 9:14 UTC
32 points
64 comments · 1 min read · LW link

The mind-killer

Paul Crowley · 2 May 2009 16:49 UTC
29 points
160 comments · 2 min read · LW link

People who want to save the world

Giles · 15 May 2011 0:44 UTC
5 points
247 comments · 1 min read · LW link

Grey Goo Requires AI

harsimony · 15 Jan 2021 4:45 UTC
8 points
11 comments · 4 min read · LW link

Notes on “Bioterror and Biowarfare” (2006)

MichaelA · 2 Mar 2021 0:43 UTC
7 points
3 comments · 4 min read · LW link

Texas Freeze Retrospective: meetup notes

jchan · 3 Mar 2021 14:48 UTC
55 points
6 comments · 11 min read · LW link