Note: I’m writing every day in November; see my blog for disclaimers.
When considering existential risk, there’s a particular instance of survivorship bias that seems ever-present and which (in my opinion) impacts how x-risk debates tend to go.
We do not exist in the world that got obliterated during the Cold War. We do not exist in the world that got wiped out by COVID. We can infer basically nothing about the probability of existential risks, because we’ll only ever be alive in the universes where we survived them.
This has some significant effects: we can’t really say how effective our governments are at handling existential-level disasters. In a sense, it was inevitable that we survived the Cuban Missile Crisis, that the Nazis didn’t build & launch a nuclear bomb, that Stanislav Petrov waited for more evidence. I’m going to paste the items from Wikipedia’s list of nuclear close calls, just to stress how many possibly-existential threats we’ve managed to get through:
1950–1953: Korean War
1954: First Indochina War
1956: Suez Crisis
1957: US accidental bomb drop in New Mexico
1958: Second Taiwan Strait Crisis
1958: US accidental bomb drop in Savannah, Georgia
1960: US false alarm from moonrise
1961: US false alarm from communications failure
1961: US strategic bomber crash in California
1961: US strategic bomber crash in North Carolina
1962: Cuban Missile Crisis
1962: Soviet averted launch of nuclear torpedo
1962: Soviet nuclear weapons in Cuba
1962: US false alarm at interceptor airbase
1962: US loss of ICBM launch authority
1962: US mistaken order during Cuban Missile Crisis
1962: US scramble of interceptors
1964: US strategic bomber crash in Maryland
1965: US attack aircraft falling off carrier
1965: US false alarm from blackout computer errors
1966: French false alarm from weather (likely)
1966: US strategic bomber crash in Spain
1967: US false alarm from weather
1968: US strategic bomber crash in Greenland
1968–1969: Vietnam War
1969: DPRK shootdown of US EC-121 early-warning aircraft
1969: Sino-Soviet conflict
1973: Yom Kippur War
1979: US false alarm from computer training scenario
1980: Explosion at US missile silo
1980: US false alarm from Soviet missile exercise
1983: Able Archer 83 NATO exercise
1983: Soviet false alarm from weather (likely)
1991: Coalition nuclear weapons
1991: Gulf War
1991: Israeli nuclear weapons
1991: Tornado at US strategic bomber airbase
1995: Russian false alarm from Norwegian research rocket
2007: Improper transport of US nuclear weapons
2017–2018: North Korea crisis
2019: India-Pakistan conflict
2022–present: Russian invasion of Ukraine
That’s… a lot of luck.
And sure, very few of them would likely have been completely humanity-ending existential-level threats. But the list of laboratory biosecurity incidents is hardly short either:
1903 (Burkholderia mallei): Lab worker infected with glanders during guinea pig autopsy
1932 (B virus): Researcher died after monkey bite; virus named after victim Brebner
1943-04-27 (Scrub typhus): Dora Lush died from accidental needle prick while developing vaccine
1960–1993 (Foot-and-mouth disease): 13+ accidental releases from European labs causing outbreaks
1963–1977 (Various viruses): Multiple infections at Ibadan Virus Research Laboratory
1966 (Smallpox): Outbreak began with photographer at Birmingham Medical School
1967 (Marburg virus): 31 infected (7 died) after exposure to imported African green monkeys
1969 (Lassa fever): Two scientists infected, one died in lab accident
1971-07-30 (Smallpox): Soviet bioweapons test caused 10 infections, 3 deaths
1972-03 (Smallpox): Lab assistant infected 4 others at London School of Hygiene
1976 (Ebola): Accidental needle stick caused lab infection in UK
1977–1979 (H1N1 influenza): Possible lab escape of 1950s virus in Soviet Union/China
1978-08-11 (Smallpox): Janet Parker died, last recorded smallpox death from lab exposure
1978 (Foot-and-mouth disease): Released to animals outside Plum Island center
1979-04-02 (Anthrax): Sverdlovsk leak killed ~100 from Soviet military facility
1988 (Marburg virus): Researcher Ustinov died after accidental syringe prick
1990 (Marburg virus): Lab accident in Koltsovo killed one worker
1994 (Sabiá virus): Centrifuge accident at BSL3 caused infection
2001 (Anthrax): Mailed anthrax letters killed 5, infected 17; traced to researcher’s lab
2002 (Anthrax): Fort Detrick containment breach
2002 (West Nile virus): Two infections through dermal punctures
2002 (Arthroderma benhamiae): Lab incident in Japan
2003-08 (SARS): Student infected during lab renovations in Singapore
2003-12 (SARS): Scientist infected due to laboratory misconduct in Taiwan
2004-04 (SARS): Two researchers infected in Beijing, spread to ~6 others
2004-05-05 (Ebola): Russian researcher died after accidental needle prick
2004 (Foot-and-mouth disease): Two outbreaks at Plum Island
2004 (Tuberculosis): Three staff infected while developing vaccine
2005 (H2N2 influenza): Pandemic strain sent to 5,000+ labs in testing kits
2005–2015 (Anthrax): Army facility shipped live anthrax 74+ times to dozens of labs
2006 (Brucella): Lab infection at Texas A&M
2006 (Q fever): Lab infection at Texas A&M
2007-07 (Foot-and-mouth disease): UK lab leak via broken pipes infected farms, 2,000+ animals culled
2009-03-12 (Ebola): German researcher infected in lab accident
2009-09-13 (Yersinia pestis): Malcolm Casadaban died from exposure to plague strain
2010 (Classical swine fever): Two animals infected, then euthanized
2010 (Cowpox): First US lab-acquired human cowpox from cross-contamination
2011 (Dengue): Scientist infected through mosquito bite in Australian lab
2012 (Anthrax): UK lab sent live anthrax samples by mistake
2012-04-28 (Neisseria meningitidis): Richard Din died during vaccine research
2013 (H5N1 influenza): Researcher punctured hand with needle at Milwaukee lab
2014 (H1N1 influenza): Eight mice possibly infected with SARS/H1N1 escaped containment
2014-03-12 (H5N1 influenza): CDC accidentally shipped H5N1-contaminated vials
2014-06-05 (Anthrax): 75 CDC personnel exposed to viable anthrax
2014-07-01 (Smallpox): Six vials of viable 1950s smallpox discovered at NIH
2014 (Burkholderia pseudomallei): Bacteria escaped BSL-3 lab, infected monkeys
2014 (Ebola): Senegalese epidemiologist infected at Sierra Leone BSL-4 lab
2014 (Dengue): Lab worker infected through needlestick injury in South Korea
2016 (Zika virus): Researcher infected in lab accident at University of Pittsburgh
2016 (Nocardia testacea): 30 CSIRO staff exposed to toxic bacteria in Canberra
2016–2017 (Brucella): Hospital cleaning staff infected in Nanchang, China
2018 (Ebola): Hungarian lab worker exposed but asymptomatic
2019-09-17 (Unknown): Gas explosion at Vector lab in Russia, one worker burned
2019 (Prions): Émilie Jaumain died from vCJD 10 years after lab accident
2019 (Brucella): 65 workers infected at Lanzhou institute; 10,000+ residents affected
2021 (SARS-CoV-2): Taiwan lab worker contracted COVID Delta variant from facility
2022 (Polio): Employee infected with wild poliovirus type 3 at Dutch vaccine facility
I’m making you scroll through all these things on purpose. Saying “57 lab leaks and 42 nuclear close calls” just leads to scope insensitivity about the dangers involved. Go back and read at least two random entries from the lists above. There are some “fun” ones, like “UK lab sent live anthrax samples by mistake”.
Not every one of these is a humanity-ending event. But there is a survivorship bias at play here, and this should impact our assessment of the risks involved. It’s very easy to point towards nuclear disarmament treaties and our current precautions around bio-risks as models for how to think about AI x-risk. And I think these are great. Or at least, they’re the best we’ve got. They definitely provide some non-zero amount of risk mitigation.
But we are fundamentally unable to gauge the probability of existential risk, because the world looks the same whether humanity got 1-in-a-hundred lucky or 1-in-a-trillion lucky.
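To make this concrete, here’s a minimal Monte Carlo sketch in Python (the per-event extinction probabilities are made-up illustrative numbers, not estimates of anything). It simulates many worlds facing a run of close calls and checks what the surviving worlds observe:

```python
import random

def surviving_worlds(per_event_risk, n_events=42, n_worlds=100_000):
    """Simulate worlds that each face n_events close calls, where every
    close call causes extinction with probability per_event_risk.
    Return how many worlds survive all of them."""
    survivors = 0
    for _ in range(n_worlds):
        if all(random.random() > per_event_risk for _ in range(n_events)):
            survivors += 1
    return survivors

# Illustrative risk levels, not real estimates.
for risk in (1e-2, 1e-6, 1e-12):
    n = surviving_worlds(risk)
    # Every surviving world sees the identical record: "42 close calls,
    # all survived". The record itself carries no information about risk.
    print(f"per-event risk {risk:g}: {n} of 100,000 worlds survive, "
          f"each observing the same clean record")
```

The number of surviving worlds varies enormously with the underlying risk, but from inside any one surviving world the historical record is identical, which is exactly why we can’t read the risk level off our own history.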
None of this should really be an update. Existential risks are absolute and forever; basically any action that reduces them is worth taking. But in case there’s anyone reading this who thinks x-risk maybe isn’t all that bad, this one’s for you.
AI usage disclaimer: Claude 4.5 Sonnet helped reformat and summarise the lab-leak list, which I skimmed for correctness.
Not one of these would have ended humanity, or even particularly reduced its population long-term, so the evidence for survivorship bias from this list is effectively zero.
Claiming that literally no nuclear incident or biological risk could have “particularly reduced its population” seems like a very strong claim to make? Especially given that your argument only holds if you’re correct (e.g., if one of these had ended humanity, we wouldn’t be having this conversation).
I mean, I think it’s true. Nuclear winter was the only plausible story for even an all-out nuclear war causing something close to human extinction, and I think extreme nuclear winter is very unlikely.
Similarly, it is very hard to make a pathogen that could kill literally everyone. You just have too many isolated populations, and the human immune system is too good. It might become feasible soon, but it was not very feasible historically!
I feel my point still stands, but I’ve been struggling to articulate why. I’ll make my case; please let me know if my logic is flawed. I’ll admit that the post was a little hot-headed. That’s my fault. But having thought about it for a few days, I still believe there’s something important here.
In the post I’m arguing that survivorship bias around existential risks means we have a biased view of their probability, and that we should take this into account when reasoning about them.
Your position (please correct me if I’m wrong) is that the examples I give are extremely unlikely to lead to human extinction, therefore these examples don’t support my argument.
To counter, I think that (1) given that it’s never happened, it’s difficult to say with confidence what the outcome of nuclear war or a global pathogen would be, but (2) even if complete extinction is very unlikely, the argument I posed still applies to 90% extinction, 50% extinction, 10% extinction, etc. If there are X% fewer people in the world that undergoes a global catastrophe, that’s still X% fewer people who observe that world, which leads to a survivorship bias as argued in the post (see the sketch below).
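One way to make that concrete is to weight worlds by how many observers they contain. Here’s a minimal sketch (Python; the 50% catastrophe probability and 90% kill fraction are made-up numbers for illustration) showing that a randomly sampled observer sees catastrophes far less often than they actually happen:

```python
import random

N_WORLDS = 100_000
P_CATASTROPHE = 0.5   # true per-world catastrophe probability (made up)
KILL_FRACTION = 0.9   # catastrophe kills 90% of the population (made up)
BASE_POP = 1_000_000

worlds = []
for _ in range(N_WORLDS):
    hit = random.random() < P_CATASTROPHE
    pop = int(BASE_POP * (1 - KILL_FRACTION)) if hit else BASE_POP
    worlds.append((hit, pop))

# A randomly chosen observer lands in a world with probability
# proportional to its population, so catastrophe worlds (which have
# fewer observers) are systematically underrepresented in experience.
total_pop = sum(pop for _, pop in worlds)
observer_rate = sum(pop for hit, pop in worlds if hit) / total_pop
true_rate = sum(1 for hit, _ in worlds if hit) / N_WORLDS

print(f"true catastrophe rate:        {true_rate:.3f}")      # ~0.50
print(f"rate a random observer sees:  {observer_rate:.3f}")  # ~0.09
```

With these numbers, a coin-flip catastrophe is something a typical observer sees only about 9% of the time, which is the sense in which even partial extinction distorts what observers get to see.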
This is similar to the argument that we should not be surprised to be alive on a hospitable planet where we can breathe the air and eat the things around us. There’s a survivorship bias that selects for worlds on which we can live, and we’re not around to observe the worlds on which we can’t survive.
My claim is that no nuclear bomb incident would have killed more than 25% of the population: roughly 500 million people in 1950, or one billion in 1970.
The reasoning is simple: a single nuclear bomb can kill at most a few hundred thousand people. At the height of the Cold War there were a few thousand bombs on each side, most of which were aimed not at population centres but at second-strike capabilities in rural areas. Knock-on effects like famine could kill more, but I doubt they would be worse than WW2, since the number of direct deaths would be smaller. It would likely lead to a wider conventional war, but again, WW2 is your ballpark for deaths from an all-out global war.
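For what it’s worth, here’s that estimate as a quick back-of-envelope calculation (a minimal Python sketch; every input below is an assumption pulled from the rough figures in this paragraph, not a sourced number):

```python
# All inputs are the rough figures from the paragraph above,
# chosen as assumptions rather than sourced estimates.
warheads_per_side = 3_000         # "a few thousand bombs on each side"
sides = 2
frac_aimed_at_cities = 0.25       # most target second-strike sites instead
deaths_per_city_strike = 300_000  # "a few hundred thousand people at a time"
knock_on_multiplier = 2.0         # famine/war roughly doubling direct deaths
world_pop_1970 = 3_700_000_000

direct = warheads_per_side * sides * frac_aimed_at_cities * deaths_per_city_strike
total = direct * knock_on_multiplier
print(f"direct deaths: {direct / 1e6:.0f}M")
print(f"with knock-on effects: {total / 1e6:.0f}M "
      f"({100 * total / world_pop_1970:.0f}% of the 1970 population)")
```

With those inputs you get roughly 450 million direct deaths and about 900 million once knock-on effects are included, i.e. close to the 25% / one-billion ceiling claimed above.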
Making an anthropic update from something that at worst would have reduced world population by 25 percent is basically identical to reading tea leaves, especially if you don’t update the other way from WW1s and WW2s and other assorted disasters which majorly reduced world population.
Maybe we are the luckiest timeline. But the evidence for that is not strong enough to meaningfully change your plans.