I mean, I think it’s true. Nuclear winter was the only plausible story for even a full-out nuclear war causing something close to human extinction, and I think extreme nuclear winter is very unlikely.
Similarly, it is very hard to make a pathogen that could kill literally everyone. You just have too many isolated populations, and the human immune system is too good. It might become feasible soon, but it was not very feasible historically!
I feel my point still stands, but I have been struggling to articulate why. I'll make my case; please let me know if my logic is flawed. I'll admit that the post was a little hot-headed. That's my fault. But having thought about it for a few days, I still believe there's something important here.
In the post I'm arguing that survivorship bias due to existential risks means we have a biased view of how dangerous those risks are, and that we should take this into account when reasoning about them.
Your position (please correct me if I’m wrong) is that the examples I give are extremely unlikely to lead to human extinction, therefore these examples don’t support my argument.
To counter, I'd say that (1) given that it has never happened, it's difficult to say with confidence what the outcome of a nuclear war or global pandemic would be, and (2) even if complete extinction is very unlikely, the argument I posed still applies to 90% extinction, 50% extinction, 10% extinction, and so on. If there are X% fewer people in a world that undergoes a global catastrophe, there are X% fewer observers of that world, which produces the survivorship bias argued for in the post.
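To make the observer-weighting effect concrete, here is a toy simulation (my own illustration, not from the post, with made-up numbers): worlds suffer a 90%-fatal catastrophe with some true probability, and we ask how often a randomly chosen *observer* finds a catastrophe in their world's history.

```python
import random

random.seed(0)
TRUE_CATASTROPHE_RATE = 0.5   # assumed rate, purely for illustration
BASE_POPULATION = 100          # observers in a world with no catastrophe
N_WORLDS = 100_000

observers_seeing_catastrophe = 0
total_observers = 0
for _ in range(N_WORLDS):
    catastrophe = random.random() < TRUE_CATASTROPHE_RATE
    # A catastrophe kills 90% of the population in that world.
    population = BASE_POPULATION // 10 if catastrophe else BASE_POPULATION
    total_observers += population
    if catastrophe:
        observers_seeing_catastrophe += population

observed_rate = observers_seeing_catastrophe / total_observers
print(f"true rate: {TRUE_CATASTROPHE_RATE}, "
      f"rate a random observer sees: {observed_rate:.3f}")
```

Even though half the worlds experience the catastrophe, a random observer sees one in their history only about 9% of the time (10 / 110), because catastrophe-worlds contain far fewer observers. That gap is exactly the bias I'm pointing at.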
This is similar to the argument that we should not be surprised to find ourselves alive on a hospitable planet where we can breathe the air and eat the things around us. There's a survivorship bias that selects for worlds on which we can live, and we're not around to observe the worlds on which we can't survive.