When I first got here I thought “existential risk” referred to a generalization of the ideas related to catastrophic climate change. That is, if we should plan for the low-probability but deadly event that climate change will be very severe, then we should also plan for other low-probability (or far-future) catastrophes: asteroid impacts, biological and nuclear weapons, and unfriendly AI, among others. I was surprised that, of the existential risks discussed, catastrophic climate change never seems to come up at all.
It’s possible that this is an innocent result of specialization: people here spend most of their time thinking about AI, and not about other things that they aren’t trained for.
If there were an organization committed to clarifying how we think about planning for low-probability risks, that organization really ought to consider climate change among other risks. It would be an interesting thing to study: how far in the future is it reasonable for present-day institutions to plan? How can scientists with predictions of possible catastrophe effectively communicate to governments, businesses, etc. that they need to plan, without starting a panic? The art of planning for existential risks in general is something that could really benefit from more study.
And it ought to include well-studied and well-publicized risks (like climate change) in addition to less-studied and less-publicized risks (like risks from technology not yet developed). People have been planning for floods for a long time; surely people concerned about other risks can learn something from people who plan for the risk of floods.
But I don’t think SIAI or LessWrong is equipped for that mission.
I think you’re looking for the Future of Humanity Institute and their work on Global Catastrophic Risks.