GiveWell’s recent conversation with Carl Shulman contains some arguments as to why the risk of human extinction may be decreasing over time.
To be clear, those are arguments for a long-term decline of risk. One of the arguments was that eventually we will have passed major technological transitions, such as interstellar colonization or advanced artificial intelligence. I would expect short-term risk to go up as we go through those transitions before eventually falling to very low levels thereafter (if we survive until then).
Thanks for the clarification. I was definitely confused when I first read that document, because it seemed to paint a much rosier picture than I consider to be the case—that said, I agree that if we do pass those transitions safely, short-term risk will be much less of a concern.