Great work as always. I'm not sure I agree that we should be focusing on flourishing, conditional on survival. I'd argue the bigger risk is astronomical suffering, which seems close to the default outcome: e.g. digital minds, wild animals in space colonization, and unknown unknowns. It's possible the interventions overlap, but I'm skeptical.
I also don't love the citations for a low p(doom). Toby Ord's guess is from 2020, the superforecaster survey is from 2022, and prediction markets aren't really optimized for this sort of question. Something like Eli Lifland's guess or the AI Impacts surveys are where I would start instead.