Here are the Future of Humanity Institute’s survey results from its Global Catastrophic Risks conference. The median estimate of total extinction risk by 2100 is 19%, with 5% for AI-driven extinction by 2100:
http://www.fhi.ox.ac.uk/selected_outputs/fohi_publications/global_catastrophic_risks_survey
Unfortunately, the survey didn’t ask for the probability of AI development by 2100, so one can’t derive the probability of catastrophe conditional on AI development from it.
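To make the missing-conditional point concrete: the survey’s 5% figure is the unconditional probability of AI-driven extinction, which equals the conditional risk times the probability that advanced AI is developed at all. A minimal sketch, where the P(AI by 2100) values are purely illustrative assumptions and not from the survey:

```python
# The survey median: unconditional P(AI-driven extinction by 2100).
p_ai_extinction = 0.05

# P(AI extinction) = P(extinction | AI developed) * P(AI developed),
# so the conditional risk depends heavily on the unasked question.
# These p_ai_developed values are hypothetical, for illustration only.
for p_ai_developed in (0.9, 0.5, 0.25):
    p_conditional = p_ai_extinction / p_ai_developed
    print(f"P(AI by 2100)={p_ai_developed:.2f} -> "
          f"P(extinction | AI developed)={p_conditional:.2f}")
```

The same 5% median is compatible with a conditional risk anywhere from modest to quite large, depending on how likely one thinks AI development is by 2100.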
That sample is drawn from people who consider these risks important enough to attend a conference on the subject.
That seems like a self-selected sample of those with high estimates of p(DOOM).
The fact that this is probably a biased sample from the far end of a long tail should inform interpretations of the results.
There is also the unpacking bias mentioned in the survey PDF. Cutting in the other direction are some knowledge effects. Note too that most of the attendees were not AI specialists, but experts on asteroids, nukes, bioweapons, cost-benefit analysis, astrophysics, and other non-AI risks. In light of that, it’s still interesting that the median AI risk was more than a quarter of the median total risk.
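The "more than a quarter" claim is easy to check from the two medians quoted above:

```python
# Survey medians quoted above.
median_total = 0.19  # median total extinction risk by 2100
median_ai = 0.05     # median AI-driven extinction risk by 2100

# AI's share of the median total risk.
share = median_ai / median_total
print(f"AI share of median total risk: {share:.0%}")  # about 26%
```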
There’s also the possibility that people dismiss the risk out of hand, without even thinking, and that the more you look into the facts, the more your estimate rises. In that case, the people at the conference simply have the most facts.