I looked at Tetlock’s Existential Risk Persuasion Tournament results, and noticed some oddities. The headline result is of course “median superforecaster gave a 0.38% risk of extinction due to AI by 2100, while the median AI domain expert gave a 3.9% risk of extinction.” But all the forecasters seem to have huge disagreements from my worldview on a few questions:
They divided forecasters into “AI-Concerned” and “AI-Skeptic” clusters. The latter gave a 0.0001% probability for AI catastrophic risk before 2030, and even lower than that (displayed as 0%) for AI extinction risk. This is incredibly low, and I don’t think you can justify probabilities this low without a really good reference class.
Both the AI-Concerned and AI-Skeptic clusters gave low probabilities for a space colony before 2030, with medians of 0.01% and “0%” respectively.
Both groups gave numbers I would disagree with for the estimated year of human extinction: year 3500 for the AI-Concerned, and 28000 for the AI-Skeptic. Page 339 suggests that none of the 585 survey participants gave a number above 5 million years, whereas it seems plausible to me, and probably to many EA/LW people who hold the “finite time of perils” thesis, that humanity survives for 10^12 years or more, which would put the expected survival time well over 10^10 years. The justification given for the low forecasts, even among people who accepted the “time of perils” arguments, seems to be that conditional on surviving for millions of years, humanity will probably become digital. But even a 1% chance of the biological human population remaining above the “extinction” threshold of 5,000 still gives an expected survival time in the billions of years.
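To make the arithmetic in that last point explicit, here is a back-of-the-envelope sketch. The numbers are illustrative assumptions drawn from the discussion above: a 1% chance that biological humanity survives ~10^12 years, and otherwise an extinction date around the AI-Skeptic median of year 28000.

```python
# Illustrative expected-survival-time calculation (hypothetical numbers):
#  - 1% chance the biological human population stays above the 5,000
#    "extinction" threshold for ~10^12 years
#  - otherwise, extinction around year 28000 (the AI-Skeptic median)
p_long_survival = 0.01
long_horizon_years = 1e12
short_horizon_years = 28_000

expected_years = (p_long_survival * long_horizon_years
                  + (1 - p_long_survival) * short_horizon_years)
print(f"expected survival: {expected_years:.3e} years")  # ~1e10, i.e. billions
```

Even with the long-survival probability cut to 0.1%, the expectation stays around 10^9 years, far above any answer the participants gave.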
I am not a forecaster and would probably be soundly beaten in any real forecasting tournament, but perhaps there is a bias against outlandish-seeming forecasts, strongest in this last question, that also affects the headline results.
I believe the extinction year question was asking for a median, not an expected value. In one place in the paper it is paraphrased as asking “by what year humanity is 50% likely to go extinct”.
If extinction caused by AI or value drift is somewhat unlikely, then extinction only happens once there is no more compute left in the universe, which might take a very long time. So “the year humanity is 50% likely to go extinct” could be 10^44 or something.
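One way to see how a low ongoing risk pushes the median far out: under the (simplifying, hypothetical) assumption of a constant annual extinction probability p, survival to year t is (1 − p)^t, so the median extinction year solves (1 − p)^t = 0.5, i.e. t = ln(2) / (−ln(1 − p)) ≈ ln(2)/p for small p.

```python
import math

def median_extinction_year(annual_p: float) -> float:
    """Median extinction year under a constant annual extinction
    probability (a deliberately simplified hazard model)."""
    return math.log(2) / -math.log1p(-annual_p)

# Illustrative hazards only: a 10^-44 annual risk puts the median
# survival time around 10^44 years.
for p in (1e-2, 1e-6, 1e-44):
    print(f"p = {p:.0e}: median ~ {median_extinction_year(p):.2e} years")
```

The point is that a median forecast, unlike an expected value, is entirely insensitive to how good the good half of outcomes is, but it is still exquisitely sensitive to how low the residual annual risk is.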