I assume “Bio, Nano, AI” to mean “any global existential threats brought on by human technology”, which is a big disjunction with plenty of unknown unknowns, and we already have one example (nuclear weapons) that could not have plausibly been predicted 50 years beforehand. Even if you discount the probabilities of hard AI takeoff or nanotech development, you’d have to have a lot of evidence in order to put such a small probability on any technological development of the next hundred years threatening global extinction.
As someone who does largely discount the threats mentioned (I believe that the operationally significant probability for foom/grey goo is of order 10^-3/10^-5, and the best-guess probability is of order 10^-7/10^-7), I still endorse the logic above.
Er, maybe I was being unclear. Even if you discount a few specific scenarios, where do you get the strong evidence that no other technological existential risk with probability bigger than .001 will arise in the next hundred years, given that forecasters a century ago would have completely missed the existential risk from nuclear weapons?
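To put rough numbers on the disjunction (these are purely illustrative figures, not estimates): if there were ten independent candidate technologies, each carrying only a 1% chance of producing an existential threat, the chance that at least one of them does is 1 − (1 − 0.01)^10 ≈ 0.096. Even with each individual probability that small, the disjunction sits roughly two orders of magnitude above .001, and the unknown unknowns only widen it.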
I agree that cataloging near-earth objects is obviously worth a much bigger investment than it currently receives, but I think that an even bigger need exists for a well-funded group of scientists from various fields to consider such technological existential risks.