Extinction-level AI x-risk is a plausible theoretical model with no empirical data for or against it.
I think there’s plenty of empirical data, but there’s disagreement over what counts as relevant evidence and how it should be interpreted. (E.g. Hanson and Yudkowsky both cited a number of different empirical observations in support of their respective positions, back during their debate.)
Right. I’d think the only “empirical evidence” that counts is evidence both sides accept as good, and I cannot think of any good examples.
Right. So a really epistemically humble estimate would put the extinction risk at 50%. I realize this is arguable, and I think you can bring a lot of relevant indirect evidence to bear. But the claim that it’s epistemically humble to estimate a low risk seems very wrong to me.
I agree that neither a very low nor a very high estimate of extinction risk from AI is epistemically humble. I asked a question about it: https://www.lesswrong.com/posts/R6kGYF7oifPzo6TGu/how-can-one-rationally-have-very-high-or-very-low
Ah! I read that post, so it was probably partly shaping my response. I had been thinking about this since Tyler Cowen invoked “epistemic humility” as a reason not to worry much about AI x-risk. I think he’s applying similar probabilities to all of the futures he can imagine, with human extinction being only one of many. But that’s succumbing to availability bias in a big way.
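To put the partition problem concretely (a rough sketch, with $N$ standing in for however many futures he happens to enumerate): the principle of indifference over the binary partition $\{\text{extinction}, \text{no extinction}\}$ gives

$$P(\text{extinction}) = \frac{1}{2},$$

while indifference over $N$ imagined futures, of which extinction is one, gives

$$P(\text{extinction}) = \frac{1}{N}.$$

Same “humble” rule, different carving of the outcome space, and “the futures one can imagine” is exactly the carving that availability bias distorts.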
I agree with you that a 99% p(doom) estimate is not epistemically humble, and I think it sounds hubristic and causes negative reactions.