Ah! I read that post, so it was probably partly shaping my response. I had been thinking about this since Tyler Cowen invoked "epistemic humility" as a reason not to worry much about AI x-risk. I think he assigns similar probabilities to all of the futures he can imagine, with human extinction being only one of many. But that's succumbing to availability bias in a big way.
I agree with you that a 99% p(doom) estimate is not epistemically humble, and I think it sounds hubristic and causes negative reactions.
I agree that either a very low or a very high estimate of extinction due to AI is not, in fact, epistemically humble. I asked a question about it. https://www.lesswrong.com/posts/R6kGYF7oifPzo6TGu/how-can-one-rationally-have-very-high-or-very-low