Ah! I read that post, so it was probably partly shaping my response. I had been thinking about this since Tyler Cowen invoked “epistemic humility” as a reason not to worry much about AI x-risk. I think he assigns similar probabilities to all of the futures he can imagine, with human extinction being only one of many. But that’s succumbing to availability bias in a big way.
I agree with you that a 99% p(doom) estimate is not epistemically humble; it sounds hubristic and provokes negative reactions.