Commenting on the first myth: Yudkowsky himself seems pretty sure of this, judging from his comment here: http://econlog.econlib.org/archives/2016/03/so_far_my_respo.html. I know Yudkowsky's post was written after this LessWrong article, but it still seems relevant to mention.
He is a bit overconfident in that regard, I agree.
Agreed, especially when compared to http://www.fhi.ox.ac.uk/gcr-report.pdf.
Although, now that I think about it, this survey is about risks before 2100, so the 5% risk estimate for superintelligent AI might be that low because some of the respondents believe such AI will not arrive before 2100. Still, it seems in sharp contrast with Yudkowsky's estimate.