[Link-post] On Deference and Yudkowsky’s AI Risk Estimates

This is a link-post to a piece I just posted to the EA Forum, discussing negative aspects of Eliezer Yudkowsky's forecasting track record. If it receives significant discussion here, you may also want to look at the comments on the Forum (e.g. Gwern just posted a useful critical comment there).