To clarify the argument for the people reacting: the linked post accuses the EA movement of wilful institutional stupidity regarding AI timelines. Eliezer has expressed this belief (that EA timeline beliefs were an example of their motivated reasoning and relative untrustworthiness) in other places as well, and to a group of people I was with at Manifest. However, if even Ilya Sutskever took biological anchors seriously, that is some evidence that the EAs were making good-faith inferences from the limited capability evidence available at the time, rather than that mistake in particular being indicative of systemic rot in EA institutions.
Note that this is a separate question from whether Open Philanthropy (in concert with the vast majority of the public, commodities traders, surveyed AI researchers, etc.) had timelines that were too long in 2020, whether it was actually dumb to base AI timelines on biological anchors, or whether humanity depended on EAs to get this question right despite its apparent difficulty.
This misunderstands my point, which I clarify in this comment: https://www.lesswrong.com/posts/awcwmAjNwJazdbHrz/nightsky81-s-shortform?commentId=S9Wa5XCYvbHJLFmji. I'm not saying that Eliezer was unjustified in attempting to correct EA timeline beliefs, but that EAs' views were probably held in good faith.