The consensus notion is basically observational (based on my own social experience, and a cursory internet search revealing the average sentiment held by casual posters and journalists alike).
I would also wager that a sample of AI alignment researchers would on average find his predictions on AI risk (quoted above) to be prescient, especially considering the publication date.
Beyond that, I don’t think they’d have the impression that the parts about AI are insightful while the rest is all just deranged drivel, especially given that his discussion about AI risk is based on concepts and relations which he establishes earlier in the text.