In my experience from reading Substack comments on posts by EA-skeptical authors, the belief that “EA wants you to care about AI safety because there’s a low probability of a really bad outcome” is extremely prevalent, and it causes EA/AI x-risk proponents to be viewed quite negatively (as if they were using argumentative Dark Arts or something similar).
I have a similar experience from reading Hacker News. It seems to me that the people writing those comments don’t really want to express an opinion on AI; rather, they are using the absurdity heuristic for an attack by association against EA. (Attacking EA is their goal; arguing that “AI apocalypse is unlikely” is merely a convenient weapon.)