I sort of see your argument here, but, likewise going just on vibes, associating AI-risk concepts with other doom predictions feels like it does more harm than good to me. The vibe that doomers are always wrong doesn't feel countered by cherry-picking examples of smaller predicted harms, because (as the comment illustrates) the body of doom predictions is much larger than the subset containing nuggets of foresight.