While I agree at a basic level, this also seems like a motte-and-bailey.
There is clearly a vibe that all doomers have obviously always been wrong. The author is clearly trying to push back against that vibe. I too prefer arguing at ‘motte’ level, but vibes (baileys) matter, and pushing back against one should not require a long airtight argument that stands up to the stronger version of the claims being made. Even though I agree the stronger version would be better, that’s true for both sides of any debate.
I sort of see your argument here, but similarly, on the level of vibes, associating the AI-risk concepts with other doom predictions feels like it does more harm than good to me. The vibe that doomers are always wrong doesn't feel countered by cherry-picking examples of smaller predicted harms, because (as illustrated in the comment) the body of doom predictions is much larger than the few that contained nuggets of foresight.