I belong to a private Discord full of geeky friends (as one does), and I constantly see this pattern you describe, where smart people dismiss AI risks because they dismiss AI capabilities. This takes several common forms:
Pattern-matching AI hype to crypto hype. Partly, this happens because some of the same arguably sociopathic VC scam artists are up to their elbows in both. "Arguably sociopathic scam artists" is sometimes a judgement based on people's prior social-graph proximity to the VCs in question.
An odd argument that “Of course the frontier labs say that their product has a 25% chance of causing human extinction. It’s a good sales pitch!” For me, this feels like a combination of a genuinely shrewd observation and a total failure to notice the giant pink elephant in front of them?
Focusing on the 20% of the time where AI fails at something trivial, and ignoring the 80% of the time where the dancing dog just pulled off 32 clean fouettés in a row. Like, the 80% of the time where Sonnet 4.5 nails it is the warning. When it stops failing the other 20% of the time, that’s potentially game over for the human race, you know?
A tendency to occasionally play with an AI model and then cache the worst experience out of 5 for about 12 months.
An overexposure to slop and to Google’s awful search AI.
The only way I’ve found to occasionally get someone over this hump is to give them a tool like Claude Code and let them feel the AI.
But the problem is, once you convince someone that AGI might happen, a disturbing number of people fail to think through the consequences of actually building it. Which is perhaps why so many AI safety researchers have done so much to accelerate AI capabilities: once they believe in their bones that it's possible, they almost inevitably want to build it.
So I constantly struggle with whether it actually helps to convince people of existing or future AI capabilities.