Schneier is also quite skeptical of the risk of extinction from AI. Here’s a table o3 generated just now when I asked it for some examples.
| Date | Where he said it | What he said | Take-away |
|---|---|---|---|
| 1 June 2023 | Blog post "On the Catastrophic Risk of AI" (written two days after he signed the CAIS one-sentence "extinction risk" statement) | "I actually don't think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war — a risk worth taking seriously, but not something to panic over." (schneier.com) | Explicitly rejects the "extinction" scenario, placing AI in the same (still-serious) bucket as pandemics or nukes. |
| 1 June 2023 | Same post, quoting his 2018 book *Click Here to Kill Everybody* | "I am less worried about AI; I regard fear of AI more as a mirror of our own society than as a harbinger of the future." (schneier.com) | Long-standing view: most dangers come from how humans use technology we already have. |
| 9 Oct 2023 | Essay "AI Risks" (New York Times, reposted on his blog) | Warns against "doomsayers" who promote "Hollywood nightmare scenarios" and urges that we "not let apocalyptic prognostications overwhelm us." (schneier.com) | Skeptical of the extinction narrative; argues policy attention should stay on present-day harms and power imbalances. |
Agreed. As a long-time reader of Schneier's blog, I was quite surprised by his endorsement, and I would have cited exactly those two essays. He's written many times about bad things that humans might intentionally use AI to do — AI propaganda, AI-powered legal hacks, AI spam clogging requests for public comment — but I would have described him as scornful of concerns about x-risk or alignment.