Help me figure out how to recommend the book to normie friends.

In my experience, “normal” folks are often surprisingly open to these arguments, and I think the book is remarkably normal-person-friendly given its topic. I’d mainly recommend telling your friends what you actually think, and using practice to get better at it.
Context: One of the biggest bottlenecks on the world surviving, IMO, is the amount (and quality!) of society-wide discourse about ASI. As a consequence, I already thought one of the most useful things most people can do nowadays is to just raise the alarm with more people, and raise the bar on the quality of discourse about this topic. I’m treating the book as an important lever in that regard (and an important lever for other big bottlenecks, like informing the national security community in particular). Whether you have a large audience or just a network of friends you’re talking to, this is how snowballs get started.
If you’re just looking for text you can quote to get people interested, I’ve been using:
As the AI industry scrambles to build increasingly capable and general AI, two researchers speak out about a disaster on the horizon.
In 2023, hundreds of AI scientists and leaders in the field, including the three most cited living AI scientists, signed an open letter warning that AI poses a serious risk of causing human extinction. Today, however, the AI race is only heating up. Tech CEOs are setting their sights on smarter-than-human AI. If they succeed, the world is radically unprepared for what comes next.
In this book, Eliezer Yudkowsky and Nate Soares explain the nature of the threat posed by smarter-than-human AI. In a conflict between humans and AI, a superintelligence would win, as easily as modern chess AIs crush the world’s best humans at chess. The conflict would not be close, or even especially interesting.
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.
Stephen Fry’s blurb from Nate’s post above might also be helpful here:
The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster. Their brilliant gift for analogy, metaphor and parable clarifies for the general reader the tangled complexities of AI engineering, cognition and neuroscience better than any book on the subject I’ve ever read, and I’ve waded through scores of them. We really must rub our eyes and wake the **** up!
If your friends are looking for additional social proof that this is a serious issue, you could cite things like the Secretary-General of the United Nations:
Alarm bells over the latest form of artificial intelligence, generative AI, are deafening. And they are loudest from the developers who designed it. These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on par with the risk of nuclear war. We must take those warnings seriously.
(This is me spitballing ideas; if a bunch of LWers take a crack at figuring out useful things to say, I expect at least some people to have better ideas.)
One notable difficulty with talking to ordinary people about this stuff is that often, you lay out the basic case and people go “That’s neat. Hey, how about that weather?” There’s a missing mood, a sense that the person listening didn’t grok the implications of what they’re hearing. Now, of course, maybe they just don’t believe you or think you’re spouting nonsense. But in those cases, I’d expect more resistance: more objections, or even responses like “that’s crazy”, not bland acceptance.
I kinda think that people are correct to do this, given the normal epistemic environment. My model is this: everyone is pretty frequently bombarded with wild arguments and beliefs that have crazy implications (conspiracy theories, political claims, spiritual claims, get-rich-quick schemes, scientific discoveries, news headlines, mental health and wellness claims, alternative medicine, claims about which lifestyles are better). We don’t often have the time (or the expertise, skill, or sometimes intelligence) to evaluate them properly. So we usually keep track of a bunch of these beliefs and arguments, and talk about them, but require nearby social proof before attaching them to actions and emotions. Rationalists (and the more culty religions, many activist groups, etc.) are extreme in how much they change their everyday lives based on their beliefs.
I think it’s probably okay to let people maintain this detachment? Maybe even better, because it avoids activating antibodies. It’s (usefully) something that’s hard to change with argument. It will plausibly fix itself later, if there ever comes a time when their friends are voting or protesting or something.
I recently told my dad that I wasn’t trying to save for retirement. This horrified him far more than when I had previously told him that I didn’t expect anyone to survive the next couple of decades. The contrast was funny.
I had a similar experience: my dad seemed unbothered when I said AI might take over the world, but on another day I mentioned in passing that I don’t know how many years it will be until AI is a better software engineer than humans, though 5-10 years doesn’t sound strictly impossible. My father, being a software engineer, found that claim more interesting (he was visibly upset about his job security). I notice I’ve kind of downplayed the retirement thing to my parents, because I implicitly sensed they might call me insane; but thinking about it explicitly, it might be more effective to communicate what’s actually at stake.
You could also try sending your friends an online AI risk explainer, e.g., MIRI’s The Problem, Ian Hogarth’s We Must Slow Down the Race to God-Like AI (requires Financial Times access), or Gabriel Alfour’s Preventing Extinction from Superintelligence.

There are also AIsafety.info’s shorter-form and long-form explainers.
I don’t know if you’ll find it helpful, but you inspired me to write up and share a post I plan to make on Facebook.