I think convincing someone in a five-minute conversation is actually a ridiculous pipe-dream scenario that we will not get for most people.
I remember back when I was in school at the University of Toronto, some people would talk at length about how evil Jordan Peterson was. The only problem was that they’d literally never heard the man speak and had no idea what his positions even were. They only knew about him from what they’d heard other people say about him. This is how I expect most people will learn about AI extinction risk. Most people will hear a butchered, watered-down, strawman version of the AI extinction argument in a clip on Instagram or TikTok as they’re scrolling by, or hear a ridiculous caricature of the argument from a friend, followed by “isn’t that stupid? Don’t they know intelligence makes people more moral, so AI would be friendly?”
Almost nobody will actually hear the argument straight from someone who uses LessWrong or who is similarly “in the know”. Most of the nuance of the argument will be quickly lost, and what will be left will sound like, “did you know some idiots think AI will kill everyone by boiling the oceans?” In that case, having an argument that sounds implausible at first, but makes sense when you dig into the logic is way worse than having an argument that sounds plausible in the first place.
Of course, if someone is open to hearing a more detailed argument, that’s great. We don’t have to give up nuance; we just shouldn’t lead with it. Start with something that sounds plausible even to my mom, then be ready to back it up with all the nuance and logic for whoever wants more details.
I think you’re right that self-driving tanks and helicopters and stuff sound plausible. I guess drones don’t sound too bad if you’re using them to drop grenades. I think they start to sound sci-fi if you have them doing unusual things like spraying viruses over cities. They are kinda sci-fi-coded in general though, I think. When it comes to AI controlling manufacturing chains, I think robots are fine there. Or AI acquiring money and paying people. I just wouldn’t use robots with guns killing people, because that sounds like a movie.
In that case, having an argument that sounds implausible at first, but makes sense when you dig into the logic is way worse than having an argument that sounds plausible in the first place.
I agree with this. A little bit of appeal to “greed” or “recklessness” or “fear of getting left behind” can be useful as well, since it provides a layer of telos to the whole thing. “Society undone by greed and fear” feels more natural than “Society undone because the universe was just really mean with how hard AI alignment was.”
Likewise, putting human agency as the first thing in the story helps ground people. A lot of people have this belief (alief?) that humans have a magic first-mover spark which nothing else can produce (except sometimes random natural disasters I guess?).
Putting these together gets you “Humans are excitable and afraid, so of course we’d put AI in charge of a bunch of industrial and military processes.”
There’s also a framing jump which goes from “AI is a tool” to “AI kills us” which I’m currently working on. I want to pump an intuition that “tools” very occasionally let you push on parts of the world that you don’t understand, and that this is a really common way to get yourself killed. In this case, deep learning is a “tool” which pushes on intelligence itself, which we don’t understand at all. Loads of people have an intuition that “don’t play with things you don’t understand” is good advice. This is more aimed at certain middle-sophistication individuals (e.g. Bluesky types and TypeScript web devs) who are particularly resistant to existing ideas.
I think convincing someone in a five-minute conversation is actually a ridiculous pipe-dream scenario that we will not get for most people.
Hmm. I don’t think it’s “ridiculous” because I don’t have a solid upper-bound on how persuasive it’s possible for a person to be. I’d rather not rule out things like this, and just keep working on better pedagogy until it is done.
I was referring to the fact that we won’t even get the opportunity to deliver five minutes of information to most people, not that it couldn’t be convincing if you had that opportunity.
I think a lot of what you say makes sense. Framing AI as human folly seems more believable.