What do we have left? Nukes and chemical weapons for killing people… seems doable. How does the AI establish control over manufacturing chains, once shit is going down? Maybe self-driving lorries, trains, and automated factories?
These are more stringent rules than I normally use for myself: I often talk about biological weapons, hacking, and drones, for example. One main route I can see is to emphasize automation and mechanization.
(This actually very much is the Terminator angle: the film makes it clear that most of the fighting is done by big tanks and helicopters, with the Terminators only being used for infiltration. Skynet was originally “hooked into everything” because it was trusted.)
If AI gets smarter than us, people will want to put it in charge of everything they can: factories, finance, even military operations. Unless we all agree not to do this, anyone who holds back will get left behind. Lots of these things are already automatic: factories have assembly lines that are mostly machines. Even if we don’t trust the AI, we’ll be forced to use it more and more, or be outcompeted by someone who does. Eventually, AI will control all of the important things in the world, and humans will no longer be important to it. It could easily use its factories to produce a bunch of cyanide and release it into the atmosphere.
This just looks like gradual disempowerment. Maybe throw in some AI-controlled tanks if you think you can. (I genuinely don’t know what the problem with using drones is; I think this is possibly an idiosyncrasy on your mum’s part, since everyone knows about Obama’s drone strikes in the Middle East.)
In any case, convincing someone in a single five-minute conversation over the phone is a high bar; we should rise to this challenge, and above it.
I think convincing someone in a five-minute conversation is actually a ridiculous pipe-dream scenario we will not get for most people.
I remember back when I was in school at the University of Toronto, some people would talk at length about how evil Jordan Peterson was. The only problem was that they’d literally never heard the man speak and had no idea what his positions even were. They only knew about him from what they’d heard other people say about him. This is how I expect most people will learn about AI extinction risk. Most people will hear a butchered, watered-down, strawman version of the AI extinction argument in a clip on Instagram or TikTok as they’re scrolling by, or hear a ridiculous caricature of the argument from a friend, followed by “isn’t that stupid? Don’t they know intelligence makes people more moral, so AI would be friendly?”
Almost nobody will actually hear the argument straight from someone who uses LessWrong or who is similarly “in the know”. Most of the nuance of the argument will be quickly lost, and what will be left will sound like, “did you know some idiots think AI will kill everyone by boiling the oceans?” In that case, having an argument that sounds implausible at first, but makes sense when you dig into the logic is way worse than having an argument that sounds plausible in the first place.
Of course, if someone is open to hearing a more detailed argument, that’s great. We don’t have to give up nuance; we just shouldn’t lead with it. Start with something that sounds plausible even to my mom, then be ready to back it up with all the nuance and logic for whoever wants more details.
I think you’re right that self-driving tanks and helicopters and stuff sound plausible. I guess drones don’t sound too bad if you’re using them to drop grenades. I think they start to sound sci-fi if you have them doing unusual things like spraying viruses over cities. They are kinda sci-fi coded in general though, I think. When it comes to the AI controlling manufacturing chains, I think robots are fine there. Or AI acquiring money and paying people. I just wouldn’t use robots with guns killing people, because that sounds like a movie.
In that case, having an argument that sounds implausible at first, but makes sense when you dig into the logic is way worse than having an argument that sounds plausible in the first place.
I agree with this. A little bit of appeal to “greed” or “recklessness” or “fear of getting left behind” can be useful as well, since it provides a layer of telos to the whole thing. “Society undone by greed and fear” feels more natural than “Society undone because the universe was just really mean with how hard AI alignment was.”
Likewise, putting human agency as the first thing in the story helps ground people. A lot of people have this belief (alief?) that humans have a magic first-mover spark which nothing else can produce (except sometimes random natural disasters I guess?).
Putting these together gets you “Humans are excitable and afraid, so of course we’d put AI in charge of a bunch of industrial and military processes.”
There’s also a framing jump which goes from “AI is a tool” to “AI kills us” which I’m currently working on. I want to pump an intuition that “tools” very occasionally let you push on parts of the world that you don’t understand, and that this is a really common way to get yourself killed. In this case, deep learning is a “tool” which pushes on intelligence itself, which we don’t understand at all. Loads of people have an intuition that “don’t play with things you don’t understand” is good advice. This is more aimed at certain middle-sophistication individuals (e.g. Bluesky types and TypeScript web devs) who are particularly resistant to existing ideas.
I think convincing someone in a five-minute conversation is actually a ridiculous pipe-dream scenario we will not get for most people.
Hmm. I don’t think it’s “ridiculous” because I don’t have a solid upper-bound on how persuasive it’s possible for a person to be. I’d rather not rule out things like this, and just keep working on better pedagogy until it is done.
I was referring to the fact that we won’t even get the opportunity to deliver five minutes of information to most people, not that it couldn’t be convincing if you had that opportunity.
I think a lot of what you say makes sense. Framing AI as human folly seems more believable.