A desire to understand the arguments is admirable.
Wanting to actually be convinced that we are in fact doomed is a dereliction of duty.
Karl Popper wrote:

“Optimism is a duty. The future is open. It is not predetermined. No one can predict it, except by chance. We all contribute to determining it by what we do. We are all equally responsible for its success.”
Only those who believe success is possible will work to achieve it. This is what Popper meant by “optimism is a duty”.
We are not doomed. We do face danger, but with effort and attention we may yet survive.
I am not as smart as most of the people who read this blog, nor am I an AI expert. But I am older than almost all of you. I’ve seen other predictions of doom, sincerely believed by people as smart as you, come and go. Ideology. Nuclear war. Resource exhaustion. Overpopulation. Environmental destruction. Nanotechnological grey goo.
One of those may yet get us, but so far none has, which would surprise a lot of people I used to hang around with. As Edward Gibbon said, “however it may deserve respect for its usefulness and antiquity, [prediction of the end of the world] has not been found agreeable to experience.”
One thing I’ve learned with time: Everything is more complicated than it seems. And prediction is difficult, especially about the future.
Other people have addressed the truth/belief gap. I want to talk about existential risk.
We got EXTREMELY close to extinction with nukes, more than once. Launch orders in the Cold War were given and ignored or overridden three separate times that I’m aware of, and probably more. That risk has declined but is still present. The experts were 100% correct and their urgency and doomsday predictions were arguably one of the reasons we are not all dead.
The same is true of global warming, and again there is still some risk. We probably got extremely lucky in the last decade: we happened upon the right tech and strategies, and got decent funding to combat climate change, such that warming probably won't reach 3+ degrees. But that's still not a guarantee, and it doesn't mean the experts were wrong. It was an emergency, and it still is; the fact that we got lucky doesn't mean we shouldn't have paid very close attention.
The fact that we might survive this potential apocalypse too is not a reason to act like it is not a potential apocalypse. I agree that, empirically, humans have a decent record of avoiding extinction when a large number of scientific experts predict it. It's not a great record, we're like 4-0 depending on how you count, which is not many data points, but it's something. What we have learned from those experiences is that the loud and extreme actions of a small group of people who are fully convinced of the risk are sometimes enough to shift the inertia of a large society only vaguely aware of the risk, so that catastrophe is avoided by a hair's breadth. We might need to be that group.
I want to be convinced of the truth. If the truth is that we are doomed, I want to know that. If the truth is that fear of AGI is yet another false eschatology, then I want to know that as well. As such, I want to hear the best arguments that intelligent people make, for the position they believe to be true. This post is explicitly asking for those who are pessimistic to give their best arguments, and in the future, I will ask the opposite.
I fully expect the world to be complicated.
Fair enough. If you don’t have the time/desire/ability to look at the alignment problem arguments in detail, going by “so far, all doomsday predictions turned out false” is a good, cheap, first-glance heuristic. Of course, if you eventually manage to get into the specifics of AGI alignment, you should discard that heuristic and instead let the (more direct) evidence guide your judgement.
Speaking of predictions: there was an AI winter a few decades ago, when most predictions of rapid AI progress turned out completely wrong. But recently the opposite trend dominates: it's the predictions that downplay the progress of AI capabilities that turn out wrong. What does your model say you should conclude from that?
Your wise-sounding complacent platitudes likewise.
FWIW, I too am older than almost everyone else here. However, I do not cite my years as evidence of wisdom.
I don’t think that’s a fair assessment of what they said. They cite their years as evidence that they have witnessed multiple doomsday predictions that turned out wrong. That’s a fine point.
I witnessed them as well, and they don’t move my needle back on the dangers of AI. Referring to them is pure outside view, when what is needed here is the inside view: if everyone stops at the outside view, no one does the actual work.
Actually, I fully agree with that. I just have the impression that your choice of words suggested Dave was being lazy or not fully honest, and I would disagree with that. I think he’s honestly laying out his best arguments for what he truly believes.
I certainly wasn’t intending any implication of dishonesty. As for laziness, well, we all have our own priorities. Despite taking the AGI threat more seriously than Dave Lindbergh does, I am not actually doing any more about it than he is (presumably nothing), as I am at a loss for any practical ideas for addressing it.
FWIW, I didn’t say anything about how seriously I take the AGI threat—I just said we’re not doomed. Meaning we don’t all die in 100% of future worlds.
I didn’t exclude, say, 99%.
I do think AGI is seriously fucking dangerous and we need to be very very careful, and that the probability of it killing us all is high enough to be really worried about.
What I did try to say is that if someone wants to be convinced we’re doomed (== 100%), then they want to put themselves in a situation where they believe nothing anyone does can improve our chances. And that leads to apathy and worse chances.
So, a dereliction of duty.