Thank you for this response. I think it really helped me understand where you’re coming from, and it makes me happy. :)
I really like the line “their case is maybe plausible without it, but I just can’t see the argument that it’s certain.” I actually agree that IABIED fails to provide an argument that it’s certain that we’ll die if we build superintelligence. Predictions are hard, and even though I agree that some predictions are easier, there’s a lot of complexity and path-dependence and so on! My hope is that the book persuades people that ASI is extremely dangerous and worth taking action on, but I’d definitely raise an eyebrow at someone who did not have Eliezer-level confidence going in, but then did have that level of confidence after reading the book.
There’s a motte argument that says “Um actually the book just says we’ll die if we build ASI given the alignment techniques we currently have” but this is dumb. What matters is whether our future alignment skill will be up to the task. And to my understanding, Nate and Eliezer both think that there’s a future version of Earth which has smarter, more knowledgeable, more serious people that can and should build safe/aligned ASI. Knowing that a godlike superintelligence with misaligned goals will squish you might be an easy call, but knowing exactly what the state of alignment science will be when ASI is first built is not.
(This is why it’s important that the world invests a whole bunch more in alignment research! (...in addition to trying to slow down capabilities research.))
It seems like maybe part of the issue is that you hear Nate and Eliezer as saying “here is the argument for why it’s obvious that ASI will kill us all” and I hear them as saying “here is the argument for why ASI will kill us all” and so you’re docking them points when they fail to reach the high standard of “this is a watertight and irrefutable proof” and I’m not?
On a different subtopic, it seems clear to me that we think about the possibility of a misaligned ASI taking over the world pretty differently. My guess is that if we wanted to focus on syncing up our worldviews, that is where the juicy double-cruxes are. I’m not suggesting that we spend the time to actually do that—just noting the gap.

Thanks again for the response!
> It seems like maybe part of the issue is that you hear Nate and Eliezer as saying “here is the argument for why it’s obvious that ASI will kill us all” and I hear them as saying “here is the argument for why ASI will kill us all” and so you’re docking them points when they fail to reach the high standard of “this is a watertight and irrefutable proof” and I’m not?
fwiw I think Eliezer/Nate are saying “it’s obvious, unless we were to learn new surprising information” and deliberately not saying “it has a watertight proof”, and part of the disagreement here is “have they risen to the standard of ‘fairly obvious call, unless we learn new surprising information’?”
(with the added wrinkle of many people incorrectly thinking LLM era observations count as new information that changes the call)
I’m really glad this was clarifying!

> It seems like maybe part of the issue is that you hear Nate and Eliezer as saying “here is the argument for why it’s obvious that ASI will kill us all” and I hear them as saying “here is the argument for why ASI will kill us all” and so you’re docking them points when they fail to reach the high standard of “this is a watertight and irrefutable proof” and I’m not?
Yeah, for sure. I would maybe quibble that I think the book is saying less that it’s obvious that ASI will kill us all and more that it is inevitable that ASI will kill us all, and so our only option is to make sure nobody builds it. I do think this is a pretty fair gloss (representative quote: “If anyone anywhere builds superintelligence, everyone everywhere dies”).
To me, this distinction matters because the belief that ASI doom is inevitable suggests a profoundly different set of possible actions than the belief that ASI doom is merely possible. Once we’re out of the realm of certainty, we have to start doing risk analyses and thinking seriously about how the existence of future advanced AIs changes the picture. I really like the distinction you draw here:
> There’s a motte argument that says “Um actually the book just says we’ll die if we build ASI given the alignment techniques we currently have” but this is dumb. What matters is whether our future alignment skill will be up to the task. And to my understanding, Nate and Eliezer both think that there’s a future version of Earth which has smarter, more knowledgeable, more serious people that can and should build safe/aligned ASI. Knowing that a godlike superintelligence with misaligned goals will squish you might be an easy call, but knowing exactly what the state of alignment science will be when ASI is first built is not.
To its credit, IABIED is not saying that we’ll die if we build ASI with current alignment techniques – it is trying to argue that future alignment techniques won’t be adequate, because the problem is just too hard. And this is where I think they could have done a much better job of addressing the kinds of debates that people who actually do this work are having, instead of presenting fairly shallow counter-arguments and dismissing them out of hand for not sounding like they take the problem seriously.
My issue isn’t purely the level of confidence; it’s that the level of confidence comes out of a very specific set of beliefs about how the future will develop, and if any one of those beliefs is wrong, less confidence would be appropriate. So it’s disappointing to me that those beliefs aren’t clearly articulated or defended.
> I think the book is saying less that it’s obvious that ASI will kill us all and more that it is inevitable that ASI will kill us all, and so our only option is to make sure nobody builds it. I do think this is a pretty fair gloss
Crucial caveat: this is conditional on building it soon, rather than after preparing to an unprecedented degree first. You’re probably tracking this, but stated like that, someone without context might take the intended meaning to be the unconditional, inevitable lethality of ASI, which is very different. The claim is that our only option is for nobody to build it soon, not for nobody to build it ever.
> it is trying to argue that future alignment techniques won’t be adequate, because the problem is just too hard
This is still about future alignment techniques that can become available soon. Reasonable counterarguments to the inevitability of ASI-caused extinction or takeover, if it’s created soon, seem to be mostly about AGIs developing meaningfully useful alignment techniques soon enough (and if not soon enough, an ASI Pause of some kind would help, but then AGIs themselves are almost as big a problem).
> Knowing that a godlike superintelligence with misaligned goals will squish you might be an easy call, but knowing exactly what the state of alignment science will be when ASI is first built is not.
Hmm, I feel more on the Eliezer/Nate side of this one. I think it’s a medium call that capabilities science advances faster than alignment science, and so we’re not on track without drastic change. (Like, the main counterargument is a negative alignment tax, which I do take seriously as a possibility, but which I think probably doesn’t close the gap.)