Just to help people understand the context: The book really doesn’t say that doom is inevitable. It goes out of its way, like four times, to say the opposite. I really don’t have a good explanation of Zach’s comment that doesn’t involve him not having read the book and nevertheless making a tweet thread about it with a confidently wrong take. The above really reads to me as if he workshopped some random LinkedIn-ish platitudes about the book to seem like a moderate and be popular on social media, without having engaged with the substance at all.
The book certainly claims that doom is not inevitable, but it does claim that doom is ~inevitable if anyone builds ASI using anything remotely like the current methods.
I understand Zach (and other “moderates”) as saying no, even conditioned on basically YOLO-ing the current paradigm to superintelligence, it’s really uncertain (and less likely than not) that the resulting ASI would kill everyone.
I disagree with this position, but if I held it, I would be saying somewhat similar things to Zach (even having read the book).
Though I agree that engaging on the object level (beyond “predictions are hard”) would be good.
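To make the conditional / unconditional distinction above concrete: “doom is ~inevitable if anyone builds ASI with anything like current methods” and “doom is not inevitable, full stop” are compatible claims. Here is a minimal sketch with entirely hypothetical numbers; none of these are the book’s, Zach’s, or anyone else’s actual estimates.

```python
# Purely illustrative: conditional vs. unconditional risk.
# All probabilities are hypothetical placeholders, not anyone's real estimates.
# (Assumes, for simplicity, no doom if ASI is not built.)

p_build = 0.6              # hypothetical: chance anyone builds ASI with ~current methods
p_doom_given_build = 0.95  # hypothetical: a near-certain *conditional* claim
p_doom = p_build * p_doom_given_build

print(f"P(doom | build) = {p_doom_given_build:.2f}")  # ~inevitable, but only conditionally
print(f"P(doom)         = {p_doom:.2f}")              # 0.57: high, yet not "inevitable"
```

The dispute in this thread is partly over the size of `p_doom_given_build` and partly over whether the book is being read as making the unconditional claim.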
My guess is that they’re doing the motte-and-bailey of “make it seem to people who haven’t read the book that it says ASI extinction is inevitable, and that the book is just spreading doom and gloom”, from which, if challenged, they could retreat to “no, I meant doom isn’t inevitable even if we do build ASI using the current methods”.
Like, if someone means the latter (and has also read the book and knows that it goes to great lengths to clarify that we can avoid extinction), would they really phrase it as “doom is inevitable”, as opposed to e.g. “safe ASI is impossible”?
Or maybe they haven’t put that much thought into it and are just sloppy with language.
> I disagree with this position, but if I held it, I would be saying somewhat similar things to Zach (even having read the book).
I wouldn’t. I roughly agree with Zach’s background position (i.e. I’m quite uncertain about the likelihood of extinction conditional on YOLO-ing the current paradigm*) but I still think his conclusions are wild. Quoting Zach:
> First, it leaves room for AI’s transformative benefits. Tech has doubled life expectancy, slashed extreme poverty, and eliminated diseases over the past two centuries. AI could accelerate these trends dramatically.
The tradeoff isn’t between solving scarcity at a high risk of extinction vs. never getting either of those things. It’s between solving scarcity now at a high risk of extinction, vs. solving scarcity later at a much lower risk.
> Second, focusing exclusively on extinction scenarios blinds us to other serious AI risks: authoritarian power grabs, democratic disruption through misinformation, mass surveillance, economic displacement, new forms of inequity. These deserve attention too.
Slowing down / pausing AI development gives us more time to work on all of those problems. Racing to build ASI means not only are we risking extinction from misalignment, but we’re also facing a high risk of outcomes such as, for example, ASI being developed so quickly that governments don’t have time to get a handle on what’s happening and we end up with Sam Altman as permanent world dictator. (I don’t think that particular outcome is that likely, it’s just an example.)
*although I think my conditional P(doom) is considerably higher than his
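For what it’s worth, the “solve scarcity now at high risk vs. later at much lower risk” framing above can be written as a bare expected-value comparison. Every number below is a made-up placeholder, used only to show the shape of the argument:

```python
# Illustrative expected-value comparison of the two paths described above.
# All numbers are hypothetical; only the structure of the tradeoff matters.

def expected_value(p_doom: float, value_if_ok: float, value_if_doom: float = 0.0) -> float:
    """Expected value of a path, given its probability of doom."""
    return (1 - p_doom) * value_if_ok + p_doom * value_if_doom

race_now    = expected_value(p_doom=0.50, value_if_ok=1.0)  # benefits sooner, high risk
pause_first = expected_value(p_doom=0.05, value_if_ok=0.8)  # benefits later, much lower risk

print(f"race now:    {race_now:.2f}")     # 0.50
print(f"pause first: {pause_first:.2f}")  # 0.76 -- the delay costs less than the risk
```

The conclusion obviously flips if pausing buys little risk reduction or if delay is very costly; the sketch only makes explicit what “high risk now vs. lower risk later” is trading off.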
> Slowing down / pausing AI development gives us more time to work on all of those problems. Racing to build ASI means not only are we risking extinction from misalignment, but we’re also facing a high risk of outcomes such as, for example, ASI being developed so quickly that governments don’t have time to get a handle on what’s happening and we end up with Sam Altman as permanent world dictator.
This depends on what mechanism is used to pause. MIRI is proposing, among other things, draconian control over the worldwide compute supply. Whoever has such control has a huge amount of power to leverage over a transformative technology, which at least possibly (and, to me, very likely) increases the risk of getting a permanent world dictator, although the dictator in that scenario is perhaps more likely to be a head of state than the head of an AI lab.
Unfortunately, this means that there is no low-risk path into the future, so I don’t think the tradeoff is as straightforward as you describe:
> The tradeoff isn’t between solving scarcity at a high risk of extinction vs. never getting either of those things. It’s between solving scarcity now at a high risk of extinction, vs. solving scarcity later at a much lower risk.
My preferred mechanism, and I think MIRI’s, would be an international treaty in which every country implements AI restrictions within its own borders. That means a head of state can’t build dangerous AI without risking war. It’s analogous to nuclear non-proliferation treaties.
I don’t think I would call it low risk, but my guess is it’s less risky than the default path of “let anyone build ASI with no regulations”.
> My preferred mechanism, and I think MIRI’s, would be an international treaty in which every country implements AI restrictions within its own borders. That means a head of state can’t build dangerous AI without risking war. It’s analogous to nuclear non-proliferation treaties.
The control required within each country to enforce such a ban breaks the analogy to nuclear non-proliferation.
Uranium is an input to a general purpose technology (electricity), but it is not a general purpose technology itself, so it is possible to control its enrichment without imposing authoritarian controls on every person and industry in their use of electricity. By contrast, AI chips are themselves a general purpose technology, and exerting the proposed degree of control would entail draconian limits on every person and industry in society.
The relevant way in which it’s analogous is that a head of state can’t build [dangerous AI / nuclear weapons] without risking war (or sanctions, etc.).
> The relevant way in which it’s analogous is that a head of state can’t build [dangerous AI / nuclear weapons] without risking war (or sanctions, etc.).
Fair enough, but China and the US are not going to risk war over that unless they believe doom is anywhere close to as certain as Eliezer believes it to be. And they are not going to believe that, in part because that level of certainty is not justified by any argument that anyone, including Eliezer, has provided. And even if I am wrong on the inside view / object level to say that, there is enough disagreement about that claim among AI existential risk researchers that the outside view of a national government is unlikely to fully adopt Eliezer’s outlier viewpoint as its own.
But in return, we now have the tools of authoritarian control implemented within each participating country. And this is even if they don’t use their control over the computing supply to build powerful AI solely for themselves. Just the regime required to enforce such control would entail draconian invasions into the lives of every person and industry.
I highly doubt you would say something as false as “doom being inevitable” without qualifiers!
Like, sure, maybe this is just really terrible miscommunication, but that itself also seems kind of crazy. Like, the above thread mentions no conditional. It does not say that “doom is inevitable if we build ASI”, or anything like that. It just claims that Nate + Eliezer say that “doom is inevitable”, no qualifiers.
I do think there’s some amount of “these guys are weirdo extremists” signaling implicit in stating that they think doom is inevitable, but I don’t think it stems from not reading the book / not understanding the conditional (the conditional is in the title!)
Yeah, it goes out of its way to say the opposite, but if you know Nate and Eliezer, the book gives the impression that their P(doom)s are still extremely high, and responding to the authors’ beliefs even when those aren’t exactly the same as the text is sometimes correct, although not really in this case.
Eliezer did write Death with Dignity, which seems to assert that doom is inevitable, so the book not making that case is a meaningful step.