A similar argument could be made about the second line,
Oh, I saw this, too, but since the second line is conditional on the first, if you weaken the first, both are weakened.
I feel a little shitty being like ‘trust me about what the book says’, but… please trust me about what the book says! There’s just not much in there about timelines. Even from the book website (and the title of the book!), the central claim opens with a conditional:
If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.
There’s much more uncertainty (at MIRI and in general) as to when ASI will be developed than as to what will happen once it is developed. We all have our own takes on timelines, some more bearish, some more bullish, all with long tails (afaik, although obviously I don’t speak for any specific other person).
If you build it, though, everyone dies.
There’s a broad strategic call here that goes something like:
All of our claims will be perceived as having a similar level of confidence by the general public (this is especially true in low-fidelity communications like an advertisement).
If one part of the story is falsified, the public will consider the whole story falsified.
We are in fact much more certain about what will happen than when.
We should focus on the unknowability of the when, the certainty of the what, and the unacceptability of that conjunction.
This is a gloss of the real thing, which is more nuanced; of course, if someone asks us when we expect it to happen, or if there’s space for us to gesture at the uncertainty, the MIRI line is often “if it doesn’t happen in 20 years (conditional on no halt), we’d be pretty surprised” (although individual people may say something different).
I think some of the reception to AI 2027 (e.g. in YouTube comments and the like) has given evidence that emphasizing timelines too much can result in backlash, even if you prominently flag your uncertainty about the timelines! This is an important failure mode: if all of the ecosystem’s comms err on the side of overconfidence about timelines, it will burn a lot of our credibility in 2028. (Yes, I know that AI 2027 isn’t literally ‘ASI in 2027’, but I’m trying to highlight that most people who’ve heard of AI 2027 don’t know that, and that’s the problem!)
(some meta: I am one of the handful of people who will be making calls about the ads, so I’m trying to offer feedback to help improve submissions, not just arguing a point for the sake of it or nitpicking)
Thanks again! My drafts are of course just ideas, so they can easily be adapted. However, I still think it is a good idea to create a sense of urgency, both in the ad and in books about AI safety. If you want people to act, even if it’s just buying a book, you need to create that urgency. It’s not enough to say “you should read this”; you need to say “you should read this now” and give a reason for that. In marketing, this is usually done with some kind of time constraint (20% off, only this week …).
This is even more true if you want someone to take measures against something that, in the minds of most people, is still “science fiction” or even “just hype”. Of course, just claiming that something will happen “soon” is not very strong, but it may at least raise a question (“Why do they say this?”).
I’m not saying that you should give any specific timeline, and I fully agree with the MIRI view. However, if we want to prevent superintelligent AI and we don’t know how much time we have left, we can’t just sit around and wait until we know when it will arrive. For this reason, I have dedicated a whole chapter to timelines in my own German-language book about AI existential risk and also included the AI-2027 scenario as one possible path. The point I make in my book is not that it will happen soon, but that we can’t know it won’t happen soon, and that there are good reasons to believe we don’t have much time. I use my own experience with AI since my Ph.D. on expert systems in 1988, and Yoshua Bengio’s blog post about his change of mind, as examples of how fast and surprising progress has been, even for someone familiar with the field.
I see your point about how a weak claim can water down the whole story. But if I could choose between 100 people convinced that ASI would kill us all, but with no sense of urgency, and 50 or even 20 who believe both the danger and that we must act immediately, I’d choose the latter.
However, I still think it is a good idea to create a sense of urgency, both in the ad and in books about AI safety.
Personally, I would rather stake my chips on ‘important’ and let ‘urgent’ handle itself. The title of the book is a narrow claim (if anyone builds it, everyone dies), with the clarifying details conveniently swept into the ‘it’. Adding more inferential steps makes the claim more challenging to convey clearly and more challenging to hear (since each step could lose some of the audience).
There are some further, more complicated arguments about urgency (you don’t want to have gone too far out on a limb saying it’s close, because of the costs if it turns out to be far), but the argument I most want to make is about specialization of labor: it’s good that the AI 2027 people, who are focused on forecasting, are making forecasting claims, and good that MIRI, who are focused on alignment, are making claims about alignment difficulty and stakes.
I see your point about how a weak claim can water down the whole story. But if I could choose between 100 people convinced that ASI would kill us all, but with no sense of urgency, and 50 or even 20 who believe both the danger and that we must act immediately, I’d choose the latter.
Hmm, I think I might agree with this value tradeoff, but I don’t think I agree with the underlying prediction of what the world is offering us.
I also think MIRI has tried for a while to recruit people who can make progress on alignment and thought it was important to start work now, and the current push is on trying to get broad attention and support. The people writing blurbs for the book are just saying “yes, this is a serious book and a serious concern” and not signing on to “and it might happen in two years” (though probably some of them also believe that), and I think that gives enough cover for the people who are acting on two-year timelines to operate.