Thanks again! My drafts are of course just ideas, so they can easily be adapted. However, I still think it is a good idea to create a sense of urgency, both in the ad and in books about AI safety. If you want people to act, even if it’s just buying a book, you need exactly that sense of urgency. It’s not enough to say “you should read this”; you need to say “you should read this now” and give a reason for it. In marketing, this is usually done with some kind of time constraint (20% off, only this week …).
This is even more true if you want someone to take measures against something that, in the minds of most people, is still “science fiction” or even “just hype”. Of course, just claiming that something is coming “soon” is not a very strong argument, but it may at least raise a question (“Why do they say this?”).
I’m not saying that you should give any specific timeline, and I fully agree with the MIRI view. However, if we want to prevent superintelligent AI and we don’t know how much time we have left, we can’t just sit around and wait until we know when it will arrive. For this reason, I have dedicated a whole chapter to timelines in my own German-language book about AI existential risk and also included the AI-2027 scenario as one possible path. The point I make in my book is not that it will happen soon, but that we can’t know it won’t happen soon and that there are good reasons to believe we don’t have much time. I use my own experience with AI since my Ph.D. on expert systems in 1988, and Yoshua Bengio’s blogpost about his change of mind, as examples of how fast and surprising progress has been, even for someone familiar with the field.
I see your point about how a weak claim can water down the whole story. But if I could choose between 100 people convinced that ASI would kill us all, but with no sense of urgency, and 50 or even 20 who accept both the danger and the need to act immediately, I’d choose the latter.
However, I still think it is a good idea to create a sense of urgency, both in the ad and in books about AI safety.
Personally, I would rather stake my chips on ‘important’ and let urgent handle itself. The title of the book is a narrow claim—if anyone builds it, everyone dies—with the clarifying details conveniently swept into the ‘it’. Adding more inferential steps makes it more challenging to convey clearly and more challenging to hear (since each step could lose some of the audience).
There are some further, more complicated arguments about urgency (you don’t want to go too far out on a limb saying it’s close, because of the costs if it turns out to be far), but I think I most want to make a specialization-of-labor argument: it’s good that the AI 2027 people, who are focused on forecasting, are making forecasting claims, and good that MIRI, who are focused on alignment, are making claims about alignment difficulty and stakes.
I see your point about how a weak claim can water down the whole story. But if I could choose between 100 people convinced that ASI would kill us all, but with no sense of urgency, and 50 or even 20 who accept both the danger and the need to act immediately, I’d choose the latter.
Hmm, I think I might agree with this value tradeoff, but I don’t think I agree with the underlying prediction of what the world is offering us.
I also think MIRI has tried for a while to recruit people who can make progress on alignment and thought it was important to start work now, and the current push is on trying to get broad attention and support. The people writing blurbs for the book are just saying “yes, this is a serious book and a serious concern” and not signing on to “and it might happen in two years” (though probably some of them also believe that), and I think that gives enough cover for the people who are acting on two-year timelines to operate.