However, I still think it is a good idea to create a sense of urgency, both in the ad and in books about AI safety.
Personally, I would rather put my chips on ‘important’ and let ‘urgent’ handle itself. The title of the book is a narrow claim (if anyone builds it, everyone dies), with the clarifying details conveniently swept into the ‘it’. Adding more inferential steps makes the claim harder to convey clearly and harder to hear (since each step could lose some of the audience).
There are some further complicated arguments about urgency (you don’t want to go too far out on a limb saying it’s close, because of the costs if it turns out to be far), but the argument I most want to make is one about specialization of labor: it’s good that the AI 2027 people, who are focused on forecasting, are making forecasting claims, and good that MIRI, who are focused on alignment, are making claims about alignment difficulty and stakes.
I see your point about how a weak claim can water down the whole story. But if I could choose between 100 people convinced that ASI would kill us all but with no sense of urgency, and 50 or even 20 who believe both that the danger is real and that we must act immediately, I’d choose the latter.
Hmm, I think I might agree with this value tradeoff, but I don’t think I agree with the underlying prediction about what the world is offering us.
I also think MIRI tried for a while to recruit people who could make progress on alignment, because it thought it was important to start that work now; the current push is about getting broad attention and support. The people writing blurbs for the book are just saying “yes, this is a serious book and a serious concern”, not signing on to “and it might happen in two years” (though probably some of them also believe that), and I think that gives enough cover for the people who are acting on two-year timelines to operate.