Yeah, I think the book is going to be (by a very large margin) the best resource in the world for this sort of use case. (Though I’m potentially biased as a MIRI employee.) We’re not delaying; this is basically as fast as the publishing industry goes, and we expected the audience to be a lot smaller if we self-published. (A more typical timeline would have put the book another 3-20 months out.)
If Eliezer and Nate could release it sooner than September while still gaining the benefits of working with a top publishing house, doing a conventional media tour, etc., then we’d definitely be releasing it immediately. As is, our publisher has done a ton of great work already and has been extremely enthusiastic about this project, in a way that makes me feel way better about this approach. “We have to wait till September” is a real cost of this option, but I think it’s a pretty unavoidable cost given that we need this book to reach a lot of people, not just the sort of people who would hear about it from a friend on LessWrong.
I do think there are a lot of good resources already online, like MIRI’s recently released intro resource, “The Problem”. It’s a very different beast from If Anyone Builds It, Everyone Dies (mainly written by different people, and independent of the whole book-writing process), and once the book comes out I’ll consider the book strictly better for anyone willing to read something longer. But I think “The Problem” is a really good overview in its own right, and I expect to continue citing it regularly, because having something shorter and free-to-read does matter a lot.
Some other resources I especially like include:
Gabriel Alfour’s Preventing Extinction from Superintelligence, for a quick and to-the-point overview of the situation.
Ian Hogarth’s We Must Slow Down the Race to God-Like AI (requires Financial Times access), for an overview with a bit more discussion of recent AI progress.
The AI Futures Project’s AI 2027, for a discussion focused on very near-term disaster scenarios. (See also a response from Max Harms, who works at MIRI.)
MIRI’s AGI Ruin, for people who want a more thorough and (semi)technical “why does AGI alignment look hard?” argument. This is a tweaked version of the LW AGI Ruin post, with edits aimed at making the essay more useful to share widely. (The original post kinda assumed you were vaguely in the LW/EA ecosystem.)