Not necessarily responding to the rest of your comment, but on the “four hypotheses” part:
You think the rat/EA in-fights are the important thing to address, and you’re annoyed the book won’t do this.
You think the arguments of the Very Optimistic are the important thing to address.
I’m not sure I buy that “rat/EA infights” and “Very Optimistic” are the relevant categories. I think there is a broad group of people who already have some views on AI risk, who are at least plausibly somewhat open-minded, and who could be useful allies if they either changed their minds or took their stated views more seriously.
Let me list some people / groups of people: Noam Brown, Matt Clifford, Boaz Barak, Jared Kaplan, Dario Amodei, Noam Shazeer, [a large group of AI/ML academics], Sayash Kapoor, Nat Friedman, Jake Sullivan, Dean Ball, [various people associated with progress studies], Bill Gates.
I’m not claiming that people who don’t have any considered view on AI don’t matter. But I think that, in practice, when trying to change the minds and actions of key people (in a way that will actually lead to productive action, etc.), it’s often important either to convince some more skeptical people who already have views and objections, or at least to have solid arguments against their best objections. At a more basic level, this is required to actually achieve frontier intellectual progress (e.g., settling action-relevant disagreements about the level and nature of the risk between people who are already doing things). Maybe this book isn’t the place to do this, but that isn’t a crux for the point being made in this thread.
Yup, this is a good clarification and I see now that omitting this was an error. Thank you!
I think Jack Shanahan is a member of the reference class you’re pointing at here, and I can say that there were others in this reference class who found the book compelling but not wholly convincing, who did not want to say so publicly (hopefully they start feeling comfortable talking more openly about the topic — that’s the goal!).
There are also other resources we’re currently developing, to release in tandem with the book, that should help with this leg of the conversation.
We are at least attempting to be attentive to this population as part of the overall book campaign, even if it seemed like too much to chew on / not exactly the right target for the contents of the book itself.