[your comment leads me to believe you may not see why MIRI/LC-clustered folks disagreed with your comment but thumbs-upped Ryan, so it might be worthwhile for me to point out why I think that is]
The delta I see between the comments is:
Buck: almost any skeptic who had expressed opinions on the topic before
vs
Ryan: skeptics who had somewhat detailed views
‘Almost any skeptic who has expressed opinions on the topic before’ includes people like Francis Fukuyama, who is just a random public intellectual with no particular AI understanding; he got kinda pressed to express a view on x-risk in an interview, so came out against it as a serious concern. Then he thought harder, and voilà [more gradual-disempowerment flavored, but still!]. I think the vast majority of people, both in gen pop and in powerful positions, are more like Fukuyama than they are like Alex Turner.
So four hypotheses:
You just agree with Ryan’s narrower frame uncomplicatedly and your initial comment was a little strong.
You think the rat/EA in-fights are the important thing to address, and you’re annoyed the book won’t do this.
You think the arguments of the Very Optimistic are the important thing to address.
William is just confused.
Not necessarily responding to the rest of your comment, but on the “four hypotheses” part:
You think the rat/EA in-fights are the important thing to address, and you’re annoyed the book won’t do this.
You think the arguments of the Very Optimistic are the important thing to address.
I’m not sure I buy that “rat/EA infights” and “Very Optimistic” are the relevant categories. I think there is a broad group of people who already have some views on AI risk, who are at-least-plausibly at-least-somewhat open-minded, and who could be useful allies if they either changed their minds or took their stated views more seriously.
Let me list some people / groups of people: Noam Brown, Matt Clifford, Boaz Barak, Jared Kaplan, Dario Amodei, Noam Shazeer, [a large group of AI/ML academics], Sayash Kapoor, Nat Friedman, Jake Sullivan, Dean Ball, [various people associated with progress studies], Bill Gates.
I’m not claiming that people who didn’t have any considered view on AI don’t matter. But I think that in practice, when trying to change the minds and actions of key people (in a way that will actually lead to productive actions, etc.), it’s often important either to convince some more skeptical people who already have views and objections, or at least to have solid arguments against their best objections. At a more basic level, this is required to actually achieve frontier intellectual progress (e.g., settling action-relevant disagreements about the level and nature of the risk between people who are already doing things). Maybe this book isn’t the place to do this, but that isn’t a crux for the point being made in this thread.
Yup, this is a good clarification and I see now that omitting this was an error. Thank you!
I think Jack Shanahan is a member of the reference class you’re pointing at here, and I can say that there were others in this reference class who found the book compelling but not wholly convincing, who did not want to say so publicly (hopefully they start feeling comfortable talking more openly about the topic — that’s the goal!).
There are also other resources we’re currently developing, to release in tandem with the book, that should help with this leg of the conversation.
We are at least attempting to be attentive to this population as part of the overall book campaign, even if it seemed like too big a bite to chew / not exactly the right target for the contents of the book itself.