I think I somewhat disagree with this. My view is more like:
The recent writings of Eliezer (and probably Nate?) are not very good at persuading thoughtful skeptics, seemingly in part because they aren't really trying to / aren't interested in doing so (see e.g. Eliezer’s writing on X/twitter).
Eliezer and Nate tried much harder to make this book persuasive (or at least not actively off-putting) to moderately thoughtful people who are more skeptical by default, including via mechanisms like having a bunch of test readers. I bet they didn’t try hard to engage with the arguments of people who were skeptical and already had detailed views on the topic. So, I expect “skeptics who had somewhat detailed views” will feel like the book doesn’t even respond to their disagreements, let alone persuade them.
In practice, I also expect minimal movement from “thoughtful skeptics”, though maybe a bit more than Buck seems to articulate.
However, this isn’t really the point of the book, I’d guess; the point (I think) is to be reasonably persuasive (and make initial arguments) to people who haven’t really thought about the topic (including people who would be skeptical by default).
I expect the book is much less off-putting to the target audience than stuff like Eliezer’s writing on X/twitter, which will make it much more effective at its aims.
I don’t expect that this book will (e.g.) engage with my disagreements with Eliezer or why I’m much more optimistic.
I don’t think I disagree with any of this, it doesn’t seem to conflict much with what I said.
[your comment leads me to believe you may not see why MIRI/LC-clustered folks disagreed with your comment but thumbs-upped Ryan, so it might be worthwhile for me to point out why I think that is]
The delta I see between the comments is:
Buck: almost any skeptic who had expressed opinions on the topic before
vs
Ryan: skeptics who had somewhat detailed views
‘Almost any skeptic who has expressed opinions on the topic before’ includes people like Francis Fukuyama, who is just a random public intellectual with no AI understanding and got kinda pressed to express a view on x-risk in an interview, so came out against it as a serious concern. Then he thought harder, and voila [more gradual-disempowerment flavored, but still!]. I think the vast majority of people, both in gen pop and in powerful positions, are more like Fukuyama than they are like Alex Turner.
So four hypotheses:
You just agree with Ryan’s narrower frame uncomplicatedly and your initial comment was a little strong.
You think the rat/EA in-fights are the important thing to address, and you’re annoyed the book won’t do this.
You think the arguments of the Very Optimistic are the important thing to address.
William is just confused.
Not necessarily responding to the rest of your comment, but on the “four hypotheses” part:
You think the rat/EA in-fights are the important thing to address, and you’re annoyed the book won’t do this.
You think the arguments of the Very Optimistic are the important thing to address.
I’m not sure that I buy that “rat/EA infights” and “Very Optimistic” are the relevant categories. I think there is a broad group of people who already have some views on AI risk, who are at least plausibly somewhat open-minded, and who could be useful allies if they either changed their minds or took their stated views more seriously.
Let me list some people / groups of people: Noam Brown, Matt Clifford, Boaz Barak, Jared Kaplan, Dario Amodei, Noam Shazeer, [a large group of AI/ML academics], Sayash Kapoor, Nat Friedman, Jake Sullivan, Dean Ball, [various people associated with progress studies], Bill Gates.
I’m not claiming that people who didn’t have any considered view on AI don’t matter. But I think that in practice, when trying to change the minds and actions of key people (in a way that will actually lead to productive actions etc.), it’s often important to either convince some more skeptical people who already have views and objections, or at least to have solid arguments against their best objections. At a more basic level, this is required to actually achieve frontier intellectual progress (e.g., settling action-relevant disagreements about the level and nature of the risk between people who are already doing things). Maybe this book isn’t the place to do this, but that isn’t a crux for the point being made in this thread.
Yup, this is a good clarification and I see now that omitting this was an error. Thank you!
I think Jack Shanahan is a member of the reference class you’re pointing at here, and I can say that there were others in this reference class who found the book compelling but not wholly convincing, who did not want to say so publicly (hopefully they start feeling comfortable talking more openly about the topic — that’s the goal!).
There are also other resources we’re currently developing, to release in tandem with the book, that should help with this leg of the conversation.
We are at least attempting to be attentive to this population as part of the overall book campaign, even if it seemed like too much for the book itself to take on / not exactly the right target for the book’s contents.