I can see why the different things I’ve said on this might seem inconsistent :P It’s also very possible I’m wrong here; I’m not confident about this and have only spent a few hours in conversation about it. And if I hadn’t recently been personally angered by Eliezer’s behavior, I wouldn’t have mentioned this opinion publicly. But here’s my current model.
My current sense is that IABIED hasn’t had that much of an effect on public perception of AI risk, compared to things like AI 2027. My previous sense was that there are huge downsides to Eliezer (and co) being more influential on the topic of AI safety, but that MIRI had some chance of succeeding at getting lots of attention, so I was overall positive on you and other MIRI people putting your time into promoting the book. Because the book didn’t go as well as seemed plausible, promoting Eliezer’s perspective now seems like a less efficient way of popularizing concern about AI risk, and the benefits less clearly outweigh the disadvantages of him having negative effects inside the AI safety community.
For example, my guess is that it’s worse for the MIRI governance team to be at MIRI than elsewhere, except insofar as they gain prominence from their association with Eliezer; if that second factor is weaker, it looks less good for them to be there.
I think my impression of the book is somewhat more negative than it was when the book first came out, based on various discussions I’ve had with people about it. But this isn’t a big factor.
Does this make sense?
“The main thing Eliezer and MIRI have been doing since shifting focus to comms addressed a ‘shocking oversight’ that it’s hard to imagine anyone else doing a better job addressing” (lmk if this doesn’t feel like an accurate paraphrase)
This paraphrase doesn’t quite preserve the meaning I intended. I think many people would have done a somewhat better job.
“For example, my guess is that it’s worse for the MIRI governance team to be at MIRI than elsewhere, except insofar as they gain prominence from their association with Eliezer”
Or if they want to work from a frame that isn’t really supported by other orgs (i.e., they’re closer to Eliezer’s views than to the views/filters enforced at AIFP, RAND, Redwood, and other alternatives). I think people at MIRI think halt/off-switch is a good idea, and want to work on it. Many (but not all) of us think it’s Our Best Hope, and would be pretty dissatisfied working on something else.
I agree that visible-impact-so-far for AI 2027 is greater than for IABIED, but I’m more optimistic than you about IABIED’s impact going forward (both because I like IABIED more than you do, and because I’m keeping an eye on ongoing sales, readership, assignment in universities, etc.).
“I think my impression of the book is somewhat more negative than it was when the book first came out, based on various discussions I’ve had with people about it.”
Consider leaving a comment on your review about this if you have the time and inclination in the future; I’m at least curious, and others may be, too.
(probably I bow out now; thanks Buck!)