While a decent exchange, I’m not sure if this is that useful to either of us for future exchanges?
Regarding anecdata, you also have to take into account Scott Alexander disliking the scenario, Will being disappointed, Shakeel thinking the writing was terrible, and Buck thinking that they didn't sufficiently argue their case. And that's not even including the people who overtly disagree with the main argument.
Anyway, we shall see how it turns out (and I sincerely hope it has a positive impact).
Most of these people claim to be speaking from their impression of how the public will respond, which is not yet knowable and will be known in the (near-ish) future.
My meta point remains that these are all marginal calls, that there are arguments in the other direction, and that only Nate is equipped to argue them on the margin (because, in many cases, I disagree with Nate's calls, but don't think I'm right about literally all the things we disagree on; the same is true for everyone else at MIRI who's been involved with the project, afaict). E.g., I did not like the scenario, and felt Part 3 could have been improved by additional input from the technical governance team (and by more detailed plans, which ended up in the online resources instead). It is unreasonable that I have been dragged into arguing against claims I basically agree with on account of introducing a single fact to the discussion (that length DOES matter, even among 'elite' audiences, and that thresholds for this may be low). My locally valid point and differing conclusions do not indicate that I disagree with you on your many other points.
That people wishing the book well are also releasing essays (based on guesses and, much less so in your case than others, misrepresentations) to talk others in the ecosystem out of promoting it could, in fact, be a big problem, mostly in that it could bring about a lukewarm overall reception (eg random normie-adjacent CEA employees don’t read it and don’t recommend it to their parents, because they believe the misrepresentations from Zach’s tweet thread here: https://x.com/Zach_y_robinson/status/1968810665973530781). Once that happens, Zach can say “well, nobody else at my workplace thought it was good,” when none of them read it, and HE didn’t read it, AND they just took his word for it.
I could agree with every one of your object-level points, still think the book was net positive, and therefore think it was overconfident and self-fulfillingly nihilistic of you to authoritatively predict how the public would respond.
I, of course, wouldn't stand by the book if I didn't think it was net positive, and hadn't spent tens of hours hearing the other side out in advance of the release. Part 1 shines VERY bright in my eyes, and the other sections are, at least, better than similarly high-profile works (to the extent that those exist at all) tackling the same topics (exception for AI2027 vs Part 2).