I think you are underestimating the difficulty of getting endorsements like this. Like, I have seen many people in the AI Safety space try to get endorsements like this over the years, for many of their projects, and fail.
Now, how much is that evidence about the correctness of the book? Extremely little! But I also think that’s not what Malo is excited about here. He is excited about the shift in the Overton window it might reflect, and I think that’s pretty real, given the historical failure of people to get endorsements like this for many other projects.
Like, IDK, I am into a more prominent “filtered evidence disclaimer” somewhere in this post, just so that people don’t make wrong updates, but even with the filtered evidence, I think for many people these endorsements are substantial updates.
Now, how much is that evidence about the correctness of the book? Extremely little!
It might not be much evidence for LWers, who are already steeped in arguments and evidence about AI risk. It should be a lot of evidence for people newer to this topic who start with a skeptical prior. Most books making extreme-sounding (conditional) claims about the future don’t have endorsements from Nobel-winning economists, former White House officials, retired generals, computer security experts, etc. on the back cover.