Calling references from someone’s job application actually helps, even if they of course chose references they expected to give a positive testimonial.
That’s quite different though!
References are generally people contacted at work; i.e., people who can be verified to have had a working relationship with the applicant. We think these people, by virtue of having worked with the applicant, are in a position to know them well, because it’s hard to fake qualities over a long period of time. Furthermore, there’s only a small number of such people; someone giving a list of references will probably pick the ones they think will give a favorable review, but they have to select from a short list of possible references. That short list is why it’s evidential. So it’s filtered, but it’s filtered from a small pool of people well-positioned to evaluate your character.
Book reviews are different! You can hire a PR firm and contact friends, sending the book to as many people as you possibly can in the hope of a favorable blurb. Rather than having a short list of potential references, you have a very long one. And of course, these people are in many cases not positioned to know whether the book is correct; they’re certainly not as well-positioned as the people you’ve worked with are to know your character. So you’re filtering from a large (potentially very large) list of people who are not well-positioned to evaluate the work.
You’re right that it provides some evidence. I was wrong to claim it provides none; if you could find no one to review a book positively, surely that’s some evidence about it.
But for the reasons above it’s very weak evidence, probably weak enough that you have to distinguish the careful use of “evidence” (anything that might help you distinguish between the worlds you might be in, regardless of strength; 0.001 bits of evidence is still evidence) from the more colloquial use (what helps you distinguish between worlds reasonably strongly).
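The “careful” sense of evidence above can be made precise: measured in bits, evidence is the base-2 log of the likelihood ratio, and any nonzero amount moves your odds, however slightly. A minimal sketch of that bookkeeping (the likelihood ratio here is purely illustrative, not an estimate of how informative blurbs actually are):

```python
import math

def bits_of_evidence(likelihood_ratio: float) -> float:
    """Evidence in bits: log2 of P(obs | hypothesis) / P(obs | not hypothesis)."""
    return math.log2(likelihood_ratio)

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Suppose a favorable blurb is only very slightly more likely if the book is good:
lr = 1.001                       # an almost uninformative observation
print(bits_of_evidence(lr))      # ~0.0014 bits: tiny, but still evidence
print(update_odds(1.0, lr))      # 1:1 prior odds barely move
```

This is just the standard odds-form Bayes update; the point is that “is it evidence?” and “is it enough evidence to matter?” are different questions.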
And like, this is why it’s normal epistemics to ignore the blurbs on the backs of books when evaluating their quality, no matter how prestigious the list of blurbers! Like that’s what I’ve always done, that’s what I imagine you’ve always done, and that’s what we’d of course be doing if this wasn’t a MIRI-published book.
I think you are underestimating the difficulty of getting endorsements like this. Like, I have seen many people in the AI Safety space try to get endorsements like this over the years, for many of their projects, and fail.
Now, how much is that evidence about the correctness of the book? Extremely little! But I also think that’s not what Malo is excited about here. He is excited about the shift in the Overton window it might reflect, and I think that’s pretty real, given the historical failure of people to get endorsements like this for many other projects.
Like, IDK, I am into a more prominent “filtered evidence disclaimer” somewhere in this post, just so that people don’t make wrong updates, but even with the filtered evidence, I think for many people these endorsements are substantial updates.
Now, how much is that evidence about the correctness of the book? Extremely little!
It might not be much evidence for LWers, who are already steeped in arguments and evidence about AI risk. It should be a lot of evidence for people newer to this topic who start with a skeptical prior. Most books making extreme-sounding (conditional) claims about the future don’t have endorsements from Nobel-winning economists, former White House officials, retired generals, computer security experts, etc. on the back cover.
And like, this is why it’s normal epistemics to ignore the blurbs on the backs of books when evaluating their quality, no matter how prestigious the list of blurbers! Like that’s what I’ve always done, that’s what I imagine you’ve always done, and that’s what we’d of course be doing if this wasn’t a MIRI-published book.
If I see a book and I can’t figure out how seriously I should take it, I will look at the blurbs.
Good blurbs from serious, discerning, recognizable people are not on every book, even books from big publishers with strong sales. I realize this is N=2, so update (or not) accordingly, but the first book I could think of that I knew had good sales but isn’t actually good is The Population Bomb. I didn’t find blurbs for that (I didn’t look all that hard, though, and the book is pretty old, so maybe not a good check for today’s publishing ecosystem anyway). The second book that came to mind was The Body Keeps the Score. The blurbs for that seem to be from a couple of respectable-looking psychiatrists I’ve never heard of.
Yeah, I think people usually ignore blurbs, but sometimes blurbs are helpful. I think strong blurbs are unusually likely to be helpful when your book has a title like If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.
I second this. “If Anyone Builds It, Everyone Dies” triggers the “find out if this is insane crank horseshit” subroutine. And one of the quickest/strongest ways to negatively resolve that question is credible endorsements from well-known non-cranks.
Yep. And equally, the blurbs would be a lot less effective if the title were more timid and less stark.
Hearing that a wide range of respected figures endorse a book called If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All is a potential “holy shit” moment. If the same figures were endorsing a book with a vaguely inoffensive title like Smarter Than Us or The AI Crucible, it would spark a lot less interest (and concern).
I’d agree that this is to some extent playing the respectability game, but personally I’d be very happy for Eliezer and people to risk doing this too much rather than too little for once.