I think I agree with the thrust of this, but I think the comment section raises caveats that seem important. Scott’s acknowledged that there’s danger in this, and I hope an updated version would put that in the post.
But also...
> Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.
This seems like a strange model to use. We don’t know, a priori, what % are false. If 50% are obviously false, probably most of the remainder are subtly false. Giving me subtly false arguments is no favor.
Scott doesn’t tell us, in this essay, what Steven Pinker has given him / why Steven Pinker is ruled in. Has Steven Pinker given him valuable insights? How does Scott know they’re valuable? (There may have been some implicit context when this was posted. Possibly Scott had recently reviewed a Pinker book.)
Given Anna’s example,

> Julia Galef helpfully notes a case where Steven Pinker straightforwardly misrepresents basic facts about who said what. This is helpful to me in ruling out Steven Pinker as someone who I can trust not to lie to me about even straightforwardly checkable facts.
I find myself wondering, has Scott checked Pinker’s straightforwardly checkable facts?
I wouldn’t be surprised if he has. The point of these questions isn’t to say that Pinker shouldn’t be ruled in, but that the questions need to be asked and answered. And the essay doesn’t really acknowledge that that’s actually kind of hard. It’s even somewhat dismissive: “all you have to do is *test* some stuff to *see if it’s true*?” Well, the Large Hadron Collider cost €7.5 billion. On a less extreme scale, I recently wanted to check some of Robert Ellickson’s work; that cost me, I believe, tens of hours. And that was only checking things close to my own specialty. I’ve done work that could have ruled him out and didn’t, but is that enough to say he’s ruled in?
So this advice only seems good if you’re willing and able to put in the time to find and refute the bad arguments. Not only that: if you actually will put in that time. Not everyone can, not everyone wants to, and not everyone will. (This includes: “if you fact-check something and discover that it’s false, the thing doesn’t nevertheless propagate through your models, influencing your downstream beliefs in ways it shouldn’t”.)
If you’re not going to do that… I don’t know. Maybe this is still good advice, but I think that discussion would be a different essay, and my sense is that Scott wasn’t actually trying to give that advice here.
In the comments, cousin_it and gjm describe the people who can and will do such work as “angel investors”, which seems apt.
I feel like right now, the essay is advising people to be angel investors, and not acknowledging that that’s risky if you’re not careful, and difficult to do carefully. That feels like an overstep. A more careful version might instead advise:
- Some people have done some great work and some silly work. If you know which is which (e.g. because others have fact checked, or time has vindicated), feel free to pay attention to the great and ignore the silly.
- Don’t automatically dismiss people just because they’ve said some silly things. Take that fact into account when evaluating the things they say that aren’t obviously silly, and deciding whether to actually evaluate them. But don’t let that fact take the place of actually evaluating those things. Like, given “Steven Pinker said obviously silly things about AI”, don’t say “… so the rest of The Nurture Assumption isn’t worth paying attention to”. Instead, say “… so I don’t think it’s worth me spending the time to look closer at The Nurture Assumption right now”. And allow for the possibility of changing that to “… but The Nurture Assumption is getting a lot of good press, maybe I’ll look into it anyway”.
(e: lightly edited for formatting and content)