This was covered a few days ago on Prof. Massimo Pigliucci’s blog, Rationally Speaking. He points out several problems with the Inbar and Lammers paper’s methodology and argues that its findings should be taken with a healthy grain of salt. Well worth a read, in my opinion.

This seems to be his strongest argument:

Sixth, “we asked whether they would evaluate papers and grant applications that seemed to take a conservative perspective negatively.” Well, I would. But I would also evaluate negatively a paper or grant that takes a liberal perspective, because I happen to think that scientific papers ought to strive for having no ideological perspective whatsoever (they are not op-ed pieces, or works in political philosophy). And psychology, last time I checked, was presenting itself as a science. Incidentally, the authors immediately admit, in the same phrase: “but we did not ask whether they would evaluate work that seemed to take a liberal perspective negatively.” Well, why on earth not?
It’s a good point, and I think enough to call the paper’s findings seriously into question, but I don’t think fixing it alone would be enough to salvage the methodology. Ideological bias tends to be invisible from the inside: I’d expect any academic with a strong commitment to neutrality to punish perceived ideological bias in proportion to its magnitude, but I’d also expect those same academics to perceive viewpoints leaning toward their own ideology as less biased than the alternatives. Probably much less.
You’d need to do something a lot more clever to filter that out: maybe something like asking academics about the perceived rates of each type of ideological bias in papers and grants they evaluate, and normalizing based on that.
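To make that idea concrete, here’s a minimal sketch of the normalization I have in mind. Everything in it is invented for illustration — the function, the rating scale, and the numbers are hypothetical, not anything from the Inbar and Lammers paper:

```python
# Hypothetical sketch: divide each reviewer's reported willingness to
# penalize a viewpoint by how often they claim to encounter bias from
# that viewpoint, separating "punishes bias per se" from "perceives one
# side as more biased." All values below are made up for illustration.

def normalized_penalty(penalty_rating, perceived_bias_rate):
    """Penalty per unit of perceived bias exposure.

    penalty_rating: how negatively the reviewer says they'd evaluate
        work leaning a given direction (e.g. on a 1-7 scale).
    perceived_bias_rate: fraction of submissions the reviewer reports
        as biased in that direction (must be positive).
    """
    if perceived_bias_rate <= 0:
        raise ValueError("perceived bias rate must be positive")
    return penalty_rating / perceived_bias_rate

# A reviewer who penalizes conservative-leaning work at 6/7 but reports
# seeing conservative bias in 30% of submissions, versus penalizing
# liberal-leaning work at only 2/7 while perceiving just 5% of
# submissions as leaning liberal:
conservative = normalized_penalty(6, 0.30)
liberal = normalized_penalty(2, 0.05)

# Per unit of perceived exposure, this reviewer actually penalizes
# liberal-leaning work harder -- the raw ratings conflate hostility
# toward a viewpoint with how prevalent its bias seems.
```

The point of the toy numbers is that the asymmetry can reverse after normalization: a raw penalty rating alone can’t distinguish an evenhanded reviewer who simply perceives more bias on one side from a genuinely partisan one.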
Agreed. He also suggests that there were obvious controls that should have been used but were not:
Perhaps the most problematic aspect of the Inbar and Lammers paper, however, is the above mentioned lack of the obvious control: they didn’t ask conservatives about their biases (nor, for that matter, did they ask another obvious control group: politically neutral or middle of the road faculty).