What’s Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers

Link post

Really interesting analysis of social science papers and replication markets. Some excerpts:

Over the past year, I have skimmed through 2578 social science papers, spending about 2.5 minutes on each one. This was due to my participation in Replication Markets, a part of DARPA’s SCORE program, whose goal is to evaluate the reliability of social science research. 3000 studies were split up into 10 rounds of ~300 studies each. Starting in August 2019, each round consisted of one week of surveys followed by two weeks of market trading. I finished in first place in 3 out of 10 survey rounds and 6 out of 10 market rounds. In total, about $200,000 in prize money will be awarded.

The studies were sourced from all social science disciplines (economics, psychology, sociology, management, etc.) and were published between 2009 and 2018 (in other words, most of the sample came from the post-replication-crisis era).

The average replication probability in the market was 54%; while the replication results are not out yet (175 of the 3000 papers will be replicated), previous experiments have shown that prediction markets work well.

This is what the distribution of my own predictions looks like:

[...]

Check out this crazy chart from Yang et al. (2020):

[chart: citation rates of replicating vs. non-replicating studies]

Yes, you’re reading that right: studies that replicate are cited at the same rate as studies that do not. Publishing your own weak papers is one thing, but citing other people’s weak papers? This seemed implausible, so I decided to do my own analysis with a sample of 250 articles from the Replication Markets project. The correlation between citations per year and (market-estimated) probability of replication was −0.05!

You might hypothesize that the citations of non-replicating papers are negative, but negative citations are extremely rare. One study puts the rate at 2.4%. Astonishingly, even after retraction the vast majority of citations are positive, and those positive citations continue for decades after retraction.

As in all affairs of man, it once again comes down to Hanlon’s Razor. Either:

  1. Malice: they know which results are likely false but cite them anyway; or

  2. Stupidity: they can’t tell which papers will replicate, even though it’s quite easy.

Accepting the first option would require a level of cynicism that even I struggle to muster. But the alternative doesn’t seem much better: how can they not know? I, an idiot with no relevant credentials or knowledge, can fairly accurately distinguish good research from bad, but all the tenured experts cannot? How can they not tell which papers have been retracted?

I think the most plausible explanation is that scientists don’t read the papers they cite, which I suppose involves both malice and stupidity. Gwern has an interesting write-up on this question, citing some ingenious bibliographic analyses: “Simkin & Roychowdhury venture a guess that as many as 80% of authors citing a paper have not actually read the original”. Once a paper is out there, nobody bothers to check it, even though they know there’s a 50-50 chance it’s false!