This is why we can’t have social science

Jason Mitchell is [edit: has been] the John L. Loeb Associate Professor of the Social Sciences at Harvard. He has won the National Academy of Sciences' Troland Award as well as the Association for Psychological Science's Janet Taylor Spence Award for Transformative Early Career Contribution.

Here, he argues against the principle of replicability of experiments in science. Apparently, it's disrespectful, and presumptively wrong.

Recent hand-wringing over failed replications in social psychology is largely pointless, because unsuccessful experiments have no meaningful scientific value.

Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way. Unless direct replications are conducted by flawless experimenters, nothing interesting can be learned from them.

Three standard rejoinders to this critique are considered and rejected. Despite claims to the contrary, failed replications do not provide meaningful information if they closely follow original methodology; they do not necessarily identify effects that may be too small or flimsy to be worth studying; and they cannot contribute to a cumulative understanding of scientific phenomena.

Replication efforts appear to reflect strong prior expectations that published findings are not reliable, and as such, do not constitute scientific output.

The field of social psychology can be improved, but not by the publication of negative findings. Experimenters should be encouraged to restrict their "degrees of freedom," for example, by specifying designs in advance.

Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues. Targets of failed replications are justifiably upset, particularly given the inadequate basis for replicators' extraordinary claims.

This is why we can't have social science. Not because the subject is not amenable to the scientific method; it obviously is. People are conducting controlled experiments and other people are attempting to replicate the results. So far, so good. Rather, the problem is that at least one celebrated authority in the field hates that, and would prefer much, much more deference to authority.