Statistical checks on some social science

Uri Simonsohn, a social scientist, investigates the misuse of statistics in his field.

A few good quotes:

The three social psychologists set up a test experiment, then played by current academic methodologies and widely permissible statistical rules. By going on what amounted to a fishing expedition (that is, by recording many, many variables but reporting only the results that came out to their liking); by failing to establish in advance the number of human subjects in an experiment; and by analyzing the data as they went, so they could end the experiment when the results suited them, they produced a howler of a result, a truly absurd finding. They then ran a series of computer simulations using other experimental data to show that these methods could increase the odds of a false-positive result—a statistical fluke, basically—to nearly two-thirds.
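
The practices that quote lists are easy to reproduce. Below is a minimal Monte Carlo sketch, in Python, of two of them: measuring several correlated dependent variables and reporting whichever comes out significant, and peeking at the data repeatedly, stopping as soon as p < .05. This is my own toy illustration under simple assumptions (two-group t-tests, normal data), not the authors' simulation code, and the helper one_study and its parameters are made up for the example. Even these two practices alone push the false-positive rate well past the nominal 5%; stacking on the rest is how the "nearly two-thirds" figure becomes plausible.

```python
# Toy simulation of two questionable research practices. Both groups are drawn
# from the SAME distribution, so any "significant" result is a false positive.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def one_study(n_max=100, n_start=20, step=10, alpha=0.05):
    """Run one null study with (1) two correlated dependent variables, testing
    each plus their average, and (2) optional stopping: re-test after every
    `step` subjects per group, stopping as soon as any p < alpha.
    Returns True if the study ends in a (false) positive."""
    cov = [[1.0, 0.5], [0.5, 1.0]]                        # two DVs, r = .5
    a = rng.multivariate_normal([0, 0], cov, size=n_max)  # "treatment" group
    b = rng.multivariate_normal([0, 0], cov, size=n_max)  # "control" group
    for n in range(n_start, n_max + 1, step):
        candidates = [(a[:n, 0], b[:n, 0]),               # DV 1
                      (a[:n, 1], b[:n, 1]),               # DV 2
                      (a[:n].mean(axis=1), b[:n].mean(axis=1))]  # their average
        if any(ttest_ind(x, y).pvalue < alpha for x, y in candidates):
            return True               # "significant" -- stop and write it up
    return False                      # honest null result

n_sims = 5000
rate = sum(one_study() for _ in range(n_sims)) / n_sims
print(f"False-positive rate: {rate:.1%}")   # far above the nominal 5%
```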

Laugh or cry?: “He prefers psychology’s close-up focus on the quirks of actual human minds to the sweeping theory and deduction involved in economics.”

Last summer, not long after Sanna and Smeesters left their respective universities, Simonsohn laid out his approach to fraud-busting in an online article called “Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone”. Afterward, his inbox was flooded with tips from strangers. People wanted him to investigate election results, drug trials, the work of colleagues they’d long doubted. He has not replied to these messages. Making a couple of busts is one thing. Assuming the mantle of the social sciences’ full-time Grand Inquisitor would be quite another.
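
For flavor, here is roughly the kind of check “Just Post It” describes, as I understand it: in honestly sampled data, standard deviations vary from condition to condition, so reported SDs that are nearly identical across many cells are a red flag, and simulation can estimate how unlikely that similarity is. The sketch below is my own toy version under strong assumptions (normal data, a common true SD), not Simonsohn's code; the function name and example numbers are hypothetical.

```python
# Toy version of a fabricated-data check: how often would honest sampling
# produce cell standard deviations as similar as the ones reported?
import numpy as np

rng = np.random.default_rng(1)

def sd_similarity_pvalue(reported_sds, n_per_cell, n_sims=10_000):
    """Assume normal data with a common true SD (taken as the mean of the
    reported SDs). Simulate k cells of n_per_cell subjects and count how
    often the simulated SDs spread out LESS than the reported ones did.
    A tiny return value means the reported SDs are implausibly uniform."""
    observed_spread = np.std(reported_sds)
    true_sd = np.mean(reported_sds)
    k = len(reported_sds)
    hits = 0
    for _ in range(n_sims):
        sim_sds = [rng.normal(0.0, true_sd, n_per_cell).std(ddof=1)
                   for _ in range(k)]
        if np.std(sim_sds) <= observed_spread:
            hits += 1
    return hits / n_sims

# Hypothetical example: six cells of 15 subjects with near-identical SDs.
print(sd_similarity_pvalue([2.01, 2.02, 2.00, 2.03, 2.01, 2.02], n_per_cell=15))
```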

This looks like a clue that there is work available for anyone who knows statistics. Eventually there will be an additional line of work: telling whether a forensic statistician is competent.