This is why we can’t have social science
Jason Mitchell is [edit: has been] the John L. Loeb Associate Professor of the Social Sciences at Harvard. He has won the National Academy of Sciences’ Troland Award as well as the Association for Psychological Science’s Janet Taylor Spence Award for Transformative Early Career Contributions.
Here, he argues against the principle of replicability of experiments in science. Apparently, it’s disrespectful, and presumptively wrong.
Recent hand-wringing over failed replications in social psychology is largely pointless, because unsuccessful experiments have no meaningful scientific value.
Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way. Unless direct replications are conducted by flawless experimenters, nothing interesting can be learned from them.
Three standard rejoinders to this critique are considered and rejected. Despite claims to the contrary, failed replications do not provide meaningful information if they closely follow original methodology; they do not necessarily identify effects that may be too small or flimsy to be worth studying; and they cannot contribute to a cumulative understanding of scientific phenomena.
Replication efforts appear to reflect strong prior expectations that published findings are not reliable, and as such, do not constitute scientific output.
The field of social psychology can be improved, but not by the publication of negative findings. Experimenters should be encouraged to restrict their “degrees of freedom,” for example, by specifying designs in advance.
Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues. Targets of failed replications are justifiably upset, particularly given the inadequate basis for replicators’ extraordinary claims.
This is why we can’t have social science. Not because the subject is not amenable to the scientific method—it obviously is. People are conducting controlled experiments and other people are attempting to replicate the results. So far, so good. Rather, the problem is that at least one celebrated authority in the field hates that, and would prefer much, much more deference to authority.
The amusing thing is that Mitchell’s argument proves much more than he wants it to prove.
Notice that the above argument applies just as well to the original experiment being replicated.
Yes, noticed.
Has anyone read his entire article? Does he attempt any justification for why this particular argument doesn’t equally apply to the original experiment?
One principle I try to keep in mind is “The other guy is probably not a total moron. If it seems that way, you’re probably missing something.”
I read it. He has a section titled “The asymmetry between positive and negative evidence”.
His argument is that a positive result is like seeing a black swan, and a null result is like seeing a white swan, and once you see a black swan, then no matter how many white swans you see it doesn’t prove that all swans are white.
He addresses the objection that this leaves us unable to ever reject a spurious claim. His answer is that, since negative evidence is always meaningless, we should get positive evidence that the experimenter was wrong.
I think this is a fair summary of the section. It’s not long, so you can check for yourself. I am… not impressed.
Actually, it’s like hearing a report of a black swan, which is why the burden of proof is generally put on the report.
It’s even worse than that for him. What a bad analogy for him to rest his case on. Surely, the purpose of these social science studies is not to make a claim about the existence of some bizarre subset of the population (a black swan), but to claim that the results will generalize to the population at large (all swans are black).
That’s more than enough for me.
Thanks for taking the bullet for us.
There’s a lot wrong with the argument; he has no actual justification for assuming that social science is anything like swan-spotting.
But even within his unjustified analogy… apparently if someone reports a new color of swan in Australia, he might give polygraphs and vision tests to the reporter, but sending an expedition to Australia to check it out would be of no scientific value.
Medicine has had its Ioannidis. Is anyone doing the same for psychology and social science?
Uri Simonsohn?
There’s Ed Vul’s “Voodoo Correlations in Social Neuroscience”.
My interpretation is that he’s saying false positives are unlikely but false negatives are common. If that’s the case, then getting a positive result should greatly affect your beliefs; but a negative replication shouldn’t shift them much.
(I can’t actually justify this interpretation by reading the text, but it does make the text seem fairly reasonable.)
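To put rough numbers on that interpretation (the error rates below are my own illustrative assumptions, not anything in Mitchell’s piece), here is a quick Bayesian sketch:

```python
# Illustrative assumption: positive findings are rarely spurious
# (5% false-positive rate) but replications often miss real effects
# (50% power, i.e. a 50% false-negative rate).
def posterior(prior, p_data_if_real, p_data_if_not):
    """Bayes' rule for a binary 'the effect is real' hypothesis."""
    num = p_data_if_real * prior
    return num / (num + p_data_if_not * (1 - prior))

prior = 0.5              # assumed 50/50 prior that the effect is real
alpha, power = 0.05, 0.5 # assumed error rates, purely for illustration

print(posterior(prior, power, alpha))          # positive original result -> ~0.91
print(posterior(prior, 1 - power, 1 - alpha))  # failed replication -> ~0.34
```

Under those assumed numbers a positive result moves you from 50% to about 91%, while a single failed replication only drags you back to about 34%—roughly the asymmetry this reading attributes to him.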
Beautiful point.
Congratulations, you’re smarter than a Harvard professor. I noticed the same thing, so I’m also smarter than a Harvard professor. I would hope that everyone on this site is smarter than a Harvard professor. A Harvard professor who got his B.A. and M.S. from Yale and his Ph.D. from Harvard.
This is sociologically interesting: is it an isolated incident that a Harvard professor—an award-winning one, no less—would so loudly fail to comprehend how science and statistics work? That doesn’t seem likely, for two reasons. First, he probably talked it over with other people in the field, or at least mentioned why he thought replication was bad and wrong, and no one talked him out of it. Second, the system he came through produced him, and it’s unlikely that that sort of error would only be produced once, since education is fairly standardized. So: what does this say about the relevant institutions? (The social sciences, HYP, academia, etc.)
I don’t think that means you are smarter than that Harvard professor. He is a very successful person and has reached heights coveted by many very smart people. It just means that the game he is playing is not one where you get ahead by saying things that make sense.
For example, if you listen to a successful politician and spot a false statement he utters, that does not mean that you are smarter than that politician.
Yes, academics are supposed to raise the status of their institution. This brings in money, which helps educate students, which makes the world a better place. Unsuccessful replication threatens this. Plus, replication does nothing to advance social justice.
His website describes his research as:
That doesn’t seem to me like someone who focuses on advancing social justice.
You should have looked at his vita for a more accurate description of his activities. If you had looked at his paper titles, some of them indicate he’s not a stranger to social-justice-like theorizing and investigation, and likewise his funding sources, on top of Harvard’s well-earned reputation: eg. “What’s in a forename?: Cue familiarity and stereotypical thinking”, “Gender differences in implicit weight identity”, “Deflecting negative self-relevant stereotype activation: The effects of individuation”, “Me and my group: Cultural status can disrupt cognitive consistency”, and the funding:
Sorry, my mistake, as far as the paper titles go.
As far as the funding goes, the National Science Foundation isn’t an entity that I would see as spearheading the social justice movement.
The same goes for the Templeton Foundation. They have the reputation of wanting “progress in spiritual discoveries” instead of “advancing social justice”.
Both groups are quite big and may fund more than you think; not that the grantors always get what they think they’re getting or are the only people who are then allowed to draw upon the research. For example, consider “stereotype threat”, much beloved of social-justice types for explaining how bad white people keep test scores low for women and blacks; you can see the NSF certainly has been involved in that research in the past just with a cursory google: http://scholar.google.com/scholar?q=%22stereotype%20threat%22%20%22National%20Science%20Foundation%22 (I count ~4 NSF grants attested to just from the snippet-view for the first page).
I do accept that both groups fund a large variety of causes, but I still wouldn’t conclude from the funding source that there is a bias in the direction of the social justice movement.
When thinking about the title “The neural basis of stereotyping” you might be right that it smells like pseudoscience. It’s a bit like “The neural basis of acupuncture”. Instead of searching for the neural basis it would make much more sense to focus on studying the actual effect.
I wonder if I picked up too many stereotypes about fMRI research ;)
I never said the NSF is biased in favor of SJ. I said if you had bothered to look at the vita instead of stopping at the most convenient place, you would have found a number of paper and grant titles which indicate a more than theoretical interest in topics strongly associated with SJ, on top of his affiliation with an institution with a strong background, both current and historical, in liberal thought and SJ-like figures such as Cornel West, and which, to borrow from parallel ongoing conversations, crucified Summers for his well-founded suggestion that the elite math achievement gap might be a necessary consequence of gender differences. And then you said the NSF was evidence against SJ association, which is either wrong or weak evidence since they fund related research all the time.
They might have crucified Summers, but they let him rise to be president at Harvard in the first place, and they didn’t seem to have successfully taught him not to say things like that.
Cornel West was a professor of religion and then a professor of African-American Studies. As far as my instincts go, I wouldn’t expect the same thing from Harvard’s department of psychology as I would expect from the department of African-American Studies.
I didn’t. I took your post as saying that I should update in the direction of him being part of the social justice movement based on his funding source, and wanted to reject updating in that direction based on that information.
He teaches at an elite university so it’s highly probable he greatly values social justice.
Either you don’t read your own comments, or you classify all social justice as signaling.
In the latter case, I’m sure you’ll come up with reasons why you aren’t making a fully general counterargument. I would urge you to stop for a second and ask concretely what you would have thought about, say, abolitionists.
I don’t understand what you mean. Social justice is a label for a set of beliefs. I do not think that social justice is just about signaling. If it helps to understand what I mean, I’m a politically incorrect college professor.
Are you implying that there is a contradiction in what you quoted? If so, I don’t see it.
There may indeed be a contradiction, but the real question—if we grant the first part—is what do you know and how do you know it? This seems to me like a really obvious point.
Reasoning for this claim? Do you also believe that original research can do nothing to advance social justice? That doesn’t seem likely.
-- Richard Feynman, “Cargo Cult Science”
(Yes, I am aware of the irony of appealing to authority to mock someone who says we need to defer more to established authorities.)
What irony? People want to have authorities that can be appealed to, so if you value epistemic rationality, just saying it’s bad to appeal to authority won’t work as well as capturing the drive: providing authorities who can be appealed to in order to support memes beneficial to epistemic rationality, including that it’s bad to appeal to authority. If you leave the drive uncaptured, someone else can capture it.
It’s ironic in the same way that adding the text “DEFACING STOP SIGNS” under the main text of a stop sign is ironic.
The method used is the very one which is being condemned / warned against, and the fact that it works better than other methods (in both examples) only adds to the irony: one would expect something that preaches against doing exactly what it’s doing to invalidate itself, yet instead it produces greater results, due to a quirk of humans.
I sort of side with Mitchell on this.
A mentor of mine once told me that replication is useful, but not the most useful thing you could be doing because it’s often better to do a followup experiment that rests on the premises established by the initial experiment. If the first experiment was wrong, the second experiment will end up wrong too. Science should not go even slower than it already does—just update and move on, don’t obsess.
It’s kind of like how some of the landmark studies on priming failed to replicate, but there are so many followup studies which are explained by priming really well that it seems a bit silly to throw out the notion of priming just because of that.
Keep in mind, while you are unlikely to hit statistical significance where there is no real result, it’s not statistically unlikely to have a real result that doesn’t hit significance the next time you do it. Significance tests are attuned to get false negatives more often than false positives.
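A minimal simulation of that asymmetry (the effect size, sample size, and alpha below are assumed for illustration, not taken from any particular study):

```python
# Simulate a modest true effect (d = 0.5) studied with n = 20 per group:
# the false-positive rate under the null stays near 5%, but the
# false-negative rate for the real effect is far higher.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d, alpha, runs = 20, 0.5, 0.05, 10_000

def significant(effect):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect, 1.0, n)
    return stats.ttest_ind(a, b).pvalue < alpha

false_positive_rate = np.mean([significant(0.0) for _ in range(runs)])    # ~0.05
false_negative_rate = 1 - np.mean([significant(d) for _ in range(runs)])  # ~0.65
print(false_positive_rate, false_negative_rate)
```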
Emotionally though… when you get a positive result in breast cancer screening even when you’re not at risk, you don’t just shrug and say “probably a false positive” even though it is. Instead, you irrationally do more screenings and possibly get a needless operation. Similarly, when the experiment fails to replicate, people don’t shrug and say “probably a false negative”, even though that is, in fact, very likely. Instead, they start questioning the reputation of the experimenter. Understandably, this whole process is nerve-wracking for the original experimenter. Which, I think, is what Mitchell was—admittedly clumsily—groping towards with the talk of “impugning scientific integrity”.
I guess the context is important here. If the first experiment was wrong, and the second experiment is wrong, will you publish the failure of the second experiment? Will you also publish your suspicion that the first experiment was wrong? How likely will people believe you that your results prove the first experiment was wrong, if you did something else?
Here is what the selection bias will do otherwise:
20 people will try 20 “second experiments” with p = 0.05. 19 of them will fail, one will succeed and publish the results of their successful second experiment. Then, using the same strategy, 20 people will try 20 “third experiments”, and again, one of them will succeed… Ten years later, you can have a dozen experiments examining and confirming the theory from a dozen different angles, so the theory seems completely solid.
Is there a chance that the process I described was responsible for this?
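Here is a rough sketch of the selection process described above, under the assumption that the effect being chased doesn’t exist at all and only significant results get published:

```python
# Each "generation", 20 labs run a follow-up on a nonexistent effect at
# alpha = 0.05; any lab that gets lucky publishes, the rest stay silent.
import random

random.seed(1)
alpha, labs, generations = 0.05, 20, 12
published = 0

for generation in range(generations):
    # Under the null hypothesis, each lab's p-value is uniform on [0, 1].
    lucky = sum(random.random() < alpha for lab in range(labs))
    if lucky:
        published += 1  # one of the lucky labs publishes a "confirmation"

print(f"{published} published 'confirmations' of a nonexistent effect")
```

With those assumed numbers, most generations produce at least one significant result, so after a decade the literature can contain a stack of mutually “confirming” studies without the effect ever existing.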
In practice, individual scientists like to be able to say “my work causes updates”. If you do something that rests on someone else’s work and the experiment doesn’t come out, you have an incentive to say “Someonewrongonthenet’s hypothesis X implies A and B. Someonewrongonthenet showed A [citation], but I tried B and that means X isn’t completely right.”
Cue further investigation which eventually tosses out X. Whether or not A was a false positive is less important than whether or not X is right.
Yes, that’s possible. I’m not sure direct replication actually solves that issue, though—you’d just shift over to favoring false negatives instead of false positives. The existing mechanism that works against this is the incentive to overturn other people’s work.
Tell me, does anyone actually do what you think they should do? That is, based on a long chain of ideas A->B->C->D, none of which have been replicated, upon experimenting and learning ~Z, do they ever reject the bogus theory D? (Or wait, was it C that should be rejected, or maybe the ~Z should be rejected as maybe the experiment just wasn’t powered enough to be meaningful as almost all studies are underpowered or, can you really say that Z logically entailed A...D? Maybe some other factor interfered with Z and so we can ‘save the appearances’ of A..Z! Yes, that’s definitely it!) “Theory-testing in psychology and physics: a methodological paradox”, Meehl 1967, puts it nicely (and this is as true as the day he wrote it half a century ago):
To give a concrete example of why your advice is absurd and impractical and dangerous...
One of the things I am most proud of is my work on dual n-back not increasing IQ; the core researchers, in particular the founder Jaeggi, are well aware that their results have not replicated very well and that the results are almost entirely explained by bad control groups, and this is in part thanks to increased sample size from various followup studies which tried to repeat the finding while doing something else like an fMRI study or trying an emotional processing variant. So, what are they doing now, the Buschkuehl lab and the new Jaeggi lab? Have they abandoned DNB/IQ, reasoning that since “the first experiment was wrong, the second experiment will end up wrong too”? Have they taken your advice to “just update and move on, don’t obsess”? Maybe taken serious stock of their methods and other results involving benefits to working memory training in general?
No. They are now busily investigating whether individual personality differences can explain transfer or not to IQ, whether other tasks can transfer, whether manipulating motivation can moderate transfer to IQ, and so on and so forth, and reaching p<0.05 and publishing papers just like they were before; but I suppose that’s all OK, because after all, “there are so many followup studies which are explained by [dual n-back transferring] really well that it seems a bit silly to throw out the notion of [dual n-back increasing IQ] just because of that”.
Wait, I’m not sure we’re talking about the same thing. I’m saying direct replication isn’t the most useful way to spend time. You’re talking about systematic experiment design flaws.
According to your writing, the failures in this example stem from methodological issues (not using an active control group). A direct replication of the n-back-IQ transfer would have just hit p<.05 again, as it would have had the same methodological issues. Of course, if the methodological issue is not repaired, all subsequent findings will suffer from the same issues.
I’m strictly saying that direct replication isn’t useful. Rigorous checking of methods and doing it over again correctly where there is a failure in the documented methodology is always a good idea.
But the Jaeggi cluster also sometimes use active control groups, with various kinds of differences in the intervention, metrics, and interpretations. In fact, Jaeggi was co-author on a new dual n-back meta-analysis released this month*; the meta-analysis finds the passive-active difference I did, and you know what their interpretation is? That it’s due to the correlated classification of US vs international laboratories conducting particular experiments. (It never even occurred to me to classify the studies this way.) They note that sometimes psychology experiments reach different conclusions in other cultures/countries—which they do—so perhaps the lower results in American studies using active control groups is because Americans gain less from n-back training. The kindest thing I can say about this claim is that I may be able to falsify it with my larger collection of studies (they threw out or missed a lot).
So, after performing these conceptual extensions of their results—as you suggest—they continue to
So it goes.
* http://www.gwern.net/docs/dnb/2014-au.pdf / https://pdf.yt/d/VMPWmd0jpDYvZIjm / https://dl.dropboxusercontent.com/u/85192141/2014-au.pdf ; initial comments on it: https://groups.google.com/forum/#!topic/brain-training/GYqqSyfqffA
The first sentence in your dual-n-back article is:
If you believe that there’s a net gain of medium effect size then why do you think we should throw dual n-back under the bus?
You should probably have read part of the second sentence: “active vs passive control groups criticism: found, and it accounts for most of the net effect size”.
This is not good, and I guess is not what he meant.
You design the second experiment so that it aims to find something assuming the first was right, but if the first was wrong, it can expose that too. Basically, it has to be a stronger experiment than the first one.
Agreed, that is a better way to say what I was trying to say.
If you’re concerned about the velocity of scientific progress, you should also be concerned about wrong turns. A Type 1 Error (establishing a wrong result by incorrectly rejecting a null hypothesis) is, IMHO, far more damaging to science than failure to establish a correct result—possibly due to an insufficient experimental setup.
Yeah, there’s definitely an “exploration / rigor” trade-off here (or maybe “speed / accuracy”) and I’m not sure it’s clear which side we are erring on right now. I’m not terribly surprised that LW favors rigor, just due to the general personality profile of the users here, and that my favoring of exploration at the cost of being wrong a few times is in the minority.
I definitely think a rational agent would be more exploratory than science currently is, but on the other hand we’ve got systematic biases to contend with and rigor might offset that.
If you get a positive result, you run another test. If you keep getting positive results, you probably have breast cancer.
Similarly, if an experiment fails to replicate, you try again. If it replicates this time, then it’s probably fine. If it keeps failing to replicate, then there’s a problem.
At the very least, you need to try to replicate a random sample of studies, just to make sure there aren’t more false studies than you’ve been assuming.
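A small sketch of that repeated-testing logic, assuming (importantly) that each repeat is an independent draw, and using made-up error rates:

```python
# Made-up numbers for illustration: 1% base rate, 90% sensitivity,
# 10% false-positive rate, and each repeat test assumed independent.
def update(prior, positive, sensitivity=0.9, false_positive=0.1):
    p_if_real, p_if_not = (sensitivity, false_positive) if positive \
        else (1 - sensitivity, 1 - false_positive)
    num = p_if_real * prior
    return num / (num + p_if_not * (1 - prior))

p = 0.01
for i in range(3):                     # three positive results in a row
    p = update(p, positive=True)
    print(f"after positive #{i + 1}: {p:.2f}")   # 0.08, 0.45, 0.88
```

The same arithmetic, run with repeated negatives, is what would eventually sink a finding—provided the repeats really are independent.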
Not an expert on cancer, but I don’t think it works that way. I think the cancer test accurately tests a variable which is a proxy for cancer risk. So a patient who doesn’t have cancer but tests positive will continue testing positive, because the variable that the cancer test measures as a proxy for cancer is elevated in that patient.
Experiments do work that way, but I’m not arguing against that. I’m only arguing that direct replication isn’t a better use of resources than just going on to a followup experiment with a different methodology (unless direct replication is really easy and you can just have some students do it or something).
Is there only one kind of test? Couldn’t they find another proxy?
If the followup is testing the same thing with a different methodology, then the metaphor works. If you run followup experiments just to find more detail, it would be like someone testing positive for cancer so then you run a test for what kind of cancer. You’re assuming they have cancer when you run the second test, so the results could be misleading.
Generally an idea is considered well supported when multiple methodologies support it, yes. In the psychology lab I used to work in, at least, we never try to replicate, but we do try to show the same thing in multiple different ways. There are maybe 15 different experiments a year, but they’re generally all centered around proving or dis-proving a cluster of 2 or 3 broad, conceptually linked hypotheses.
Biology labs I’ve worked with do often do the whole “okay, the results are in and this is established now, let’s find additional detail” thing, but that’s because they were usually looking at much simpler systems, like a single protein or something, so they could afford to take liberties and not be so paranoid about experimental methods.
...and now you have two problems X-)
It’s not a matter of speed, it’s a matter of velocity. Going fast in the wrong direction is (much) worse than useless.
You are, actually, quite likely to. You start with a 5% chance under ideal circumstances and that chance only climbs from there. P-hacking is very widespread.
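One common mechanism is optional stopping; a quick sketch (the peeking schedule below is an assumption for illustration):

```python
# Optional stopping on a null effect: peek after every 10 subjects per
# group and stop at the first p < 0.05. The nominal 5% false-positive
# rate inflates to roughly 15-20% under this (assumed) schedule.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
runs, peeks, step = 5_000, 10, 10

def hacked_run():
    a, b = [], []
    for _ in range(peeks):
        a.extend(rng.normal(0.0, 1.0, step))
        b.extend(rng.normal(0.0, 1.0, step))  # there is no real effect
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True                        # stop early and "publish"
    return False

print(np.mean([hacked_run() for _ in range(runs)]))
```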
8-0 You think getting additional screenings after testing positive for cancer is “irrational”??
The process of screening itself involves risks, not to mention the misplaced stress and possibility of unnecessary surgery.
This is true for e.g. any visit to the doctor. Are you saying that it’s irrational to go for medical checkups?
In the cancer screening case, what do you think does the cost-benefit analysis say?
It would be irrational to go for medical check ups when they aren’t necessary—if you did it every 3 days, for example.
I’m looking at this from a birds eye view. A lot of people get unnecessary screenings, which tell them information which is not worth acting upon no matter whether it says that it is positive or negative, and then start worrying and getting unnecessary testing and treatment. Information is only useful to the extent that you can act upon it.
And from up there you take it upon yourself to judge whether personal decisions are rational or not? I think you’re way too far away for that.
That’s a different issue. In a post upstream you made a rather amazing claim that additional tests after testing positive for cancer on a screening would be irrational. Do you stand by that claim?
Er...I think that’s a little harsh of you. Overscreening is recognized as a problem among epidemiologists. When I say overscreening is a problem, I’m mostly just trusting expert consensus on the matter.
I stand by that a lot of smart people who study this issue believe that in actual medical practice, these screenings are either a problem in themselves, or that the information from the screenings can lead people to irrational behavior, and I do trust them.
But really, that was just an illustrative example used to steelman Mitchell. You don’t have to accept the actual example, just the general concept that this sort of thing can happen.
Rationality does not specify values. I rather suspect that the cost-benefit analysis that epidemiologists look at is quite different from the cost-benefit analysis that individuals look at.
LOL. Don’t bother your pretty little head with too much information. No, you don’t need to know that. No, you can’t decide what you need to know and what you don’t need to know. X-/
Scientists, as a community of humans, should expect their research to return false positives sometimes, because that is what is going to happen, and they should publish those results. Scientists should also expect experiments to demonstrate that some of their hypotheses are just plain wrong. It seems to me replication is only not very useful if the replications of the experiment are likely prone to all the same crap that currently makes original experiments from social psychology not all that reliable. I don’t have experience, or practical knowledge of the field, though, so I wouldn’t know.
When natural scientists attempt to replicate famous experiments where the original result was clearly correct, with what probability do they tend to succeed? Is it closer to 1 than, say, .7?
I’d think that “famous experiments where the original result was clearly correct” are exactly those whose results have already been replicated repeatedly. If they haven’t been replicated they may well be famous—Stanford prison experiment, I’m looking at you—but they aren’t clearly correct.
I was thinking more “What is the error rate in replication experiments when we know the results from the original experiment were correct?” So if mixing X and Y under certain conditions has to yield Z, how often when scientists actually try to do this do they get Z?
The error rate in replication experiments in the natural sciences is expected to be much much lower than in the social sciences. Humans and human environments are noisy and complicated. Look at nutrition/medicine—it’s taking us decades to figure out whether some substance/food is good or bad for you and under what circumstances. Why would you expect it be easier to analyze human psychology and behavior?
If you want to know whether food is good or bad, you have to look at mortality, which means you might have to wait a decade.
A lot of psychology experiments claim effects over much shorter timeframes.
I think he is more suggesting that the number of confounding factors in psychology experiments is generally far higher than in the natural sciences. The addition of such uncontrollable factors leads to a generally higher error rate in human sciences.
The number of confounding factors isn’t that important if it’s possible to do controlled experiments that control for them. Nutrition science has the problem that you usually can’t do good controlled experiments or those are very expensive.
Obviously if you can control for a confounding factor then it’s not an issue; I was simply stressing that the nature of human sciences means that it is effectively impossible to control for all confounding factors, or even be aware of many of them.
To the extent that’s true, careful replication of studies to identify those factors is important if you don’t want to practice what Feynman described as Cargo Cult science. If you follow Feynman’s argument, physicists would also get a bunch of bad results if they worked with the scientific standards used in psychology.
Feynman on rat psychology:
Nutrition is really a different case than a lot of psychology. There are questions in psychology such as whether doing certain things to a child in its childhood affects whether that child grows into a healthy adult. Those questions are hard to investigate scientifically because of time lag. The same isn’t true for many psychology experiments.
I don’t think we actually disagree on anything, the only point I was making was that your reply to Lightwave, while accurate, wasn’t actually replying to the point he made.
I did reply to his point. He spoke about nutrition science. That field has its own problems that psychologists don’t have to deal with. It’s a bad example if he wanted to make the point you think he wanted to make.
Well fair enough. His use of nutrition science as an example was probably poorly chosen.
I’ve suggested on LW before that most attempts at physics experiments are wrong, if one counts physics students’ attempts. The standard reaction to a student getting a counterintuitive result is, “well, obviously they messed up the experiment”. I notice I feel OK with that response in the case of physics but don’t like Mitchell trying it for psychology.
(I wonder whether biology students have to count chromosomes.)
Students are particularly bad at experimentation (which is why they have to take those labs in the first place), and the experiments they do are selected for being particularly fundamental and well-understood (in particular, they have already been replicated lots of times). I think this is a more important difference than physics versus psychology.
I don’t think this is true at all. His points against replicability are very valid and match my experience as a researcher. In particular:
This is a very real issue and I think that if we want to solve the current issues with science we need to be honest about this, rather than close our eyes and repeat the mantra that replication will solve everything. And it’s not like he’s arguing against accountability. Even in your quoted passage he says:
Now, I think he goes too far by saying that no negative findings should be published; but I think they need to be held to a high standard for the very reason he gives. On the other hand, positive findings should also be held to a higher standard.
Note that there are people much wiser than me (such as Andrew Gelman) who disagree with me; Gelman is dissatisfied with the current presumption that published research is correct. I certainly agree with this but for the same reasons that Mitchell gives, I don’t think that merely publishing negative results can fix this issue.
Either way, I think you are being quite uncharitable to Mitchell.
What is the purpose of an experiment in science? For instance, in the field of social psychology? For instance, what is the current value of the Milgram experiment? A few people in Connecticut did something in a room at Yale in 1961. Who cares? Maybe it’s just gossip from half a century ago.
However, some people would have us believe that this experiment has broader significance, beyond the strict parameters of the original experiment, and has implications for (for example) the military in Texas and corporations in California.
Maybe these people are wrong. Maybe the Milgram experiment was a one-off fluke. If so, then let’s stop mentioning it in every intro to psych textbook. While we’re at it, why the hell was that experiment funded, anyway? Why should we bother funding any further social psychology experiments?
I would have thought, though, that most social psychologists would believe that the Milgram experiment has predictive significance for the real world. A Bayesian who knows about the results of the Milgram experiment should better be able to anticipate what happens in the real world. This is what an experiment is for. It changes your expectations.
However, if a supposedly scientific experiment does nothing at all to alter your expectations, it has told you nothing. You are just as ignorant as you were before the experiment. It was a waste.
Social psychology purports to predict what will happen in the real world. This is what would qualify it as a science. Jason Mitchell is saying it cannot even predict what will happen in a replicated experiment. In so doing, he is proclaiming to the world that he personally has learned nothing from the experiments of social psychology. He is ignorant of what will happen if the experiment is replicated. I am not being uncharitable to Mitchell. He is rejecting the foundations of his own field. He is not a scientist.
http://en.wikipedia.org/wiki/Milgram_experiment#Replications_and_variations —
(emphasis added)
Why is it more likely that the followup experiment was flawed, rather than the original? Are we giving a prior of > 50% to every hypothesis that a social scientist comes up with?
I disagree. Let’s look at this section again:
Contrast this to:
From here, linked before on LW here.
The first view seems to have the implied assumption that false positives don’t happen to good researchers, whereas the second view has the implied assumption that theories and people are separate, and people should follow the facts, rather than the other way around.
But perhaps it is the case that, in social psychology, the majority of false positives are not innocent, and thus when a researcher’s results do not replicate it is a sign that they’re dishonest rather than that they’re unlucky. In such a case, he is declaring that researchers should not try to expose dishonesty, which should bring down opprobrium from all decent people.
The goal is to set up the experiments to make it solely about the results and not about colleagues. If ‘scientific integrity’ means sloppy, porous experimental setup, then impugning this is not a bad thing. Ideally the experimental design and execution should transcend the question of the researchers’ motives.
Do you agree with the empirical claim about the frequencies of false positives in initial studies versus false negatives in replications?
This is why we can’t have social science. Not because the subject is not amenable to the scientific method—it obviously is. People are conducting controlled experiments and other people are attempting to replicate the results. So far, so good.
So, you say people are trying the scientific approach. My guess is, the nature of the problem is such that nothing much came out of these attempts. No great insights were gained, no theories were discovered. Real scientists had nothing to show for their efforts, and this is why these fields are now not owned by real scientists, but by people with other skills.
For an imperfect analogy, say an ancient civilization left a lot of incomprehensible texts. Say a lot of effort went into deciphering these texts, and to everyone’s surprise it was discovered that some of these texts were English encrypted with some cyphers that the cryptographers cracked. So people skilled in the mathematics of cryptography started working on other texts, and many of them were consequently decrypted. But there were some classes of documents that the cryptographers were helpless against. Maybe these texts were really random garbage, or maybe they were encrypted with cyphers which are well beyond the current reach of our technology.
Now imagine a crook comes and says he has found the true meaning of a text. He calls himself a cryptographer and produces an “explanation” which is drivel but which superficially sounds like what the cryptographers say. If he tries that with one of the texts that have been decrypted for real, it will be hard for him to compete with the actual solution. But for the texts that remain a mystery, crooks will be the only game in town.
This may have happened.
I think someone should mention Harry Collins and Trevor Pinch’s book The Golem here. It’s a collection of episodes from the history of science. The general theme is that in practice, new discoveries do not involve a clear-cut observation followed by theorizing; instead there is a lot of squabbling over whether the researchers involved carried out their experiments correctly, and these kinds of feuds can persist for a scientific generation.
My view is that this makes replication attempts all the more important. But it also shows that some resistance and recriminations and mudslinging are probably to be expected—because the human status game of “are you a good enough scientist that we can really trust you” is a very integral part of how humans do science.
In the second paragraph of the quote the author ignores the whole point of replication efforts. We know that scientific studies may suffer from methodological errors. The whole point of replication studies is to identify methodological errors. If they disagree then you know there is an uncontrolled variable or methodological mistake in one or both of them; further studies and the credibility of the experimenters are then used to determine which result is more likely to be true. If the independent studies agree then it is evidence that they are both correct.
The author also argues that replication efforts are biased because they are mostly made by people who disagree with the original study. That seems like a valid point.
Specifying designs in advance is a good idea, though not original.
Original, non-replication studies are mostly made by people who agree with what their studies are showing. (Also, publication bias.) So this is not a reason to think replication studies are particularly biased.
While I agree that this guy needs to hand in his “Scientist” card, this is an individual who no more reflects on his field than any other individual does on theirs.
There was a notable climate scientist whose response to people asking for his data was literally “no, you’ll just try to use it to prove me wrong”.
Edit: exact quote: “Even if WMO agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.” http://www.theregister.co.uk/2009/08/13/cru_missing/
That doesn’t make all climate scientists kooks any more than this guy makes all social scientists kooks.
There will always be people who manage to get research grants despite not really getting the whole point of science.
People who view being “right” as more important than having the correct answer.
People who respond to having authority by deciding that questioning authority, questioning them, is wrong and bad.
You were down-voted in part because of bad grammar (who’s=>whose) and the absence of citations.
edited