Beware of WEIRD psychological samples
Most of the research on cognitive biases and other psychological phenomena that we draw on here is based on samples of students at US universities. To what extent are we uncovering human universals, and to what extent facts about these WEIRD (Western, Educated, Industrialized, Rich, and Democratic) sample sources? A paper in press in Behavioral and Brain Sciences reviews the evidence from studies that reach outside this group and highlights the many instances in which US students are outliers in crucial studies in behavioural economics.
Epiphenom: How normal is WEIRD?
Henrich, J., Heine, S. J., & Norenzayan, A. (in press). The Weirdest people in the world? (PDF) Behavioral and Brain Sciences.
Broad claims about human psychology and behavior based on narrow samples from Western societies are regularly published in leading journals. Are such species-generalizing claims justified? This review suggests not only that substantial variability in experimental results emerges across populations in basic domains, but that standard subjects are in fact rather unusual compared with the rest of the species—frequent outliers. The domains reviewed include visual perception, fairness, categorization, spatial cognition, memory, moral reasoning and self‐concepts. This review (1) indicates caution in addressing questions of human nature based on this thin slice of humanity, and (2) suggests that understanding human psychology will require tapping broader subject pools. We close by proposing ways to address these challenges.
It’s not necessarily a bad thing that 67% of studies are done on psychology undergrads. 90% or so of medical studies are done on mice. If you find something big, you should go outside your cheapest testing group (be it mice or psychology undergrads), but if something fails to produce interesting results even on them, you just saved yourself a lot of money and effort—and failures will be much more common than interesting finds.
Does this suggest there's a market for repeating experiments, cheaply, in rural India? This looks like it'd yield some easy research opportunities.
There are also language problems here: most psychological "experiments" consist of giving people questionnaires, followed by data mining of the responses. And questionnaires are very language- and culture-dependent.
I know it's just anecdotal evidence, but I know someone who, for their MSc thesis, tried to translate a standard English questionnaire (mindfulness or somesuch) into Polish, testing both versions on students who majored in English and so were supposedly fluent in both languages. All the usual controls were used: randomized questionnaire order, multiple independent translations. In spite of all that, correlations between answers to the exact same question in Polish and English were less than impressive (much lower than the usual test-retest correlation), and for many questions every single translation yielded a correlation not statistically significantly different from zero.
I think these problems would make a far better thesis than what actually got written, but as a rule failures don't get written up or published.
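The check described above can be sketched numerically: compute the correlation between paired answers in the two languages and test the null hypothesis of zero correlation. A minimal sketch, assuming Likert-style answers from the same respondents (the data and function name here are illustrative, not from the study):

```python
import numpy as np
from scipy import stats

def translation_agreement(answers_a, answers_b):
    """Pearson correlation between answers to the same item in two
    languages, plus the p-value for the null of zero correlation."""
    r, p = stats.pearsonr(answers_a, answers_b)
    return r, p

# Illustrative data: 1-5 Likert answers from the same ten respondents
# to one item in English and its Polish translation.
english = np.array([4, 5, 3, 4, 2, 5, 3, 4, 2, 3])
polish = np.array([3, 4, 4, 2, 3, 5, 2, 4, 3, 2])

r, p = translation_agreement(english, polish)
# A small r with a p-value above 0.05 is the "not significantly
# different from zero" pattern the comment describes.
```

A low r here would be evidence of a translation (or framing) problem, since the same person is answering what is nominally the same question twice.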
Even worse than language difficulties, I would think, would be large differences in cultural framing of questions. Every culture brings a different set of background issues to the types of questions asked in many psychological studies. The problem has mostly been solved for IQ type tests, but, even without considering the amount of work involved in developing the cross-cultural IQ tests, framing would be a bigger problem for personality and other “softer” tests. (I have, but have only leafed through, Jensen’s “Bias in Mental Testing”; I can already tell it’s going to take a lot of work, and it’s a bit dated, so I’ve been putting it off since it’s only a peripheral interest.)
What evidence do you have that “the problem has mostly been solved for IQ type tests”?
Sorry, that sounded challenging, and it isn’t meant to be. Would you please point me to any books, papers, and so on?
"Arthur Jensen Replies to Stephen Jay Gould: The Debunking of Scientific Fossils and Straw Persons" (http://www.debunker.com/texts/jensen.html) is a good place to start. It's a detailed criticism of Gould's "The Mismeasure of Man" by one of the best psychometricians around. It's got a good bibliography, but is rather dated, being from 1982. No matter what you may think of his politics, Steve Sailer also has a lot of good, and more recent, information in his essays on IQ, especially on international comparisons, on his website, www.isteve.com. Richard Lynn's books are supposed to be very good also, but I haven't read them (too many interests, too little time and money).
The very title “debunking of scientific fossils and straw persons” makes it sound like it has limited use. Johnicholas asked for positive statements, but a debunking is purely negative. Just because Gould lied about X doesn’t make his position wrong.
I suspected from your first comment that all you meant was that people who attempt to prove cultural bias in IQ tests have failed. That is certainly true, with some surprising findings, like that the American black-white gap is larger on questions that are, on the face of them, more culturally neutral. But relying on an opposition you don’t trust to do the research is a highly biased search strategy. It is not a great political victory to say that Raven’s matrices are culturally biased, so few say it, but that doesn’t make it false.
Right now, my best source for “answers to Arthur Jensen” is Cosma Shalizi. My understanding is that performance on IQ tests is mostly related to culture—even though that was (to some extent) Gould’s position.
Shalizi simply doesn’t say that.
There are two things you could mean by it. One is that some cultures make you smart. The other is that the IQ test mostly screens for culture and not useful abilities. It is certainly true that culture affects the difference between performance on Raven's matrices and other tests. In particular, the Flynn effect is stronger for Raven's matrices than for other tests. Also, sub-Saharan Africans do dramatically worse on RM than on other estimates, where they're closer to African-Americans (who do slightly worse on RM than on common tests). In applying this information to the two possibilities about culture, you'd have to decide which testing approach you liked better, which would depend on what you're trying to measure. "g" is not the correct answer to this question.
Yes, but then you have to send the researchers to India. (Unless you also recruit Indian psychologists who already live there to do your replications.)
Gap year students! They can dig some irrigation ditches while they’re there.
Critically, they don’t need to devise their own experiments; they’re effectively doing the leg-work for more senior researchers back in the UK/US, and also making use of the language & cultural skills they’ve learnt for their gap year/volunteering. Also, the data they gather can be used both to judge the hypothesis the test was originally investigating, and reveal differences between cultures and nations.
Ooh, this could be a scholarship thing. “Study Abroad And Do Replication Studies Fund”. Give ’em a grand apiece, no essay required, I bet it would work.
I’d be worried about trusting the students. It’s like giving them a test and your answer key, and telling them ‘hey, we did our best in getting the right answers, but please work through all the problems again and see whether we made any mistakes’. This sort of thing only works if you don’t get too much garbage in your replications.
The students might be honest enough to actually do all the work professionally, but I’m not sure I’d trust American students (a summer/semester isn’t that long, and if they’re in India, there are things to do there that could fill a lifetime; the temptation to just fudge up some data and go do all those awesome things would be tremendous), much less Indian ones.
You have way too much trust in the professors. Just a few students naive enough to do what they’re supposed to would be an improvement on the status quo.
The problem is, we already have replications being done by Indian and Chinese scientists and… they’re not very good. Here’s one: “Local Literature Bias in Genetic Epidemiology: An Empirical Evaluation of the Chinese Literature”, 2005:
The huge amount of data that could be gathered should allow for checking; data that is both different from what westerners would expect, and consistent over several independent students, is likely to be accurate. Or at least, not inaccurate because of lazy students.
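That consistency check could be as simple as flagging a set of independent replications whose effect estimates cluster together. A toy sketch (the tolerance and the numbers are made up for illustration, not drawn from any real replication):

```python
import statistics

def consistent_replications(effects, tol=0.1):
    """Flag a set of independently replicated effect estimates as
    consistent if every estimate lies within `tol` of their median."""
    m = statistics.median(effects)
    return all(abs(e - m) <= tol for e in effects)

# Three students independently replicate a study. Agreement among
# them, even where they differ from the original Western result,
# suggests the data are genuine rather than fudged.
agree = consistent_replications([0.12, 0.15, 0.10])       # True
disagree = consistent_replications([0.12, 0.45, 0.10])    # False
```

Genuine cross-cultural differences would show up as estimates that agree with each other while disagreeing with the original; a lazy or fabricated replication is unlikely to match the others by chance.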
This may be the place to make an observation which is still growing in me, so I can only state it in a very preliminary way for now. The great historical precursor is to be found in the psychoanalytic subculture which sprung up after Freud, with all its competing schools. Two facts stand out: these people believed that they understood the human mind, and their theories shaped their interactions with each other. (As when one school’s rejection of the theories of another was itself explained psychoanalytically.)
There are new conceptions of human nature springing up from genetics, neuroscience, and cognitive science, and these conceptions are spreading into the culture at large. The most prominent vector for the spread of these ideas is the mass media. But enthusiast online communities like this one are going to be far more demonstrative of the social and psychological effects which result from taking these new ideas utterly to heart.
Two other examples come to mind. There is a sub-blogosphere focused on a particular conception of male and female psychology, centered on the blogger Roissy, which owes a lot to evolutionary psychology. And there is another sub-blogosphere focused on a new racial politics, centered on the blogger Steve Sailer, which owes a lot to human genomics. Together with the bias/rationality focus found here and at Overcoming Bias, these blog communities are not just an exercise in trying to assimilate new discoveries and live their implications, they are themselves little sociological case studies in the impact of science on human subjectivity, individually and collectively.
Beyond sounding generic warnings about the lesson of history (people have repeatedly thought they had things figured out when they didn't), and reminding everyone of the skeptical abyss beneath almost all assertions of what is so, I do not really have a way to inoculate you against the errors that come from embracing your favorite paradigm, whatever it is. This post gave me an opportunity to sound the alarm only because it exposes just one of the ways in which what is taken to be new knowledge, hereabouts, may not be knowledge at all. I suppose one principle is to keep an eye on whatever part of the culture you think epitomizes the old beliefs, the old way of thinking that has been superseded: if anyone will escape the pathologies that accompany embracing the new, if anyone knows things that you cannot believe to be true because of what you "know", it will be Them, the Opposition, whoever they may be. And specifically with respect to evolutionary psychology, just to throw an opposite perspective into the ring, I will mention Jeremy Griffith, a totally obscure Australian thinker who puts the biohistorical perspective on human cognition and human values to a completely different use than anyone else. He has his own problems as a thinker, but perhaps he can be a corrective to some of the excesses of the ev-psych outlook.
In the end, though, I guess we have no choice but to endure whatever downsides accompany the outlooks we choose, if we really do insist on holding those outlooks. So, pointless best wishes to us all, as we suffer the travails of inevitable cause and effect. :-)
I’m a big fan of evolutionary psychology, including practical applications of it. Roissy makes a good start attempting to apply it, but he falls prey to major ideological errors, overgeneralization, and oversimplification. I see no evidence that he has read more than a few popular books on the subject. He has made the discovery that even naive applications of evolutionary psychology can be incredibly powerful in the practical world, then falls into the naive realist pit and assumes that his theories are true just because they work better than the conventional alternatives. Furthermore, he fails at ethics really, really badly. I’m being kinda vague, but I’ll go into further detail upon request.
Evolutionary psychology is great. Applied evolutionary psychology is great. Roissy just isn’t doing it right.
Given the emerging influence of ‘game’ bloggers such as roissy and their often disappointing interaction with so-called “men’s rights” activism, (see e.g.  and resulting comments, ) I think it would be useful if you did take the time to write an extended critique of them. Are you still affiliated with feministcritics.org?
I am indeed planning such an extended critique. I’m just deciding whether it would make sense to post it here, or FC.org, or somewhere else entirely.
And yes, I’m still one of the bloggers there, though I am sort of on hiatus.
Upvoted for this. Now to get it down to 140 characters …
Edit: Posted. Suggestions for how to cut it down enough to add credit welcomed.
It’s hard to discuss the subject with the debate becoming emotional, but let me just say that Roissy’s goals are to be an entertaining writer, to succeed at picking up women, and to debunk false commonsense notions of dating, through real-life experience.
He’s not trying to submit a peer-reviewed paper on evo psych to a rationality audience. To judge him on that basis is to kind of miss the point.
(Ethics is a whole separate question. But then, Stalin was an atheist too, wasn't he?)
Original paper and Epiphenom post both say “Western, Educated, Industrialized, Rich, and Democratic” where you have “White, Educated, Intelligent, Rich, and Democratic”. ciphergoth, if the change was deliberate then I’d be interested to know why; if not, I’d be interested in any speculations you have about why :-).
Possible partial explanation: the paper uses WEIRD as a description of societies; if you were thinking of individuals then “Western” and “Industrialized” would be odd words to use, “White” and “Intelligent” less so. Possible diagnostic question: what meaning of “Democratic” was foremost in your mind? (I initially read it as a US-centric description of political stance.)
I had in mind an image of a preppy-looking US student doing a psychology test, so yes, I was imagining individuals rather than societies. I read Democratic as a description of the society but the reading of it as an individual political leaning did cross my mind.
Isn't it possible that some people just want to make a name for themselves so badly that they will purposefully search for an opposing or radical alternative solution? Everything is so competitive in the Western world that it wouldn't surprise me if the dynamics of problem solving are misused for a popularity contest rather than for getting to the core of a real problem. I often wonder whether researchers latch on to an interesting subject and inadvertently shift their focus from something of real value to simply having something important-sounding to say, just so it appears that they are being original and creative.
There are many pressures to perform at any level of the spotlight, and those aspiring to join the club, such as US university students, would naturally be a little more vulnerable to having their intent seeded with a burning desire to impress. Likewise, if one were to calculate how many grad students are doing research in the humanities, social sciences, psychology, and so on, it immediately becomes obvious that the chance of standing out in the crowd would greatly increase were one to produce some fresh content. I just worry that the content we are being fed is lacking in substance.
Economists Steve Levitt and John List have a nice paper about generalizing from social/cognitive science laboratory experiments to the real world. They even write down a model. http://pricetheory.uchicago.edu/levitt/Papers/jep%20revision%20Levitt%20&%20List.pdf