The Just World Hypothesis

Sometimes bad things happen to good people. Maybe even most of the time; it’s hard to know, and even harder to create common knowledge on the topic, because if it is so we don’t want to know, and we tell stories to cover it up when it happens.

Back in the golden age of psychology, long before the replication crisis, the scientific method was understood very differently from today. Effect sizes were expected to be visible with the naked eye, not just statistically significant, and practices such as IRBs, peer review and even the use of control groups were much more optional. For instance, the original Milgram Experiment lacked a control group, but instead of saying that nothing had been learned, or suppressing it on dubious ethical grounds, the psychological community investigated an enormous number of variations on the Milgram experiment in order to tease out the impacts of slight changes in design, which were statistically compared with one another.

In 1965 Melvin J. Lerner discovered that experimental subjects disliked the people whom they saw subjected to electric shocks. This effect was alleviated when the experimental subjects were able to offer the presumed victims appropriate compensation. Apparently, they wanted to make the situation fair. Unfortunately, if they couldn’t make it fair with compensation they wanted to make it fair by claiming that the victim deserved it. Lerner and others followed up with a series of investigations of victim blaming. They discovered that the phenomenon is pervasive, robust and measurable via psychological surveys. When readers are given a story about a date, higher scores in ‘belief in a just world’ are associated with a greater tendency to see whatever ending the reader is given as following with high probability from the earlier events, even when the earlier events are identical and only the endings differ.

“Might people on the internet sometimes lie?… If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.”

-- Slate Star Codex, 12/12/2016

One advantage of just world belief is that it makes it much easier to believe anything at all. If lying is common, and is typically rewarded, the supposed facts out of which one makes sense of the world are called into much greater uncertainty. If you can’t trust the apparently expert authorities you grew up with to initiate you into the truth, or the best approximation available, then the process of seeking career advancement decouples almost entirely from the process of understanding, and the world appears far less knowable.

A number of the scientific concepts discussed in this year’s essays seem to me to be specific corrections to the Just World Hypothesis. Stigler’s Law of Eponymy, for instance, could be seen as the assertion that mathematical attribution is unjust, even on those occasions when historical scholarship enables objective investigation. The Fundamental Attribution Error is the error of believing that in the typical case the career and social paths that lead to power will select for and cultivate justice, rather than selecting against it, and thus that the right person could ascend to power without acting unjustly, or perhaps that someone could act unjustly for years in order to ascend to power only to turn around and behave justly, due to dispositional factors, once power is achieved. Similarly, as an intellectual, or in almost any social context involving discussion, a person without the need for closure might appear hopelessly uninformed, uncooperative, and generally incapable of participation.

Without a just world, the hope of science, to gradually advance in a collaborative intellectual project, updating a shared set of beliefs, appears chimerical. One might face, for instance, a crisis of replication during which the whole content of one’s field, the fruit of many lives of work, evaporates despite the community apparently making the strongest efforts to avoid type I errors. In such a situation, one might be particularly determined to avoid the type I error of disbelieving in a just world, and with it the possibility of joint intellectual endeavor. If this is the case, deliberate ignorance of an unjust world, rather than Bayesian updating of one’s belief on the matter, might turn out to be the dominant strategy for participation in an intellectual community, whether it be an academic profession, political party, business or church.
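To make the contrast concrete, here is a minimal illustrative sketch of Bayesian updating of a belief in a just world, as opposed to the deliberate ignorance described above. It is not from the original argument; the prior, the likelihoods, and the stream of observations are all invented for illustration.

```python
# Illustrative sketch (all numbers invented): Bayesian updating of
# P(world is just) from observed outcomes, versus the deliberate-ignorance
# strategy of refusing to update at all.

def bayes_update(prior: float, p_obs_if_just: float, p_obs_if_unjust: float) -> float:
    """Posterior P(just | observation) via Bayes' rule."""
    numerator = prior * p_obs_if_just
    return numerator / (numerator + (1 - prior) * p_obs_if_unjust)

# Suppose a bad outcome for a good person is rare if the world is just
# (p = 0.1) but common if it is unjust (p = 0.6).
p_just = 0.9  # a charitable prior
for i in range(5):  # five bad outcomes for good people in a row
    p_just = bayes_update(p_just, 0.1, 0.6)
    print(f"after observation {i + 1}: P(just) = {p_just:.3f}")
# P(just) falls from 0.600 to about 0.001 over the five observations.
# The deliberate-ignorance strategy simply never runs the update,
# so P(just) stays at 0.9 whatever happens.
```

The point of the sketch is that an honest Bayesian’s belief collapses quickly under repeated adverse evidence, which is exactly why refusing to look may dominate as a strategy for remaining in the community.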

“See no evil, hear no evil, speak no evil”

-- 17th-century carving over a door of the famous Tōshō-gū shrine in Nikkō, Japan

“In a country well governed, poverty is something to be ashamed of. In a country badly governed, wealth is something to be ashamed of.”

-- Confucius

“If you see fraud and do not say fraud, you are a fraud”; and, in a different context, “You may not be able to change the world but can at least get some entertainment and make a living out of the epistemic arrogance of the human race.”

-- Nassim Nicholas Taleb

As we can see above, Asia’s major spiritual traditions clash over whether to accept an unjust world graciously, compassionately turning a blind eye to the unjust, or to fight it proudly, in full knowledge that the way to power lies elsewhere. Our own intellectual tradition seems equally divided, even within a single voice. Signaling intensifies our difficulties, since insofar as others expect you to act in a self-interested manner, you may have to signal a belief in a just world in order to be listened to at all. We should expect both the just and the unjust to collude in maintaining a belief in a just world, whatever the evidence to the contrary. Essays that violate that expectation, such as this one, are Bayesian evidence for something, but one may have to think very hard in order to know what.