Supporting the underdog is explained by Hanson’s Near/​Far distinction

Yvain can’t make head or tail of the apparently near-universal human tendency to root for the underdog. [Read Yvain’s post before going any further.]

He uses the following plausible-sounding story, set in a small hunter-gatherer tribe in our environment of evolutionary adaptedness (EEA), to illustrate why support for the underdog seems to be an antiprediction of standard evolutionary psychology:

Suppose Zug and Urk are battling it out for supremacy in the tribe. Urk comes up to you and says “my faction are hopelessly outnumbered and will probably be killed, and our property divided up amongst Zug’s supporters.” Those cave-men with genes that made them support the underdog would join Urk’s faction and be wiped out. Their genes would not make it very far in evolution’s ruthless race, unless we can think of some even stronger effect that might compensate for this.

Yvain cites an experiment in which people supported either Israel or Palestine depending on which side they saw as the underdog. This seems to contradict the claim that the human mind is well adapted to its EEA.

Many commenters offered the “truel” as an explanation: in a game of three players, it is rational for the weaker two to team up against the stronger one. But the choice of which faction to join is not a truel between three roughly equal players. As an individual you will have almost no impact on which faction wins, and if you join the winning side you won’t necessarily be next on the menu: you will have about as much chance as anyone else in Zug’s faction of doing well if there is another mini-war. Furthermore, if this theory were correct, we would expect to see soldiers defecting away from the winning side in the closing stages of a war… which, to my knowledge, is the opposite of what happens. People who proffered this explanation are guilty of not being more surprised by fiction than by reality.
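For readers who haven’t met the idea, here is a minimal Monte Carlo sketch of the truel intuition (the hit probabilities, the weakest-fires-first order, and the target-the-strongest rule are my own made-up assumptions, not anything from Yvain’s post):

```python
import random

def simulate_truel(accuracies, trials=100_000):
    """Estimate survival probabilities in a sequential truel where every
    player fires at the most accurate surviving rival."""
    wins = [0] * len(accuracies)
    for _ in range(trials):
        alive = list(range(len(accuracies)))
        while len(alive) > 1:
            # Snapshot this round's firing order: weakest shooter goes first.
            for shooter in sorted(alive, key=lambda i: accuracies[i]):
                if len(alive) == 1:
                    break
                if shooter not in alive:
                    continue  # already killed earlier this round
                # Gang-up logic: aim at the strongest surviving rival.
                target = max((i for i in alive if i != shooter),
                             key=lambda i: accuracies[i])
                if random.random() < accuracies[shooter]:
                    alive.remove(target)
        wins[alive[0]] += 1
    return [w / trials for w in wins]

# Made-up accuracies: player 0 is "Zug" (strong), player 2 is the weakest.
print(simulate_truel([0.9, 0.6, 0.3]))
```

With these numbers the 0.9 shooter survives least often, which is why the truel intuition feels right. But notice what the model requires: three parties of comparable firepower, each able to shift the outcome by choosing a target, which is exactly what a lone tribesman picking a faction is not.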

SoulessAutomaton comes closest to the truth when he makes the following statement:

there may be a critical difference between voicing sympathy for the losing faction and actually joining it and sharing its misfortune.

Yes! Draw Distinctions!

I thought about Yvain’s puzzle before reading the comments, and decided that Robin Hanson’s Near/Far distinction is the answer. Hanson characterizes the two modes as follows:

All of these bring each other more to mind: here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits.

Conversely, all these bring each other more to mind: there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts, central global symbolic concerns, confident predictions, polarized evaluations, socially distant people with stable traits.

When you put people in a social-science experiment room and tell them, in the abstract, about the Israel/​Palestine conflict, they are in “far” mode. This situation is totally unlike having to choose which side to join in an actual fight, where your brain goes into “near” mode and you quickly (I predict) join the likely victors. This resolves the apparent contradiction between the Israel experiment and a real fight between Zug’s faction and Urk’s faction.

In an extremely unbalanced conflict that you are “distant” from, there are various reasons I can think of for supporting the underdog, but the common theme is that when the mind is in “far” mode, its primary purpose is to signal how nice it is, rather than to actually acquire resources. Why do we want to signal to others that we are nice people? Because they are then more likely to cooperate with us and trust us! If evolution built a cave-man who went around telling other cave-men what a selfish bastard he was… well, that cave-man wouldn’t last long.

When people support, for example, Palestine, they don’t say “I support Palestine because it is the underdog”; they say “I support Palestine because they have the ethical high ground: they are in the right, and Israel is in the wrong”. In doing so, they signal that they support people for ethical reasons rather than out of self-interest. Someone who is guided by ethical principles rather than self-interest makes a better ally. Conversely, someone who supports the stronger side signals that they are more self-interested and less concerned with ethics. Admittedly, this is a signal you can fake to some extent: there is probably a tradeoff between the risk that the winning side will punish you and the signalling value of backing someone for ethical reasons. When the conflict is very close, the probability of becoming involved makes the signal too expensive. When the conflict is far, the signal is almost (but not quite) free.
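To make that tradeoff concrete with a toy model (my own construction; nothing this precise appears in Yvain’s post or Hanson’s): let S be the value of the ethical signal, p the probability that the conflict eventually drags you in, and C the cost of having backed the loser if it does. Backing the underdog pays off roughly when

S > p × C

which recovers the near/far asymmetry: a large p in a close conflict kills the signal, while p close to zero in a distant one makes it cheap.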

You also put yourself in a better bargaining position for when you meet the victorious side: you can complain that they don’t really deserve all their conquest-acquired wealth because they stole it anyway. In a world where people genuinely think that they are nicer than they really are (which is, by the way, the world of humans), being able to frame someone as being the “bad guy” puts you in a position of strength when negotiating. They might make concessions to preserve their self-image. In a world where you can’t lie perfectly, preserving your self-image as a nice person or a nice tribe is worth making some concessions for.

All that remains to explain is what situation in our evolutionary past corresponds to hearing about a faraway conflict (like Israel/​Palestine for Westerners who don’t live there and have no real stake in it). This I am not sure about: perhaps it would be like hearing of a distant battle between two tribes? Or a conflict between two factions of your tribe that unfolds in such a way that you cannot take sides?

My explanation predicts that in a social-science experiment where people felt close enough to the conflict to be personally involved, they would support the likely winner. Such an experiment might require making people genuinely frightened, though, and so might not pass an ethics committee.

The only direct experience I have of “near” tribal conflicts is from school: whenever some poor underdog was being bullied, I felt compelled to join in with the bullying, in exactly the same “automatic” way that I feel compelled to support the underdog in “far” situations. I just couldn’t help myself.

Hat-tip to Yvain for admitting he couldn’t explain this. The path to knowledge is paved with grudging admissions of your ignorance.