Model-building and scapegoating

Link post

When talking about undesirable traits, we may want to use simple labels. On one hand, simple labels have the virtue of efficiently pointing to an important cluster of behavioral predictions. On the other, they tend to focus attention on the question of whether the person so described is good or bad, instead of on building shared models about the causal structure underlying the perceived problem.

Slate Star Codex recently posted a dialogue exploring this through the example of the term “lazy.” (Ozy’s response is also worth reading.) I think that Scott’s analysis itself unfortunately focuses attention on the question of whether assigning simple labels to adverse traits is good or bad (or alternately, true or false) instead of on building shared models about the causal structure underlying the perceived problem.

When I call someone lazy, I am doing two things. The first is communicating factual information about that person, which can help others avoid the costs of trusting the lazy person with important tasks. This is shared model-building, and it’s going to be more salient if you’re focused on allocating resources to mitigate harm and produce things of value. In other words, if you’re engaged in a community of shared production.

The second is creating a shared willingness to direct blame at that person. Once there’s common knowledge that someone’s considered blameworthy, they become the default target for exclusion if the group experiences a threat. This can be as simple as killing them and taking their stuff, so there’s more per survivor to go around, but this can also take the form of deflecting the hostility of outsiders to the supposed one bad apple. This dynamic is called scapegoating, and it’s going to be more salient when zero-sum dynamics are more salient.

Even though I may intend to do only one of these, it’s actually quite hard to act along one of these dimensions without side effects along the other. For instance, The Protocols of the Elders of Zion was famously intended to cast blame on Jews, but Japanese readers with no cultural history of anti-Semitism concluded that if Jews were so powerful, they should make sure to stay on their good side. Consequently, the Japanese government made diplomatic overtures to Jewish leaders in the 1930s, despite Japan’s alignment with the Axis powers.

For more on descriptive vs enactive norms, see Actors and Scribes and the prior posts it links to.

If clearly labeling undesirable traits marks the target for scapegoating, then people who don’t want to do that may find themselves drawn to euphemism or other alternative phrasings.

When a single standard euphemism is proposed, this leads to the infamous “euphemism treadmill,” in which the new term itself becomes a term of abuse, marking the target for blame or attack. For instance, “mentally retarded” was originally a highfalutin euphemism for “imbecilic,” but was itself ultimately replaced with “intellectually disabled,” and so on.

Perhaps a more stable solution is the one Scott and Ozy seem to suggest: preferring more precise and detailed language. Such descriptions help avoid scapegoating because precise usage is more likely to differ from case to case, making it harder to identify a consistently blamed party.

However, these solutions serve distinct interests, and the interests can come apart. You could want language to be more precise and not much care about the scapegoating side effect. Or you could mainly want to avoid contributing to a process in which a group is targeted for a broad array of social attacks, and not much care about what this does to the precision of your language. (Euphemisms sometimes decrease expressive power instead of increasing it.)

Crucially, neither of these interests requires that you find the thing you’re describing unobjectionable, only that you don’t want to scapegoat.