Conflict vs Mistake

Last edit: 9 Dec 2021 18:26 UTC by abramdemski

Conflict vs Mistake is a framework for analyzing disagreements about policy.

Mistake theorists think problems in society are caused by people being bad at achieving common goals. Conflict theorists think problems in society are caused by adversaries with incompatible goals.

Scott Alexander attributed the conflict vs mistake framework to a post on Reddit by user no_bear_so_low.

A conflict theorist thinks problems are primarily due to the conflicting interests of different players. If someone is suffering, someone else must be making money off of it. Karl Marx was a conflict theorist; he blamed the ills of society on class conflict.

A mistake theorist thinks problems are primarily due to mistakes. If only we knew how to run society better, there would be fewer problems. Jeremy Bentham was more of a mistake theorist: he thought producing a formula by which we could calculate the quality of social interventions would help improve society.

Humans are not automatically strategic is a mistake theory of human (ir)rationality. Things are hard. If people are doing something dumb, it’s probably because they don’t know better.

The Elephant in the Brain is more like a conflict theory of human (ir)rationality. Apparent irrationality is attributed mainly to humans not actually wanting what they think they want.

Hanlon’s Razor says: Never attribute to malice what is adequately explained by stupidity. This is a clear bias toward mistake theory.

On the other hand, economics, evolutionary psychology, and some other fields are based on rational choice theory, i.e., the assumption that behavior can be explained by rational decision-making. (Economic rationality assumes that individuals choose rationally to maximize economic value, given the incentives of their current situation. Evolutionary psychology instead assumes that human and animal behaviors are optimal solutions to the problems faced in evolutionary history. Bruce Bueno de Mesquita assumes that politicians act rationally so as to maximize their tenure in positions of power. The ACT-R theory of cognition assumes that individual cognitive mechanisms are designed to optimally perform their individual cognitive tasks, such as retrieving the memories that are useful in expectation, even if the brain as a whole is not perfectly rational.) This assumption of rationality lends itself more naturally to conflict theories.

Game-Theoretic Connections

In game theory, assuming that people can make mistakes (a so-called trembling hand) can complicate cooperative strategies.

For example, in the iterated prisoner’s dilemma, tit for tat is a cooperative equilibrium (that is to say, it is Pareto-optimal, and, for sufficiently patient players, it is a Nash equilibrium). The tit-for-tat strategy is: cooperate on the first round; then, copy the other player’s move from the previous round. This enforces cooperation, because if I defect, I expect my partner to defect on the next round (which is bad for me). This is effectively eye-for-an-eye morality.
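The strategy is simple enough to state in a few lines of code. Here is a minimal sketch (the function names and the "C"/"D" move encoding are illustrative, not from any particular library):

```python
def tit_for_tat(their_history):
    """Cooperate on the first round; afterwards copy the partner's last move."""
    return "C" if not their_history else their_history[-1]

def play(strategy_a, strategy_b, rounds):
    """Run an iterated game, feeding each strategy the opponent's move history."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

# Two tit-for-tat players cooperate on every round.
hist_a, hist_b = play(tit_for_tat, tit_for_tat, rounds=10)
print(hist_a)  # ['C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C']

# Against an unconditional defector, tit-for-tat defects from round two onward.
always_defect = lambda their_history: "D"
hist_a, hist_b = play(tit_for_tat, always_defect, rounds=4)
print(hist_a)  # ['C', 'D', 'D', 'D']
```

Note that the enforcement is entirely reactive: tit for tat never defects first, but it also never lets a defection go unanswered.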

However, if people make mistakes (the trembling-hand assumption), then tit-for-tat cooperates only until the first mistake: a single accidental defection sets off a chain of retaliation. If accidental defections by either player are equally probable, then in the long run the players average only 50% cooperation. We can see this as an interminable family feud in which both sides see the other as having done more wrong. “An eye for an eye makes everyone blind.”

We need to recognize that people make mistakes sometimes—we can’t punish everything eye-for-an-eye.

Therefore, some form of forgiving tit-for-tat does better: for example, copy cooperation 100% of the time, but copy defection only 90% of the time. This can still enforce rational cooperation (depending on the exact payoffs and time-discounting of the players), but without everlasting feuds. See also Contrite Strategies and the Need for Standards.

In this framing, a conflict theorist thinks people are actually defecting on purpose. They know what they’re doing, and therefore, would respond to incentives. Punishing them is prosocial and helps to encourage more cooperation overall.

A mistake theorist thinks people are defecting accidentally, and therefore, would not respond to incentives. Punishing them is pointless and counterproductive; it could even result in a continuing feud, making things much worse for everyone.

Conflict vs. mistake in non-zero-sum games (Nisan, 5 Apr 2020)

In Defence of Conflict Theory (Richard_Ngo, 17 Feb 2018)

“Infohazard” is a predominantly conflict-theoretic concept (jessicata, 2 Dec 2021)

Often, enemies really are innately evil. (Andrew Vlahos, 7 Jun 2021)

In Favor of Niceness, Community, and Civilization (Scott Alexander, 23 Feb 2014)

Conflict Theory of Bounded Distrust (Zack_M_Davis, 12 Feb 2023)

Mistake Versus Conflict Theory of Against Billionaire Philanthropy (Zvi, 1 Aug 2019)

Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons (Hazard, 12 Mar 2023)

Maybe Lying Can’t Exist?! (Zack_M_Davis, 23 Aug 2020)

Can crimes be discussed literally? (Benquo, 22 Mar 2020)

Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle (Zack_M_Davis, 14 Jul 2020)

How hard is it for altruists to discuss going against bad equilibria? (abramdemski, 22 Jun 2019)

The Context is Conflict (Jacob Falkovich, 18 May 2018)

Maybe Lying Doesn’t Exist (Zack_M_Davis, 14 Oct 2019)

Policy Debates Should Not Appear One-Sided (Eliezer Yudkowsky, 3 Mar 2007)

Defense against discourse (Benquo, 17 Oct 2017)

My paper was signalling the whole time—Robin Hanson wins again (Tim Liptrot, 4 Aug 2020)

Our House, My Rules (David_J_Balan, 2 Nov 2009)

Model, Care, Execution (26 Jun 2023)

Ten Modes of Culture War Discourse (jchan, 31 Jan 2024)

Democracy Is in Danger, but Not for the Reasons You Think (ExCeph, 6 Nov 2022)