# Conflict vs. mistake in non-zero-sum games

Summary: Whether you behave like a mistake theorist or a conflict theorist may depend more on your negotiating position in a non-zero-sum game than on your worldview.

Disclaimer: I don’t really know game theory.

Plot the payoffs in a non-zero-sum two-player game, and you’ll get a convex[1] set with the Pareto frontier on the top and right:

You can describe this set with two parameters: The surplus is how close the outcome is to the Pareto frontier, and the allocation tells you how much the outcome favors player 1 versus player 2. In this illustration, the level sets for surplus and allocation are depicted by concentric curves and radial lines, respectively.
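The surplus/allocation parameterization can be made concrete with a toy sketch. Here I use a deliberately simple choice of coordinates (total payoff for surplus, player 1's share of it for allocation); the concentric/radial level sets described above are a more general version of the same idea.

```python
# Toy decomposition of a joint payoff into "surplus" and "allocation".
# These particular measures (total payoff, player 1's share) are just one
# simple choice, not the only way to parameterize the payoff set.

def decompose(p1: float, p2: float) -> tuple[float, float]:
    """Return (surplus, allocation) for a joint payoff (p1, p2).

    surplus    = total payoff to both players
    allocation = fraction of the surplus going to player 1
    """
    surplus = p1 + p2
    allocation = p1 / surplus if surplus != 0 else 0.5
    return surplus, allocation

# Two outcomes with the same surplus but different allocations:
print(decompose(3, 1))  # (4, 0.75) -- favors player 1
print(decompose(2, 2))  # (4, 0.5)  -- even split
```

In these coordinates, the two negotiation strategies below correspond to fixing one coordinate before arguing about the other.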

It’s tempting to decompose the game into two phases: A cooperative phase, where the players coordinate to maximize surplus; and a competitive phase, where the players negotiate how the surplus is allocated.

Of course, in the usual formulation, both phases occur simultaneously. But this suggests a couple of negotiation strategies where you try to make one phase happen before the other:

1. “Let’s agree to maximize surplus. Once we agree to that, we can talk about allocation.”

2. “Let’s agree on an allocation. Once we do that, we can talk about maximizing surplus.”

I’m going to provocatively call the first strategy mistake theory, and the second conflict theory.

Indeed, the mistake theory strategy pushes the obviously good plan of making everyone better off. It can frame all opposition as making the mistake of leaving surplus on the table.

The conflict theory strategy threatens to destroy surplus in order to get a more favorable allocation. Its narrative emphasizes the fact that the players can’t maximize their rewards simultaneously.

Now I don’t have a good model of negotiation. But intuitively, it seems that mistake theory is a good strategy if you think you’ll be in a better negotiating position once you move to the Pareto frontier. And conflict theory is a good strategy if you think you’ll be in a worse negotiating position at the Pareto frontier.

If you’re naturally a mistake theorist, this might make conflict theory seem more appealing. Imagine negotiating with a paperclip maximizer over the fate of billions of lives. Mutual cooperation is Pareto efficient, but unappealing. It’s more sensible to threaten defection in order to save a few more human lives, if you can get away with it.

It also makes mistake theory seem unsavory: Apparently mistake theory is about postponing the allocation negotiation until you’re in a comfortable negotiating position. (Or, somewhat better: It’s about tricking the other players into cooperating before they can extract concessions from you.)

This is kind of unfair to mistake theory, which is supposed to be about educating decision-makers on efficient policies and building institutions to enable cooperation. None of that is present in this model.

But I think it describes something important about mistake theory which is usually rounded off to something like “[mistake theorists have] become part of a class that’s more interested in protecting its own privileges than in helping the poor or working for the good of all”.

The reason I’m thinking about this is that I want a theory of non-zero-sum games involving counterfactual reasoning and superrationality. It’s not clear to me what superrational agents “should” do in general non-zero-sum games.

• Nitpick: I am pretty sure non-zero-sum does not imply a convex Pareto front.

Instead of the lens of negotiation position, one could argue that mistake theorists believe that the Pareto Boundary is convex (which implies that usually maximizing surplus is more important than deciding allocation), while conflict theorists see it as concave (which implies that allocation is the more important factor).

• Oh I see, the Pareto frontier doesn’t have to be convex because there isn’t a shared random signal that the players can use to coordinate. Thanks!

• Why would that make it convex? To me those appear unrelated.

• If you have two strategy pairs, you can form a convex combination of them like this: Flip a weighted coin; play the first pair on heads and the second on tails. This scheme requires both players to see the same coin flip.
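A minimal sketch of this coordination scheme (the payoff numbers are hypothetical): because both players condition on the same flip, the expected payoff is a convex combination of the two pure outcomes, a point that may be unreachable without the shared signal.

```python
import random

# Payoffs (to player 1, player 2) for two pure strategy pairs.
payoff_a = (4.0, 1.0)
payoff_b = (1.0, 4.0)
p = 0.5  # probability the shared coin comes up heads

# Exact expected payoff: a convex combination of the two payoff points.
expected = tuple(p * a + (1 - p) * b for a, b in zip(payoff_a, payoff_b))
print(expected)  # (2.5, 2.5)

# Monte Carlo check: both players see the SAME flip each round, so the
# realized outcome is always one of the two coordinated pairs.
rng = random.Random(0)
n = 100_000
totals = [0.0, 0.0]
for _ in range(n):
    outcome = payoff_a if rng.random() < p else payoff_b
    totals[0] += outcome[0]
    totals[1] += outcome[1]
print(totals[0] / n, totals[1] / n)  # each close to 2.5
```

If the players instead flipped independent coins, one quarter of the rounds would land on each uncoordinated pair, which generally yields a different (and possibly infeasible-to-improve) expected point.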

• This. I think mistake theorists would frequently question whether the game is actually zero-sum. This divergence in opinions about the nature of the game feels important to me.

• Dear Nisan,

I just found your post via a search engine. I wanted to quickly follow up on your last paragraph, as I have designed and recently published an equilibrium concept that extends superrationality to non-symmetric games (also non-zero-sum). Counterfactuals are at the core of the reasoning (making it non-Nashian in essence), and outcomes are always unique and Pareto-optimal.

I thought this might be of interest to you. If so, here are the links:

https://www.sciencedirect.com/science/article/abs/pii/S0022249620300183

(public version of the accepted manuscript on https://arxiv.org/abs/1712.05723)

My colleagues and I also previously published a similar equilibrium concept for games in extensive form (trees). Likewise, it is always unique and Pareto-optimal, and it also always exists. In the extensive form, there is the additional issue of grandfather paradoxes and preemption.

https://arxiv.org/abs/1409.6172

And more recently, I found a way to generalize it to any positions in Minkowski spacetime (subsuming all of the above):

https://arxiv.org/abs/1905.04196

Kind regards and have a nice day,

Ghislain

• A political example: In March 2020, San Francisco voters approved Proposition E, which limited the amount of new office space that can be built proportionally to the amount of new affordable housing.

This was appealing to voters on Team Affordable Housing who wanted to incentivize Team Office Space to help them build affordable housing.

(“Team Affordable Housing” and “Team Office Space” aren’t accurate descriptions of the relevant political factions, but they’re close enough for this example.)

Team Office Space was able to use the simple mistake-theory argument that fewer limits on building stuff would allow us to have more stuff, which is good.

Team Affordable Housing knew it could build a little affordable housing on its own, but believed it could get more by locking in a favorable allocation early on with the Proposition.

• And Team Economic Common Sense shook their heads as both office space and affordable housing (which had already been receiving funding from new construction) were about to take a nosedive in the wake of Proposition E’s implementation.

• Expand? I don’t see how both could be disadvantaged by allocation-before-optimization.

• Office construction stops in San Francisco. The development fees from office construction are a major funding source for affordable housing. The affordable housing stops being built.

(Same thing happens for all the restrictions on market-rate housing, when it’s also paying fees to fund affordable housing. The end result is that very little gets built, which helps nobody but the increasingly rich homeowners in San Francisco. Which is the exact intended outcome of that group, who are the most reliable voters.)

Heh. I guess I’m a conflict theorist when it comes to homeowner NIMBYs, but a mistake theorist when it comes to lefty NIMBYs (who are just completely mistaken in their belief that preventing development will help the non-rich afford to live in SF).

• Preventing development limits the increase in desirability, which reduces market clearing price.

It’s more negative for the rich than for the poor, and as such reduces inequality.

• [Edited to remove sarcasm.]

It’s more negative for the rich than for the poor, and as such reduces inequality.

Wouldn’t that predict that San Francisco, which has built almost nothing since the 1970s in most neighborhoods, should have low inequality?

• I was speaking of inequality generally, not specifically housing inequality.

The entire point was a cheap shot at people who think that inequality is inherently bad, like suggesting destroying all the value to eliminate all the inequality.

• Ah, I’m just bad at recognizing sarcasm. In fact, I’m going to reword my comment above to remove the sarcasm.

• Nice deduction about the relationship between this and conflict vs mistake theory! Similar and complementary to this post is the one I wrote on Moloch and the Pareto optimal frontier.

• +1 I think this area of investigation is underexplored and potentially very fruitful.

• I like this theory. It seems to roughly map to how the distinction works in practice, too. However: Is it true that mistake theorists feel like they’ll be in a better negotiating position later, and conflict theorists don’t?

Take, for example, a rich venture capitalist and a poor cashier. If we all cooperate to boost the economy 10x, such that the existing distribution is maintained but everyone is 10x richer in real terms… yeah, I guess that would put the cashier in a worse negotiating position relative to the venture capitalist, because they’d have more stuff and hence less to complain about, and their complaining would be seen more as envy and less as righteous protest.

What about two people, one from an oppressor group and one from an oppressed group, in the social justice sense? E.g. man and woman? If everyone gets 10x richer, then arguably that would put the man in a worse negotiating position relative to the woman, because the standard rationales for e.g. gender roles, discrimination, etc. would seem less reasonable: So what if men’s sports make more money and thus pouring money into male salaries is a good investment whereas pouring it into female salaries is a money sink? We are all super rich anyway, you can afford to miss out on some profits. (Contrast this with e.g. a farmer in 1932 arguing that his female workers are less strong than the men, and thus do less work, and thus he’s gonna pay them less, so he can keep his farm solvent. When starvation or other economic hardships are close at hand, this argument is more appealing.)

More abstractly, it seems to me that the richer we all are, the more “positional” goods matter, intuitively. When we are all starving, things like discrimination and hate speech seem less pressing, compared to when we all have plenty.

Interesting. Those are the first two examples I thought of, and the first one seems to support your theory and the second one seems to contradict it. Not sure what to make of this. My intuitions might be totally wrong of course.

• When resources are scarce, strongly controlling them seems justified. This includes men taking control of resources and acting unequally, as well as the poor fighting for a bigger slice of the pie.

When there’s already plenty to go around then power grabs (or unequal opportunities between men and women because men need it) are just for their own sake and less justified.

So, in general, power that already exists (wealth, social class, political power) will be harder to change through negotiation, while anything that needs continual stimulation (for example, rich kids getting educations in finance or the hard sciences) will disappear as scarcity disappears.

• Curated. This is a very simple and useful conceptual point, and such things deserve clear and well-written explanations. Thanks for writing this one.

• The point that the ordering of optimisation and allocation phases is important in negotiation is a good one. But as you say, the phases are more often interleaved: the analogy isn’t exact, but I’m reminded of pigs playing game theory

I disagree that this maps to mistake/conflict theory very well. For example, I think that a mistake theorist is often claiming that the allocation effects of some policy are not what you think (e.g. rent controls or minimum wage).

• For example, I think that a mistake theorist is often claiming that the allocation effects of some policy are not what you think (e.g. rent controls or minimum wage).

A big part of optimizing systems is analyzing policies to determine their outcomes. That might be why mistake theorists frequently claim to have discovered that some policy X has surprising effects—even policies related to allocation, like the ones you cited.

It’s a stretch, but not a large one, and it explains how “mistake/conflict theory = optimizing first/last” predicts mistake theorists yapping about allocation policies.

• So what happens to mistake theorists once they make it to the Pareto frontier?

• We look for ways that the frontier is secretly just a saddle point and we can actually push the frontier out farther than we naively modeled when we weren’t there looking at it up close. This has worked incredibly well since the start of the industrial revolution.

• I feel like that strategy is unsustainable in the long term. Eventually the search will get more and more expensive as the lower-hanging fruit are picked.

• They switch to negotiating for allocation. But yeah, it’s weird because there’s no basis for negotiation once both parties have committed to playing on the Pareto frontier.

I feel like in practice, negotiation consists of provisional commitments, with the understanding that both parties will retreat to their BATNA if negotiations break down.

Maybe one can model negotiation as a continuous process that approaches the Pareto frontier, with the allocation changing along the way.
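Here is a toy version of that continuous process (my own illustrative model, not a standard bargaining solution): starting from the BATNA point, each round the players provisionally lock in a fraction of the remaining surplus, split according to a `tilt` parameter that stands in for the current negotiating position.

```python
def negotiate(batna, frontier_sum, steps=10, tilt=0.5, rate=0.5):
    """Toy bargaining path from `batna` toward the line p1 + p2 = frontier_sum.

    Each round the players provisionally claim a `rate` fraction of the
    remaining surplus, with a `tilt` share of the gain going to player 1.
    Returns the sequence of joint payoffs along the way.
    """
    p1, p2 = batna
    path = [(p1, p2)]
    for _ in range(steps):
        gain = rate * (frontier_sum - (p1 + p2))  # surplus claimed this round
        p1 += tilt * gain
        p2 += (1 - tilt) * gain
        path.append((p1, p2))
    return path

# A negotiation tilted toward player 1: both payoffs rise every round
# (so neither player falls below their BATNA), and the endpoint is close
# to the frontier with an uneven allocation.
path = negotiate(batna=(1.0, 1.0), frontier_sum=6.0, tilt=0.7)
print(path[0], "->", path[-1])
```

In this sketch the allocation is fixed by `tilt`; a more faithful model of the post's idea would let `tilt` itself change along the path as each side's position strengthens or weakens.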

• I think you would need to define your superrational agent more precisely to know what it should do. Is it a selfish utility-maximizer? Can its definition of utility change under any circumstances? Does it care about absolute or relative gains, or does it have some rule for trading off absolute against relative gains? Do the agents in the negotiation have perfect information about the external situation? Do they know each others’ decision logic?

• Is it a selfish utility-maximizer? Can its definition of utility change under any circumstances? Does it care about absolute or relative gains, or does it have some rule for trading off absolute against relative gains?

The agent just wants to maximize their expected payoff in the game. They don’t care about the other agents’ payoffs.

Do the agents in the negotiation have perfect information about the external situation?

The agents know the action spaces and payoff matrix. There may be sources of randomness they can use to implement mixed strategies, and they can’t predict these.

Do they know each others’ decision logic?

This is the part I don’t know how to define. They should have some accurate counterfactual beliefs about what the other agent will do, but they shouldn’t be logically omniscient.

• they shouldn’t be logically omniscient.

This is achieved (in a model) by making them the same: then they can ask “what would I do if I was in X’s shoes?” instead of “what would X do in situation Y?”.

• I’m going to provocatively call the first strategy mistake theory, and the second conflict theory.

I think that’s very confusing. The relevant distinctions are not in your essay at all: they’re about how much each side values the other side’s desires, whether they think there IS a significant difference in total payoff depending on cooperation, and what kinds of repeated interactions are expected.

Your thesis is very biased toward mistake theory, and makes simply wrong assumptions about most of the conflicts that this applies to.

Indeed, the mistake theory strategy pushes the obviously good plan of making things better off for everyone.

No, mistake theorists push the obviously bad plan of letting the opposition control the narrative and destroy any value that might be left. The outgroup is evil, not negotiating in good faith, and it’s an error to give them an inch. Conflict theory is the correct one for this decision.

The reason I’m thinking about this is that I want a theory of non-zero-sum games involving counterfactual reasoning and superrationality. It’s not clear to me what superrational agents “should” do in general non-zero-sum games.

Wait, shouldn’t you want a decision theory (including non-zero-sum games) that maximizes your goals? It probably will include counterfactual reasoning, but may or may not touch on superrationality. In any case, social categorization of conflict is probably the wrong starting point.

• The outgroup is evil, not negotiating in good faith, and it’s an error to give them an inch. Conflict theory is the correct one for this decision.

Which outgroup? Which decision? Are you saying this is universally true?

• [note: written a while ago, and six votes netting to zero indicate that it’s at best a non-helpful comment]

Which outgroup? Which decision? Are you saying this is universally true?

For some outgroups and decisions, this applies. It doesn’t need to be universal, only exploitable. Often mistake theory is helpful in identifying acceptable compromises and maintaining future cooperation. Occasionally, mistake theory opens you to disaster. You shouldn’t bias toward one or the other, you should evaluate which one has the most likely decent outcomes.

Also, I keep meaning to introduce “incompetence theory” (or maybe “negligence theory”): some outgroups aren’t malicious and aren’t so diametrically opposed to your goals that it’s an intentional conflict, but they’re just bad at thinking and can’t be trusted to cooperate.

• some outgroups aren’t malicious and aren’t so diametrically opposed to your goals that it’s an intentional conflict, but they’re just bad at thinking and can’t be trusted to cooperate.

In what way is this different than mistake theory?

• Mistake theory focuses on beliefs and education/discussion to get alignment (or at least understanding and compromise). Conflict theory focuses on force and social leverage. Neither are appropriate for incompetence theory.

• Huh.

I think I’ve gathered a different definition of the terms. From what I got, mistake theory could be boiled down to “all/all important/most of the world’s problems are due to some kind of inefficiency. Somewhere out there, something is broken. That includes bad beliefs, incompetence, coordination problems, etc.”

• I think there are different conceptions of the theory talking past each other (or perhaps a large group talking past me; I’ll bow out shortly). There are two very distinct classifications one might want to use this theory for.

1) How should I judge or think about relationships with those who seem to act in opposition to my goals? I’m fine with a very expansive view of mistake theory for this: there’s not much benefit to villainizing or denigrating people (unless it really is a deep conflict in values, in which case it can be correct to recognize that).

2) How should I strategize to further my goals in the face of this opposition? This is a superset of #1 - part of the strategy is often to pursue relationships and discussion/negotiation. But ALSO, there are different strategies for reaching alignment or negotiating/compromising with people who simply don’t model the universe as deeply as you do but don’t actually diverge on values, than with those who disagree because they have different priors or evidence, and thus different paths to compatible goals.

For #1, mistake vs conflict is a fine starting point, and I’d agree that I prefer to treat most things as mistake (though not all, and perhaps not as much “most” as many around here). For #2, I find value in more categories, to select among more strategies.