Less Threat-Dependent Bargaining Solutions?? (3/2)

In the previous two posts, we went over various notions of bargaining. The Nash bargaining solution. The CoCo value. Shapley values. And eventually, we managed to show they were all special cases of each other. The rest of this post will assume you’ve read the previous two posts and have a good sense for what the CoCo value is doing.

Continuing from the last post, the games which determine the payoff everyone gets (not to be confused with the games that directly entail what actions are taken) are all of the form “everyone splits into two coalitions $S$ and not-$S$, and both coalitions are trying to maximize ‘utility of my coalition minus utility of the opposite coalition’”.

Now, in toy cases involving two hot-dog sellers squabbling over whether to hawk their wares at a beach or an airport, this produces acceptable results. But, in richer environments, it’s VERY important to note that adopting “let’s go for a CoCo equilibrium” as your rule for how to split gains amongst everyone incentivizes everyone to invent increasingly nasty ways to hurt their foes. Not to actually be used, mind you. To affect how negotiations go.

After all, if you invent the Cruciatus curse, then in all those hypothetical games where your coalition faces off against the foe coalition, and everyone’s utility is “the utility of my coalition minus the utility of the foe coalition”… well, you sure can reduce the utility of the foe coalition by a whole lot! And so, your team gets a much higher score in all those games.

Of course, these minimax games aren’t actually played. They just determine everyone’s payoffs. And so, you’d end up picking up a whole lot of extra money from everyone else, because you have the Cruciatus curse and everyone’s scared of it, so they give you money. In the special case of a two-player game, getting access to an option which you don’t care about and which the foe would pay $1000 to avoid should let you demand a $500 payment from the foe as a “please don’t hurt me” payment.
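To make that concrete, here’s a minimal sketch in Python, with made-up payoff numbers, using the standard two-player decomposition of the CoCo value into half the best joint payoff plus half the value of the zero-sum difference game:

```python
import numpy as np

def coco_values(u1, u2):
    """CoCo values of a two-player game given payoff matrices (row, column).

    Uses the decomposition: coco_1 = max(u1+u2)/2 + val(u1-u2)/2 and
    coco_2 = max(u1+u2)/2 - val(u1-u2)/2. Assumes the zero-sum difference
    game has a pure saddle point (true for the toy games below); in
    general you'd solve it over mixed strategies with a linear program.
    """
    team = (u1 + u2).max() / 2           # half the best joint payoff
    diff = u1 - u2                       # the zero-sum "difference game"
    val = diff.min(axis=1).max()         # row player's pure maximin
    assert val == diff.max(axis=0).min(), "no pure saddle point"
    return team + val / 2, team - val / 2

# Without the curse: one action each ("trade"), netting each player 10.
print(coco_values(np.array([[10.0]]), np.array([[10.0]])))
# -> (10.0, 10.0)

# With the curse: a second row action that leaves your payoff untouched
# but costs the foe 1000. It's never played, yet it moves the split.
print(coco_values(np.array([[10.0], [10.0]]), np.array([[10.0], [-990.0]])))
# -> (510.0, -490.0): a $500 "please don't hurt me" payment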

But Which Desideratum Fails?

Step 1 in figuring out how to get an outcome which is Not That is to look at the list of nice properties which the CoCo solution uniquely fulfills, and figure out which one to break.

As it turns out from picking through the paper, the assumption that must be broken is the axiom of “gaining access to more actions shouldn’t lead to you getting less value”. As a quick intuitive way of seeing why it should fail, consider the game of Chicken. If you physically can’t swerve (because your car started off not having a steering wheel), and are locked into always going straight through no fault of your own, then any sensible opponent will swerve and you will get good payoffs. Adding in the new option of being able to swerve means the opponent will start demanding that you swerve sometimes, lowering your score.
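To put hypothetical numbers on it: say a crash is $(-10,-10)$, going straight while the other swerves is $(5,-1)$, and both swerving is $(0,0)$. Locked into going straight, your foe’s best response is to swerve, and you collect 5. Restore the swerve option and the game becomes symmetric, so any symmetric solution hands each player at most 2 (half of the best achievable total of 4). More options, less money.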

As a rough intuition for how the “the CoCo value is the only way to fulfill these axioms” proof works, the reasoning goes as follows:

“Hm, let’s say I only had access to my minimax action, the $a$ that maximized $\min_b\big(u_{\text{me}}(a,b) - u_{\text{foe}}(a,b)\big)$. It makes me relatively much better off than my foe. Any sensible foe going up against this threat would simply press a button that’d just give us both the CoCo value, instead of playing any other action in response. By the axiom of “adding more options can’t hurt me”, I can add in all my other actions and not get less money. And then by the axiom of “adding redundant actions doesn’t affect anything”, I can take away the foe’s CoCo value button and nothing changes. And so, in this game against the foe, I have to get a value equal to or greater than my CoCo value. But the foe can run through this same reasoning from their side, and so in this game, we must both get the CoCo value.”

And this makes it very clear exactly what’s going wrong in the reasoning. Just because I’d be willing to give a foe a lot of money if they were unavoidably locked into firing off the Cruciatus curse through the random whims of nature, and no action of their own or anyone else’s could have prevented it, that does not mean that I’d be willing to give them a bunch of money if they had any other options available.

But just because I can diagnose that as the problem doesn’t mean I’ve got an entirely satisfactory replacement lined up as of the writing of this post. I just have a few options which seem to me to have promise, so don’t go “oh, Diffractor solved this”. Treat it as an open question deserving of your thought where I have a potential way forward, but where there might be a solution from a different direction.

In the spirit of proposing alternate options which have some unsatisfactory parts but at least are less dependent on threats, here’s my attempt at fixing it.

Kinder Fallback Points

An alternate way of viewing what’s going on with the CoCo solution is as follows: Both players are tasked with coming up with a fallback strategy if they can’t come to an agreement. The pair of fallback strategies played against each other defines a disagreement point. And then, any bargaining solution fulfilling symmetry (which is one of the essential properties of a bargaining solution; Nash fulfills it, Kalai-Smorodinsky fulfills it, all the other obscure ones fulfill it) will say “hey, instead of playing this disagreement point, what if you maximized the total pile of money instead and split the surplus 50/50?”

The place where the CoCo value specifically appears from this protocol is that, if you know that this is what happens, your best fallback strategy to win in the overall bargaining game is the action $a$ that maximizes $\min_b\big(u_{\text{you}}(a,b) - u_{\text{foe}}(a,b)\big)$. The foe playing any fallback strategy other than the counterpart of this means you’ll get more money than the CoCo value says, and the foe will get less, after bargaining concludes.
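Written out for the two-player case: with $T$ the maximized total and $d$ the (mixed-strategy) value of the zero-sum difference game, splitting the surplus evenly around that disagreement point gives

$$\text{CoCo}_1 = \frac{T + d}{2}, \qquad \text{CoCo}_2 = \frac{T - d}{2}, \qquad \text{where } T = \max_{a,b}\big(u_1 + u_2\big),\ d = \mathrm{val}(u_1 - u_2)$$

which makes it plain why inflating $d$ with threats you never intend to execute pays off.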

But there’s something quite suspicious about that disagreement point. It only is what it is because the foe knows you’re guaranteed to go for a 50/50 split of surplus afterwards. Your foe isn’t picking its disagreement strategy because it actually likes it, or because it’s the best response to your disagreement strategy. Your foe is behaving in a way that it ordinarily wouldn’t (namely: minimizing your utility instead of purely maximizing its own), and it’s only doing that because it knows you’ll give in to the threat.

A disagreement point to take seriously is one where the foe is like “this is legitimately my best response to you if negotiations break down”, instead of one that the foe only picked because you really hate it and it knows you’ll cut a bargain. For the former, if negotiations blow up, you’ll find that, oh hey, your foe is actually serious. For the latter, if negotiations actually start failing sometimes, the foe will suddenly start wishing that they had committed to a different fallback strategy.

And so, this points towards an alternate notion instead of the CoCo value. Namely, one where the disagreement point is a Nash equilibrium (i.e., both players are purely trying to maximize the utility of “negotiations break down” given what the other player has as their disagreement strategy, instead of looking ahead and trying to win at bargaining by hurting the other), and surplus is maximized and split 50/50 after that.

Although it’s a lot better to use correlated equilibria instead of Nash equilibria. Since we’re already assuming that the agents can implement any joint strategy, it’s not much of a stretch to assume that they can fall back to joint strategies if negotiations break down. Correlated equilibria generalize the “nobody wants to change their move” property of Nash equilibria to cases where the agents have a joint source of randomness. Alternately, Nash equilibria can be thought of as the special case of correlated equilibria where the agents can’t correlate their actions.

There are two other practical reasons why you’d want to use correlated equilibria over Nash equilibria. The first is that the set of correlated equilibria is convex (a mixture of two correlated equilibria is a correlated equilibrium, which doesn’t hold for Nash), so the set of “possible disagreement payoffs” will be a convex subset of $\mathbb{R}^2$, letting you apply a lot more mathematical tools to it. Also, finding Nash equilibria is computationally hard, while finding correlated equilibria can be done in polynomial time (it’s just a linear program), and it’s easy to come up with algorithms which converge to playing correlated equilibria in reasonable timeframes. (Though I don’t know if there’s a good algorithm to converge to correlated equilibria which are Pareto-optimal among the set of correlated equilibria.)
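To illustrate the “it’s just a linear program” point, here’s a minimal sketch that finds a welfare-maximizing correlated equilibrium with scipy’s linprog, run on the hypothetical Chicken payoffs from earlier:

```python
import numpy as np
from scipy.optimize import linprog

def max_welfare_correlated_eq(u1, u2):
    """A welfare-maximizing correlated equilibrium of a bimatrix game,
    found as a linear program over joint distributions p(i, j)."""
    m, n = u1.shape
    A_ub, b_ub = [], []
    # Row player: obeying recommendation i must beat deviating to i2.
    for i in range(m):
        for i2 in range(m):
            if i2 != i:
                row = np.zeros((m, n))
                row[i, :] = u1[i2, :] - u1[i, :]   # gain from deviating
                A_ub.append(row.ravel())
                b_ub.append(0.0)
    # Column player: the symmetric constraints.
    for j in range(n):
        for j2 in range(n):
            if j2 != j:
                col = np.zeros((m, n))
                col[:, j] = u2[:, j2] - u2[:, j]
                A_ub.append(col.ravel())
                b_ub.append(0.0)
    res = linprog(
        c=-(u1 + u2).ravel(),                  # maximize total welfare
        A_ub=np.array(A_ub), b_ub=np.array(b_ub),
        A_eq=np.ones((1, m * n)), b_eq=[1.0],  # probabilities sum to 1
    )                                          # default bounds give p >= 0
    return res.x.reshape(m, n)

# Chicken: actions are [Straight, Swerve] for both players.
u1 = np.array([[-10.0, 5.0], [-1.0, 0.0]])
u2 = np.array([[-10.0, -1.0], [5.0, 0.0]])
print(np.round(max_welfare_correlated_eq(u1, u2), 3))
# All mass lands on (Straight, Swerve) and (Swerve, Straight).
```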

Ok, so our alternate non-threat-based notion instead of the CoCo value is “both players settle on a correlated equilibrium as a backup, and then evenly split the surplus from there”. If you blow up the negotiation and resort to your fallback strategy, you’ll find that, oh hey, the foe’s best move is to play their part of the fallback strategy, they didn’t just pick their backup strategy to fuck you over.
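Here’s how that plays out on the curse example (payoff numbers still hypothetical; I’ve given the curse a token cost of 1 so that it’s strictly dominated and the equilibrium fallback is unique):

```python
import numpy as np

# Row chooses Work or Curse; the column player can only Work.
u1 = np.array([[10.0], [9.0]])      # cursing costs the curser a little
u2 = np.array([[10.0], [-990.0]])   # ...and costs the foe a lot

total = (u1 + u2).max()             # best joint payoff: 20

# CoCo fallback: the difference game. Row's best "threat" is Curse.
d = (u1 - u2).min(axis=1).max()              # 999
print((total + d) / 2, (total - d) / 2)      # 509.5 -489.5

# Equilibrium fallback: Curse is strictly dominated for the row player,
# so the only equilibrium is (Work, Work), worth (10, 10) to the players.
d1, d2 = 10.0, 10.0
surplus = total - d1 - d2                    # 0: nothing left to split
print(d1 + surplus / 2, d2 + surplus / 2)    # 10.0 10.0: the threat buys nothing
```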

Two Big Problems

One is that the set of possible fallback points will, in general, not be a single point (rather, it will be a convex set). So, uh… how do you pick one? Is there a principled way to go “y’know, this is a somewhat unfair correlated equilibrium”?

The second problem is generalizing to the n-player case. In general, the surplus gets split according to the Shapley value, to give a higher share to players who can significantly boost the value of most coalitions they’re in. Player $i$ gets

$$\sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\Big(v(S \cup \{i\}) - v(S)\Big)$$

as their payoff.
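For concreteness, a minimal sketch of that computation (the characteristic function v below is made up): averaging player $i$’s marginal contribution over all orders in which the players could join, which is equivalent to the weighted sum above:

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley payoffs: each player's marginal contribution to the
    coalition of earlier arrivals, averaged over all join orders."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: t / factorial(len(players)) for p, t in totals.items()}

# A made-up 3-player characteristic function.
v = {frozenset(): 0, frozenset('a'): 1, frozenset('b'): 1, frozenset('c'): 0,
     frozenset('ab'): 4, frozenset('ac'): 2, frozenset('bc'): 2,
     frozenset('abc'): 6}
print(shapley('abc', v))   # {'a': 2.5, 'b': 2.5, 'c': 1.0}
```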

But this has an issue. How is $v(S)$ defined? It should be something like “the value earned by coalition $S$”. For the CoCo value, $v(S)$ is defined as “the value that team $S$ gets in their zero-sum game against team Everyone Else”. But for using correlated equilibria as the fallback points, it gets harder. Maybe $v(S)$ is “the value that team $S$ gets when they play a correlated equilibrium against not-$S$, where $S$ and not-$S$ are both modeled as individual large players”. Or maybe it’s “the value that team $S$ gets when they play a correlated equilibrium against the uncoordinated horde of Everyone Else, where $S$ is modeled as one big player, and everyone else is modeled as individuals”. And this completely overlooks the issue of, again, “which correlated equilibrium” (there are a lot).

I mean, it’d be nice to just say “give everyone their Shapley payoffs”, but the value produced by a team working together is pretty hard to define due to the dependence on what everyone else is doing. Is everyone else fighting each other? Coordinating and making teams of their own?

So, it’s a fairly dissatisfying solution, with waaaay too many degrees of freedom, and a whole lot of path-dependence, but I’m pretty confident it’s attacking the core issue.

Suggestions for alternate threat-resistant strategies in the comments are extremely welcome, as is pointing out further reasons why this or other strategies suck or don’t fulfill their stated aim, or coming up with clever ways to mitigate the issues with the thing I proposed.