It’s indeed a mystery to me why anyone bothered to post and discuss “solutions” different from Rayhawk’s in the swords and armor thread. This stuff is like arithmetic: one right answer, nothing to argue about.
As a bonus, I’ll give an introduction to the notion of “correlated equilibrium” invented by Aumann, using a model game invented by Shapley. Imagine you’re playing a variant of Rock Paper Scissors where a win gives you 1 point, but a loss or a draw gives 0 points. (So the game is no longer zero-sum—this is essential.) Obviously, if you use some strategy more than 1⁄3 of the time, the other guy may adjust to that; therefore the only Nash equilibrium is the mixed one where you both play each strategy with probability 1⁄3, which gets you both an expected payoff of 1⁄3. But the problem with this result is that sometimes the game ends in a draw and no one wins any points. So it would be mutually beneficial to somehow arrange that you never play the same strategy. But doesn’t the uniqueness of the Nash equilibrium mean that any such arrangement would be unstable?
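If you want to check that 1⁄3 figure, here is a minimal Python sketch (my own illustration, not part of the original argument) that enumerates all nine outcome pairs under uniform play:

```python
from itertools import product

BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}
STRATS = list(BEATS)

def payoff(me, opponent):
    """1 point for a win; 0 for a draw or a loss (the variant above)."""
    return 1 if BEATS[me] == opponent else 0

# Under uniform mixing, each of the 9 strategy pairs has probability 1/9;
# exactly 3 of them are wins for you.
expected = sum(payoff(a, b) for a, b in product(STRATS, STRATS)) / 9
print(expected)  # 0.333...
```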
Well, here’s how you do it. Suppose you both ask a trusted third party to randomly pick one of the six non-draw outcomes of the game, and then privately tell each of you which strategy to play (without telling you what they said to the other guy). For example, they might randomly pick “Rock Scissors”, tell you to play Rock, and tell your opponent to play Scissors. In this freaky situation, even though no one’s forcing you to follow the advice, doing so is an equilibrium! This means that neither of you can gain anything by deviating from the advice—provided that the opponent doesn’t deviate. And your expected payoff is now 1⁄2, because draws cannot happen, which is better than the Nash equilibrium payoff of 1⁄3. This is called a “correlated equilibrium”. It’s one of the examples that show how even non-binding agreements, “cheap talk”, can still make people better off.
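And here is the same kind of check for the correlated equilibrium (again just an illustrative sketch of mine): under the uniform draw over the six non-draw pairs, following the advice earns 1⁄2, and no deviation from any recommendation does strictly better.

```python
from itertools import permutations

BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}
STRATS = list(BEATS)

def payoff(me, opponent):
    return 1 if BEATS[me] == opponent else 0

# The arbitrator draws uniformly from the six ordered non-draw pairs.
signals = list(permutations(STRATS, 2))

for advice in STRATS:
    # Conditional on my advice, the opponent (who obeys their own advice)
    # is equally likely to hold either of the other two strategies.
    opponents = [b for a, b in signals if a == advice]
    for action in STRATS:
        value = sum(payoff(action, b) for b in opponents) / len(opponents)
        marker = "<- follow" if action == advice else ""
        print(advice, action, value, marker)
# Following always earns 1/2 and no deviation does strictly better,
# so obeying the arbitrator is an equilibrium; draws never happen.
```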
This stuff is like arithmetic: one right answer, nothing to argue about.
Under the assumption of universal rationality. Without that assumption (which would not be fulfilled in a real fantasy sword-fighting game), the best strategy, pure or mixed, need not correspond to the Nash equilibrium, and there remains plenty to argue about.
What happens when some percentage of people are picking randomly, some people are “stylin”, and some people are performing “misinformed” calculations and/or simulations? The fact that some people actually did the latter shows that the effect must be taken into account.
The effect of irrationality should be taken into account, but unless you have a good way to do so (like a solid model of the effect), adopting another strategy would be akin to betting on red sometimes.
It’s indeed a mystery to me why anyone bothered to post and discuss “solutions” different from Rayhawk’s in the swords and armor thread. This stuff is like arithmetic: one right answer, nothing to argue about.
In some sense, I agree with you. The problem as posed had a clear answer that was calculable by a known method (if one had done the requisite reading in game theory). The thing I particularly liked about Rayhawk’s post was the link to a library of game theory software and tools for the construction and analysis of finite extensive and strategic games: gambit. That link was the kind of novel and useful pointer that is one of the many reasons I have for reading LW :-)
On the other hand, I find that the world frequently fails to present situations to me that are intelligible to the point that I can build a payoff matrix and run the numbers. So, as a simple exercise standing in for a more complex world there was potentially much more to say about the puzzle. In that vein I particularly liked Nominull’s fast and frugal answer:
My general heuristic for these sorts of games is to play the option that beats the option that beats the option that looks best to me at first blush. In this case that means I play green sword, yellow armor. It’s a reasonably fast heuristic that does reasonably well.
I expect that I would find it very difficult to mimic Rayhawk’s application of gambit in the bulk of real life circumstances. Nominull’s heuristic (which incidentally produced one of the options from the optimal mixed strategy) seems more generally applicable. I can imagine using Nominull’s heuristic in much fuzzier contexts for much lower data gathering costs and getting pretty good results thereby. Not that I’ve tested it or anything… but it’s the sort of thing I’ll be looking for an opportunity to try out in the real world someday, and see if it helps :-)
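For concreteness, here is how Nominull’s heuristic might look mechanized; the function name and the two-steps-around-the-cycle framing are my own, not anything Nominull wrote:

```python
def counter_pick(beaten_by, first_blush):
    """Nominull's rule: play the option that beats the option
    that beats the option that looks best at first blush."""
    return beaten_by[beaten_by[first_blush]]

# beaten_by[x] is the option that beats x.
BEATEN_BY = {"Rock": "Paper", "Paper": "Scissors", "Scissors": "Rock"}

# If Rock looks best naively, the crowd plays Rock, the savvy play
# Paper to beat the crowd, and this rule plays what beats Paper.
print(counter_pick(BEATEN_BY, "Rock"))  # Scissors
```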
As an extra added bonus, I’ll mention the “revelation principle” which applies in games where there is some uncertainty regarding the other guy’s payoffs or constraints. (No math or proofs here—I just want to sketch the problem and assert the solution. For proofs and details, see a good game theory textbook, such as Myerson.)
For example, say that Player1 has arthritis and can’t form a fist for “rock” without some pain. As a result, this player suffers a penalty of −1⁄3 whenever he plays “rock”—meaning he only has a net payoff of 2⁄3 against “scissors” and actually loses 1⁄3 against “rock” or “paper”. Player2 has heard rumors about Player1’s arthritis, but doesn’t know whether the pain occurs when Player1 plays “rock” or whether it is “paper” that causes the problem. She assigns a Bayesian prior probability of 0.5 to each kind of arthritis. Player1 knows about these priors—that is, these priors are common knowledge.
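To make those numbers concrete, here is a small sketch (mine; the penalty logic is just the win = 1, draw/loss = 0 variant minus 1⁄3) printing Player1’s payoff table under the rock-arthritis type:

```python
from fractions import Fraction

BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}
STRATS = list(BEATS)

def player1_payoff(me, opponent, painful="Rock"):
    """Win = 1, draw/loss = 0, minus a 1/3 penalty for the painful shape."""
    base = 1 if BEATS[me] == opponent else 0
    return base - (Fraction(1, 3) if me == painful else 0)

for me in STRATS:
    print(me, [str(player1_payoff(me, opp)) for opp in STRATS])
# Rock row: -1/3 vs Rock, -1/3 vs Paper, 2/3 vs Scissors. Player2 only
# knows the penalty attaches to "Rock" or "Paper" with probability 0.5 each.
```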
Clearly, Player1 wants to keep secret from Player2 exactly which form of arthritis he suffers from. Harsanyi (1967 or so) invented the concept of “Bayesian equilibrium” to characterize and calculate solutions to these kinds of games where the players are unsure of each other’s payoffs and beliefs.
Cousin_it just told us about how the basic concept of Nash equilibrium can sometimes be improved to “correlated equilibrium” if the players can find a mutually trusted 3rd person to assist them. Actually, they don’t need a person—a mutually trusted machine will do just fine. So the question naturally arises whether a Bayesian equilibrium can be improved to a correlated equilibrium just as the simpler Nash equilibrium could be improved. The answer is yes—but there is a catch. The catch is called the “revelation principle”.
The principle is that both players have to trust the 3rd person or machine with their secrets. It turns out that there always is a correlated equilibrium in which both players have an incentive to reveal their secrets honestly to the arbitrator (though not to each other) and both players have an incentive to carry out the instructions of the arbitrator.
The algorithms used by the arbitrator in a correlated equilibrium are not rocket science. They can’t be. They have to be understandable and understood by both players. How else can they be convinced that they will do better by trusting the arbitrator than by just playing the game without arbitration?
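To give a rough feel for what the arbitrator does, here is a hypothetical sketch of a “direct mechanism”; all names and the toy distribution are my own invention, not a construction from Myerson:

```python
import random

def arbitrate(reported_types, recommendation_dist):
    """A direct mechanism: players report their private types; the
    arbitrator draws a joint action profile from a pre-announced
    distribution (which may depend on the reports). In deployment,
    each entry would be sent privately to the named player alone."""
    dist = recommendation_dist(reported_types)   # list of (profile, weight)
    profiles, weights = zip(*dist)
    return random.choices(profiles, weights=weights)[0]

# A toy distribution for illustration: uniform over the six non-draw
# pairs, ignoring the reports. (A real optimal mechanism would
# condition the distribution on the reported arthritis type.)
def toy_dist(reports):
    strats = ["Rock", "Paper", "Scissors"]
    return [({"Player1": a, "Player2": b}, 1)
            for a in strats for b in strats if a != b]

print(arbitrate({"Player1": "rock-arthritis", "Player2": None}, toy_dist))
```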
So if the game players are two powerful AIs, they will want to use a very simple and secure computer system as the jointly trusted arbiter. I can’t help but wonder whether the best way to avoid uFAIs that secretly seek to take over the world is to somehow make use of the revelation principle.
I love the way Ken Binmore explains the revelation principle in “Fun and Games” in the context of auctions and mechanism design. Unfortunately a full explanation would take too much space here, but it’s one of the really beautiful results in game theory. Here’s an example of what it’s good for: suppose you want to sell a house and you have a Bayesian prior over the prices that different buyers might be willing to pay. You’re faced with the problem of setting up an auction that would maximize the expected selling price. Now there are many types of auctions (first-price sealed-bid, Vickrey, English), and, to quote the book:
...There are many others. She might set entry fees to be paid by all bidders. She might designate minimum bids, or otherwise restrict the bids that can be made. She might even seed the auction room with shills primed to push the bidding up if things look slow. The possibilities are bewilderingly large...
Nevertheless, the “revelation principle” offers the seller a rigorous way to maximize over this whole space of possible auctions and design the best possible mechanism (which will in fact be different for different priors over buyers). It’s a startling result, but the trick is quite obvious once you understand it.
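Here is a tiny taste of the kind of answer the machinery produces. For a single buyer whose value the seller believes is uniform on [0, 1], the optimal mechanism collapses to a take-it-or-leave-it price, and a one-line search (my sketch, not Binmore’s) recovers the classic reserve price of 1⁄2:

```python
# A single buyer's value is believed Uniform[0, 1]; a posted price p is
# accepted iff value >= p, so expected revenue is p * (1 - p).
prices = [i / 1000 for i in range(1001)]
best = max(prices, key=lambda p: p * (1 - p))
print(best, best * (1 - best))  # 0.5 0.25: the classic Myerson reserve price
```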
Overall, I feel that “Fun and Games” is the best book I’ve ever owned, even though it breaks my heart to relegate Cormen/Leiserson/Rivest and Schelling to 2nd and 3rd. I must’ve plugged it here once or twice already.
If I were a seller, I would never use what you term the “best possible mechanism”. This notion of mechanism design, and more generally of rational play, is certainly interesting mathematics, but in practice it often leads to mechanisms that consistently perform very badly (sometimes it gives good mechanisms, but that is no thanks to the formalism). I don’t think the revelation principle really has much to say about practical mechanisms. I’m not arguing that mechanism design shouldn’t be approached theoretically; I’m just saying that you might use a different model if you re-examined the maxim “rational play ends at a Nash equilibrium” and its justification.
This notion of mechanism design, and more generally of rational play, is certainly interesting mathematics, but in practice it often leads to mechanisms that consistently perform very badly (sometimes it gives good mechanisms, but that is no thanks to the formalism).
When you say that a mechanism “performs badly”, do you mean that it performs badly for one party (and hence very well for the other party) or do you mean that it performs badly for all parties to the attempted transaction?
I’m just saying that you might use a different model if you re-examined the maxim “rational play ends at a Nash equilibrium” and its justification.
Could you re-examine the maxim “rational play ends at a Nash equilibrium”? The usual justification is that rational play cannot possibly end anywhere else—otherwise one rational player or the other would change strategies. What is wrong with that, in a two-person game? For that matter, doesn’t the justification still work when there are many players?
By performs badly, I meant that it fails to exhibit the properties its designers imagined, or “proved.” For example, if the designers prove that this mechanism generates the maximum possible revenue and the mechanism ends up generating no revenue when deployed in practice, I would say it performs badly. Similarly, if the mechanism is intended to maximize the social welfare but then selects a Pareto-inefficient outcome, I would say that it performs badly.
When I say that rational play may not “end” at a Nash equilibrium, I mean that when rational players (fully aware of each others’ rationality, etc.) sit down to play a game, we should not be too confident that they will play a Nash equilibrium. I think my objection to your reasoning is that the players are not sequentially given opportunities to deviate; they choose a strategy and then play it. That is the definition of a strategy; if you are allowed to change your strategy iteratively then you are playing a new game, in which the strategy set has simply been enlarged and to which a similar criticism applies. Here is an example which at least calls into doubt the normal justification.
Suppose that a mechanism with two Nash equilibria, A and B, is deployed commonly at auctions all around the world. Because the mechanism was carefully designed, the goods are allocated efficiently at both equilibria. In Italy, everyone plays equilibrium A. Knowing this, and aware of the rationality of the average Italian, all Italians participating at auctions in Italy select equilibrium A. In America, everyone plays equilibrium B. Knowing this, and aware of the rationality of the average American, all Americans participating at auctions in America select equilibrium B. Now a poor American tourist in Italy participates in an auction, and (ignorant of the world as Americans are) he tries to play equilibrium B. Consequently, the auction fails to allocate goods efficiently—the mechanism made no guarantees about what happened when some individuals played from one equilibrium and others played from a different equilibrium. The American’s failure cannot be attributed to some failure of rationality; after playing, the Italians might also all wish that they had changed their strategy. This is also a real problem for mechanisms; there are certain classes of problems and mechanisms for which you can prove that this sort of thing will always be possible. You can try, as the mechanism designer, to suggest an equilibrium A to the players. But if one equilibrium is better for some players and worse for others, why should the players automatically accept your proposed equilibrium? If they all choose to play another Nash equilibrium B which is better for all of them, are they behaving irrationally?
(Of course this example does not apply to dominant strategy truthful mechanisms.)
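To see the structure of the example in miniature, here is a toy coordination game (payoff numbers invented by me): both (A, A) and (B, B) survive the Nash test, so rationality alone cannot tell the tourist which one he is in, and miscoordination wrecks the outcome.

```python
# Rows: player 1's strategy; columns: player 2's. Entries: (u1, u2).
# Toy numbers of mine: B is better for everyone, but A is still a
# Nash equilibrium, so "rationality" alone can't rule it out.
PAYOFFS = {
    ("A", "A"): (1, 1),
    ("B", "B"): (2, 2),
    ("A", "B"): (0, 0),   # the tourist's situation: the efficient
    ("B", "A"): (0, 0),   # allocation fails under miscoordination
}

def is_nash(s1, s2):
    u1, u2 = PAYOFFS[(s1, s2)]
    best1 = max(PAYOFFS[(d, s2)][0] for d in "AB")
    best2 = max(PAYOFFS[(s1, d)][1] for d in "AB")
    return u1 == best1 and u2 == best2

for profile in PAYOFFS:
    print(profile, is_nash(*profile))  # only (A, A) and (B, B) pass
```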
By performs badly, I meant that it fails to exhibit the properties its designers imagined, or “proved.” For example, if the designers prove that this mechanism generates the maximum possible revenue and the mechanism ends up generating no revenue when deployed in practice, I would say it performs badly.
Thx for the reference to Abreu-Matsushima (in your response to noitanigami). I wasn’t familiar with that. Nonetheless, to the extent I understood, the mechanism only fails in the case of collusion among the bidders (violating an assumption of the proof—right?). And the seller can protect himself by placing a bid of his own on each item in the lot (a minimum selling price).
Similarly, if the mechanism is intended to maximize the social welfare but then selects a Pareto-inefficient outcome, I would say that it performs badly.
I assume you are referring to VCG here. Yeah, chalk up another failure for excessively iterated removal of dominated strategies. It seems we really do need a theory of “trembling brain equilibrium”. But, then, short of a fully competitive market, nothing achieves Pareto optimality, so I don’t think VCG should be judged too harshly. It is not a practical mechanism, but it is somewhat enlightening.
Regarding Nash equilibrium:
Here is an example which at least calls into doubt the normal justification.
But your example of the American playing B while the Italians play A is not a Nash equilibrium. Your example only demonstrates that it is foolish to promote a mechanism for which the equilibrium is not unique.
To clarify: Abreu-Matsushima fails in practice, regardless of whether there is collusion (and certainly it fails entirely if there is a coalition of even two players). VCG is dominant strategy truthful, but fails in the presence of even two colluding players. I agree that VCG is extremely interesting, but I also think that you should not consider the problem solved once you know VCG. Also, there are mechanisms which do much better than competitive markets can hope to. The question now is how well a benevolent dictator can allocate goods (or whatever you are trying to do).
I agree that my example is not a Nash equilibrium. The point was that rational players may not play a Nash equilibrium. If your notion of a reasonable solution is “it works at equilibria” then sure, this isn’t a counterexample. But presumably the minimal thing you would want is “it works when the players are all perfectly rational and don’t collude”, which this example shows isn’t even satisfied if there are multiple Nash equilibria.
Most mechanisms don’t have a unique Nash equilibrium. The revelation principle also doesn’t preserve the uniqueness of a Nash equilibrium, if you happened to have one at the beginning.
A Nash equilibrium is frequently not Pareto efficient; if everyone changed their strategy at once, everyone could do better.
The Traveler’s Dilemma is a game that’s similar to the Prisoner’s Dilemma, and humans usually don’t play the Nash equilibrium strategy; see the sketch after this comment.
In other words,
This notion of mechanism design, and more generally of rational play, is certainly interesting mathematics, but in practice it often leads to mechanisms that consistently perform very badly (sometimes it gives good mechanisms, but that is no thanks to the formalism)
means “people often don’t behave the way game theory says they should, and assuming that they will is often foolish.”
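Here is the promised check of the Traveler’s Dilemma, using the standard formulation (claims from 2 to 100; both get the lower claim, with a ±2 bonus/penalty); the code is my own sketch:

```python
# Traveler's Dilemma: each player claims an amount in 2..100. Both are
# paid the lower claim; the lower claimant gets a +2 bonus and the
# higher a -2 penalty. (Standard formulation, not quoted from the thread.)
def payoff(mine, theirs):
    if mine == theirs:
        return mine
    return min(mine, theirs) + (2 if mine < theirs else -2)

claims = range(2, 101)

def is_nash(c1, c2):
    return (payoff(c1, c2) == max(payoff(d, c2) for d in claims)
            and payoff(c2, c1) == max(payoff(d, c1) for d in claims))

nash = [(c1, c2) for c1 in claims for c2 in claims if is_nash(c1, c2)]
print(nash)  # [(2, 2)] -- yet humans overwhelmingly claim high numbers
```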
A Nash equilibrium is frequently not Pareto efficient; if everyone changed their strategy at once, everyone could do better.
If everyone does better at a different Nash equilibrium, then that just shows that being a NE is necessary, but not sufficient for mutual rationality.
If everyone does better at a joint strategy that is not an NE (PD, for example), then one of the players is not playing rationally—he could do better with another strategy, assuming the other player stands pat.
… people often don’t behave the way game theory says they should, and assuming that they will is often foolish.
Assuming that they won’t be rational can often be foolish too.
Rational-agent game theory is not claimed to have descriptive validity; its validity is prescriptive or normative. Or, to be more precise, it provides normatively valid advice to you, under the assumption that it is descriptively valid for everyone else.
And yes, I do appreciate that this is a very weird kind of validity for a body of theory to claim for itself.
Can you give examples of a mathematical proof leading to an ineffective mechanism?
For example, the Abreu-Matsushima mechanism implements essentially any implementable property at the unique rationalizable Nash equilibrium (a much stronger guarantee than anything preserved by the revelation principle). If you actually use the Abreu-Matsushima mechanism you will find that it basically never works unless the players want it to work (and often not even then), as has been verified empirically.
The VCG mechanism maximizes social welfare whenever the players play undominated strategies. In practice, the possibility of even very weak collusion between two players destroys this guarantee.
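For readers who haven’t met VCG: its single-item special case is the Vickrey (second-price) auction. A short sketch of mine shows the dominant-strategy truthfulness, and how a two-bidder ring already changes the outcome (in the single-item case it only hurts revenue; the welfare failure claimed above needs richer settings):

```python
def vickrey(bids):
    """Second-price auction, the single-item special case of VCG:
    highest bidder wins and pays the second-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

values = {"alice": 10, "bob": 7, "carol": 5}   # hypothetical true values

# Truthful bidding: alice wins and pays bob's bid of 7, not her own.
print(vickrey(dict(values)))                    # ('alice', 7)

# Shading below your value only ever risks losing a profitable win:
print(vickrey(dict(values, alice=6)))           # ('bob', 6): alice forgoes surplus 3

# A two-bidder ring: bob sits out and the price collapses from 7 to 5.
print(vickrey(dict(values, bob=0)))             # ('alice', 5)
```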
In general, the claim that rational players will always choose a Nash equilibrium has little empirical support, and in fact doesn’t have very good theoretical support either outside of two-player zero-sum games (of course in the OP the game is two-player and zero-sum; there my complaint is that common knowledge of rationality is a bad assumption).
The Nash equilibrium suggests playing randomly in matching pennies—yet you can do much better than that if facing an irrational opponent—such as a typical unmodified human. The Nash equilibrium is for when both players play rationally.
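A toy illustration of that exploitation, with a made-up opponent bias (everything here is my own sketch): a frequency-counting player collects the biased opponent’s whole edge, while the Nash-random player merely breaks even.

```python
import random

random.seed(0)

def biased_opponent():
    # A made-up irrational opponent: plays Heads 70% of the time.
    return "H" if random.random() < 0.7 else "T"

def play(strategy, rounds=10000):
    counts, score = {"H": 0, "T": 0}, 0
    for _ in range(rounds):
        mine = strategy(counts)
        theirs = biased_opponent()
        counts[theirs] += 1
        score += 1 if mine == theirs else -1   # we win by matching
    return score / rounds

nash = lambda counts: random.choice("HT")              # Nash: uniform random
exploit = lambda counts: max(counts, key=counts.get)   # match their habit

print(play(nash))     # ~0.0: the equilibrium only guarantees breaking even
print(play(exploit))  # ~0.4: 0.7 - 0.3, the biased opponent's whole edge
```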
I actually briefly considered mentioning correlated equilibria, but the post was getting long already.