By “performs badly,” I meant that it fails to exhibit the properties its designers imagined, or “proved.” For example, if the designers prove that a mechanism generates the maximum possible revenue, and the mechanism then generates no revenue when deployed in practice, I would say it performs badly. Similarly, if the mechanism is intended to maximize social welfare but selects a Pareto-inefficient outcome, I would say that it performs badly.
When I say that rational play may not “end” at a Nash equilibrium, I mean that when rational players (fully aware of each other’s rationality, etc.) sit down to play a game, we should not be too confident that they will play a Nash equilibrium. I think my objection to your reasoning is that the players are not sequentially given opportunities to deviate; they choose a strategy and then play it. That is the definition of a strategy: if you are allowed to change your strategy iteratively, then you are playing a new game, one in which the strategy set has simply been enlarged and to which a similar criticism applies. Here is an example which at least calls into doubt the normal justification.
Suppose that a mechanism with two Nash equilibria, A and B, is deployed commonly at auctions all around the world. Because the mechanism was carefully designed, the goods are allocated efficiently at both equilibria. In Italy, everyone plays equilibrium A. Knowing this, and aware of the rationality of the average Italian, all Italians participating in auctions in Italy select equilibrium A. In America, everyone plays equilibrium B. Knowing this, and aware of the rationality of the average American, all Americans participating in auctions in America select equilibrium B. Now a poor American tourist in Italy participates in an auction, and (ignorant of the world as Americans are) he tries to play equilibrium B. Consequently, the auction fails to allocate goods efficiently—the mechanism made no guarantees about what happens when some individuals play according to one equilibrium and others play according to a different one. The American’s failure cannot be attributed to some failure of rationality; after playing, the Italians might also all wish that they had changed their strategy. This is also a real problem for mechanisms; there are certain classes of problems and mechanisms for which you can prove that this sort of miscoordination will always be possible. You can try, as the mechanism designer, to suggest an equilibrium A to the players. But if one equilibrium is better for some players and worse for others, why should the players automatically accept your proposed equilibrium? And if they all choose to play another Nash equilibrium B which is better for all of them, are they behaving irrationally?
(Of course this example does not apply to dominant strategy truthful mechanisms.)
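The Italy/America story can be made concrete with a toy two-player coordination game. The payoff numbers below are invented purely for illustration; the sketch just brute-forces the equilibrium check.

```python
# A two-player coordination game with two pure Nash equilibria, (A, A) and
# (B, B); miscoordinating (one player at each equilibrium) yields nothing.
# Payoffs are (row player, column player); the numbers are illustrative.
PAYOFFS = {
    ('A', 'A'): (2, 2),
    ('B', 'B'): (2, 2),
    ('A', 'B'): (0, 0),
    ('B', 'A'): (0, 0),
}

def is_nash(profile):
    """True if neither player gains by unilaterally deviating."""
    s1, s2 = profile
    u1, u2 = PAYOFFS[profile]
    return (all(PAYOFFS[(d, s2)][0] <= u1 for d in 'AB')
            and all(PAYOFFS[(s1, d)][1] <= u2 for d in 'AB'))

# Both (A, A) and (B, B) pass the equilibrium check, but (A, B) -- the
# Italian playing A against the American tourist playing B -- does not.
```

Nothing in the mechanism itself tells the tourist which of the two equilibria is being played, which is exactly the point of the example.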
By “performs badly,” I meant that it fails to exhibit the properties its designers imagined, or “proved.” For example, if the designers prove that a mechanism generates the maximum possible revenue, and the mechanism then generates no revenue when deployed in practice, I would say it performs badly.
Thanks for the reference to Abreu-Matsushima (in your response to noitanigami). I wasn’t familiar with that. Nonetheless, to the extent I understood it, the mechanism only fails in the case of collusion among the bidders (violating an assumption of the proof—right?). And the seller can protect himself by placing a bid of his own on each item in the lot (a minimum selling price).
Similarly, if the mechanism is intended to maximize social welfare but selects a Pareto-inefficient outcome, I would say that it performs badly.
I assume you are referring to VCG here. Yeah, chalk up another failure for excessively iterated removal of dominated strategies. It seems we really do need a theory of “trembling brain equilibrium”. But, then, short of a fully competitive market, nothing achieves Pareto optimality, so I don’t think VCG should be judged too harshly. It is not a practical mechanism, but it is somewhat enlightening.
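Since VCG comes up here: a single-item second-price (Vickrey) auction is its simplest instance, and the standard dominant-strategy claim can be checked by brute force. This is a minimal sketch; the bidder labels, the value of 7, and the bid grid are all invented for illustration.

```python
def vickrey(bids):
    """Second-price auction: highest bid wins and pays the second-highest bid.
    Ties are broken by dict insertion order (the earlier bidder wins)."""
    winner = max(bids, key=bids.get)
    price = max(v for b, v in bids.items() if b != winner)
    return winner, price

def utility(value, my_bid, rival_bid):
    """Payoff to bidder 'me' with the given true value, bid, and rival bid."""
    winner, price = vickrey({'rival': rival_bid, 'me': my_bid})
    return value - price if winner == 'me' else 0

# Truthful bidding is weakly dominant: for every rival bid on the grid,
# no deviation does strictly better than bidding one's true value.
VALUE = 7
for rival in range(11):
    truthful = utility(VALUE, VALUE, rival)
    assert all(utility(VALUE, dev, rival) <= truthful for dev in range(11))
```

The loop is only a finite check, of course, but it is the same argument as the usual proof: changing your bid only ever changes whether you win, never the price you pay when you do.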
Regarding Nash equilibrium:
Here is an example which at least calls into doubt the normal justification.
But your example of the American playing B while the Italians play A is not a Nash equilibrium. Your example only demonstrates that it is foolish to promote a mechanism for which the equilibrium is not unique.
To clarify: Abreu-Matsushima fails in practice, regardless of whether there is collusion (and certainly it fails entirely if there is a coalition of even two players). VCG is dominant strategy truthful, but fails in the presence of even two colluding players. I agree that VCG is extremely interesting, but I also think that you should not consider the problem solved once you know VCG. Also, there are mechanisms which do much better than competitive markets can hope to. The question now is how well a benevolent dictator can allocate goods (or whatever you are trying to do).
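The collusion failure of VCG can be seen in a tiny combinatorial auction. Below is a brute-force VCG sketch with invented numbers: one bidder wants the pair of items for 10, and two single-unit bidders each truly value an item at 3. Reporting truthfully, the single-unit bidders lose; jointly overreporting a value of 10, they win both items and pay nothing.

```python
from itertools import product

ITEMS = ('a', 'b')

def vcg(valuations):
    """valuations: dict bidder -> function from frozenset of items to value.
    Returns (allocation, payments) under the VCG mechanism."""
    def allocations(bidders):
        # every way of assigning each item to one of the bidders
        for owners in product(bidders, repeat=len(ITEMS)):
            yield {b: frozenset(i for i, o in zip(ITEMS, owners) if o == b)
                   for b in bidders}

    def welfare(alloc):
        return sum(valuations[b](bundle) for b, bundle in alloc.items())

    def best(bidders):
        return max(allocations(bidders), key=welfare)

    bidders = list(valuations)
    alloc = best(bidders)
    payments = {}
    for b in bidders:
        others = [x for x in bidders if x != b]
        # payment = externality b imposes on the others (at reported values)
        payments[b] = welfare(best(others)) - (welfare(alloc)
                                               - valuations[b](alloc[b]))
    return alloc, payments

pair = lambda S: 10 if len(S) == 2 else 0   # wants both items or nothing
single = lambda S: 3 if S else 0            # true single-unit value
shill = lambda S: 10 if S else 0            # the colluders' joint overreport

alloc, pay = vcg({'big': pair, 'x': single, 'y': single})
# truthful reports: 'big' wins both items and pays 6

alloc2, pay2 = vcg({'big': pair, 'x': shill, 'y': shill})
# colluding reports: 'x' and 'y' each win one item and pay 0,
# netting 3 each at their true values instead of 0
```

Each colluder's payment is zero because, at the reported values, removing either of them leaves the others just as well off; truth-telling is dominant only against unilateral deviations, and a coalition of two is enough to break it.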
I agree that my example is not a Nash equilibrium. The point was that rational players may not play a Nash equilibrium. If your notion of a reasonable solution is “it works at equilibria,” then sure, this isn’t a counterexample. But presumably the minimal thing you would want is “it works when the players are all perfectly rational and don’t collude,” which this example shows isn’t even satisfied if there are multiple Nash equilibria.
Most mechanisms don’t have a unique Nash equilibrium. The revelation principle also doesn’t preserve the uniqueness of a Nash equilibrium, if you happened to have one to begin with.