Incentive compatibility and the Revelation Principle

In which the Revelation Principle is introduced, showing that all mechanisms can be reduced to incentive compatible mechanisms. With this insight, a solution (of sorts) is given to the public good problem from the last post. Limitations of the Revelation Principle are also discussed.

The formalism I introduced last time will now start paying off. We were left with the question of how to check whether a mechanism exists that satisfies some particular goal, naively requiring us to search over all possible procedures. Luckily though, the space of all possible mechanisms can be reduced to something manageable.

Observe the following: suppose through divine intervention we were granted a mechanism (M, g) that implements a social choice function f. In other words, the outcome when agents of types θ1, …, θn interact with the mechanism is exactly the outcome prescribed by the function f for those types. Since a person’s type encodes all relevant variation in their characteristics, preferences, or beliefs, we’d know what that person wants to do if we knew their type. In particular, when agent i is type θi, we expect her choice of message to be mi(θi). Once all the messages are sent, the function g translates the agents’ choices into an outcome so that g(m1(θ1), …, mn(θn)) = f(θ1, …, θn).

But wait! Since we expect a type θi agent to send message mi(θi), why don’t we just package that inside the mechanism? Each agent will tell us their type and the mechanism designer will play as a proxy for the agent according to the original mechanism. We’ll call this the direct mechanism for f since all we need to know is that agents tell us their types and are then assigned the outcome f(θ), no matter what we’ve blackboxed in the middle.
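To make the packaging concrete, here’s a minimal sketch of the construction in Python. The names `strategies` and `g` are stand-ins for whatever the original mechanism happens to specify, not anything canonical:

```python
def make_direct_mechanism(strategies, g):
    """Wrap an indirect mechanism into the direct mechanism for f.

    strategies[i]: maps agent i's type to her equilibrium message mi(θi).
    g: maps a profile of messages to an outcome.
    """
    def direct(reported_types):
        # Play each agent's equilibrium message on her behalf...
        messages = [m(t) for m, t in zip(strategies, reported_types)]
        # ...then apply the original outcome rule.
        return g(messages)
    return direct
```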

Why did we expect an agent of type θi to send message mi(θi)? Presumably because that message maximized her utility. In particular, it had to be at least as good as the message mi(θʹi) she’d send when her type is different, giving us (writing θ−i for the types of everyone besides agent i):

ui(g(mi(θi), m−i(θ−i)), θi) ≥ ui(g(mi(θʹi), m−i(θ−i)), θi) for all θʹi and all θ−i.

Since the outcomes of the mechanism coincide with f, we can conclude

ui(f(θi, θ−i), θi) ≥ ui(f(θʹi, θ−i), θi) for all θʹi and all θ−i.

Even though we started off with some mechanism, this last statement doesn’t say anything about the mechanism itself, only the social choice function it implements. Let’s call any social choice function f that satisfies this constraint for every agent and for all possible types of others strategy-proof or dominant-strategy incentive compatible (DSIC). Note that this can add up to a lot of constraints.
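
Since the DSIC condition is just a long list of inequalities, it can be checked mechanically when type spaces are finite. Here’s a brute-force sketch, with `f`, `utilities`, and `type_spaces` as hypothetical stand-ins for whatever social choice function, preferences, and types you have in hand:

```python
from itertools import product

def is_dsic(f, utilities, type_spaces):
    """Brute-force DSIC check over finite type spaces.

    f: maps a tuple of types (one per agent) to an outcome.
    utilities[i]: maps (outcome, agent i's true type) to a number.
    type_spaces[i]: the finite set of agent i's possible types.
    """
    n = len(type_spaces)
    for profile in product(*type_spaces):       # every profile of true types
        for i in range(n):
            truthful = utilities[i](f(profile), profile[i])
            for lie in type_spaces[i]:          # every misreport by agent i
                deviation = profile[:i] + (lie,) + profile[i + 1:]
                if utilities[i](f(deviation), profile[i]) > truthful:
                    return False                # agent i gains by lying
    return True
```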

These observations lead us to the following powerful tool:

The Revelation Principle (for dominant strategies): A social choice function f is implementable in dominant strategies if and only if it is dominant-strategy incentive compatible. Alternatively, every mechanism that implements f in dominant strategies is equivalent to a DSIC direct mechanism.

Rather than requiring an agent to always (weakly) prefer the outcome from reporting their true type, it might suffice that agents get greater utility on average for their true type:

Eθ−i[ui(f(θi, θ−i), θi)] ≥ Eθ−i[ui(f(θʹi, θ−i), θi)] for all θi and θʹi,

where the expectation is taken over the types of the other agents.

Social choice functions f that satisfy this constraint for each agent are Bayesian incentive compatible (BIC). Just as we had a Revelation Principle for dominant strategies, we have an analogous one for Bayes-Nash equilibrium:

The Revelation Principle (for Bayes-Nash equilibrium): A social choice function f is implementable in Bayes-Nash equilibrium if and only if it is Bayesian incentive compatible.
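
The BIC inequalities can be brute-forced the same way once we average over a prior. The sketch below assumes finite, independently distributed types, with `priors[j]` a hypothetical dictionary giving the probability of each of agent j’s types, and `f`, `utilities`, and `type_spaces` as in the DSIC checker above:

```python
from itertools import product

def is_bic(f, utilities, type_spaces, priors):
    """Brute-force BIC check over finite, independently distributed types.

    priors[j]: dict mapping each type in type_spaces[j] to its probability.
    """
    n = len(type_spaces)

    def others(i):
        return [j for j in range(n) if j != i]

    def expected_utility(i, true_type, report):
        # Average agent i's utility over the other agents' types.
        total = 0.0
        for rest in product(*(type_spaces[j] for j in others(i))):
            profile = rest[:i] + (report,) + rest[i:]
            prob = 1.0
            for j, t in zip(others(i), rest):
                prob *= priors[j][t]
            total += prob * utilities[i](f(profile), true_type)
        return total

    return all(
        expected_utility(i, t, t) >= expected_utility(i, t, lie) - 1e-12
        for i in range(n)
        for t in type_spaces[i]
        for lie in type_spaces[i]
    )
```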

As a check on whether you’re following, consider these questions:

  1. Suppose you have a social choice function f that is (dominant-strategy or Bayesian) incentive compatible for the type spaces Θ1, …, Θn. Now, throw out some types for each agent. Is the restriction of f to these smaller type spaces still incentive compatible?

  2. Is it easier for a social choice function to be DSIC or BIC?

Answers in the footnotes.1

With the reduction provided by the various versions of the Revelation Principle, we can get away with analyzing only incentive compatible direct mechanisms rather than all possible ones. If a social choice function isn’t incentive compatible, then it can’t be implemented, no matter how fancy we get.

There are many reasons why we wouldn’t want to use a direct mechanism in practice: full types might be too complex to communicate easily, agents might be worried about privacy, or agents might not trust the mechanism operator to use the information as promised (changing the rules of the game after the fact, despite a claimed commitment to a particular outcome)2, 3. Still, direct mechanisms are very straightforward for people to use. Rather than investing time to figure out how to play the game, the participants only have to consider what their true preferences are.

In any case, direct mechanisms are very handy theoretically. From this perspective, all incentive problems boil down to encouraging agents to be honest about their private information. The only leeway we have is whether being honest should be a dominant strategy, a best response to the honesty of others, a minimizer of the agent’s maximum regret, etc.

Returning to the painting puzzle

In our housemate story, we don’t need to consider all possible mechanisms; we only need to consider procedures where each housemate writes down a valuation and both have an incentive to be honest about their true preferences. This is still a big space of possible mechanisms, but it is much more manageable. Think for a moment about what direct mechanism you’d recommend to the housemates.

Here is one procedure that always makes the efficient decision, though it’s less than ideal: Collect valuations simultaneously. If the sum of the two reports θJack + θJill is at least $300, the room will be painted. Jack’s payment will be pJack = 300 − θJill if the room is painted and zero otherwise. Similarly, Jill’s payment will be pJill = 300 − θJack when the room is painted and zero otherwise.

Take a moment to convince yourself that honesty is (weakly) dominant for each person under this mechanism. For instance, suppose Jack’s value is $120. What would happen if he reports above this, say at $150? If Jill reports something less than $150, then the room isn’t painted and Jack’s utility is zero, the same as when he is honest. If Jill reports between $150 and $180, the room is painted and Jack gets a payoff of 120 − (300 − θJill) = θJill − 180 < 0, less than his payoff from honesty, where the room isn’t painted. If Jill reports above $180, the room is painted whether Jack reports honestly at $120 or dishonestly at $150, so the payoff is the same. Similar reasoning goes through for any report Jack considers below $120.
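
If you’d rather have a computer do the convincing, here’s a quick numeric spot check of the same claim on a coarse grid of dollar amounts (a sanity check under the mechanism as stated, not a proof):

```python
def painted(r_jack, r_jill):
    return r_jack + r_jill >= 300

def jack_payoff(value, r_jack, r_jill):
    # Jack pays 300 minus Jill's report when the room is painted, else nothing.
    return value - (300 - r_jill) if painted(r_jack, r_jill) else 0

# Truthful reporting should be weakly better than any misreport,
# no matter what Jill reports.
grid = range(0, 301, 10)
assert all(jack_payoff(v, v, r) >= jack_payoff(v, lie, r)
           for v in grid for lie in grid for r in grid)
print("Truth-telling is weakly dominant for Jack on the grid.")
```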

So what’s the issue here? Look at the sum of the payments. When the room is painted, Jack and Jill have a $300 bill, but the mechanism only collects (300 − θJack) + (300 − θJill) = 600 − (θJack + θJill) ≤ 300. While the mechanism is efficient, it assumes extra money is coming from somewhere! If the individual values could be anywhere between $0 and $300, then the deficit could be as large as $300.

In an attempt to solve this budget issue, we could tweak the payments to the following: pJack = 450 − θJill and pJill = 450 − θJack when the room is painted, and both pay 150 when it isn’t. Honest reporting is still a dominant strategy (we’ve only shifted everyone’s payment by a constant $150) and there is never a budget shortfall, but now the two are paying out at least $300 in total whether or not the room is painted. It’s hard to imagine them agreeing to use a procedure that could leave them worse off than never having considered the renovation.
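
A short sketch comparing the two payment rules makes the trade-off plain. The `shift` parameter is just a device for writing both rules at once ($0 for the original, $150 for the tweak):

```python
def total_payments(r_jack, r_jill, shift):
    # shift=0 gives the original rule; shift=150 the tweaked one.
    if r_jack + r_jill >= 300:
        return (300 - r_jill + shift) + (300 - r_jack + shift)
    return 2 * shift

grid = range(0, 301, 10)
for shift, label in [(0, "original"), (150, "tweaked")]:
    worst = min(total_payments(a, b, shift)
                - (300 if a + b >= 300 else 0)  # subtract the painting bill
                for a in grid for b in grid)
    print(f"{label} rule: worst-case budget balance = {worst}")
# original rule: worst-case budget balance = -300
# tweaked rule: worst-case budget balance = 0
```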

Is there a mechanism that makes the efficient decision but avoids both these pitfalls, never forcing someone to pay more than their valuation while still covering the bill? Alas, this turns out to be impossible.

Second-best mechanisms and the Myerson-Satterthwaite impossibility

Myerson and Satterthwaite (1983) prove that no mechanism exists that paints the room exactly when it’s efficient without requiring an outside subsidy or making one agent worse off for having participated. The original paper was framed in terms of a single buyer and seller considering whether to trade an item, showing that perfectly efficient trade is impossible in the presence of incomplete information when types are independently distributed and trade isn’t a foregone conclusion4.

Rather than looking for the “first-best” mechanism that always makes the efficient decision, we’re going to be forced to find a “second-best” mechanism that maximizes welfare while still satisfying our constraints. Surprisingly, the naive “vote and split the cost” mechanism is the best feasible procedure in dominant strategies. Restated in direct mechanism terms, if both submit a valuation greater than $150, the room is painted and they split the cost equally. Jeff Ely provides a very nice graphical proof of this fact here.

Honesty as a dominant strategy is a compelling feature of a mechanism: agents don’t have to put any thought into what others might do. Requiring honesty to be dominant might be overly strict though. Not all games have dominant strategies. On the other hand, we expect all (well-behaved) games to have a Nash equilibrium. If we weaken dominant-strategy IC to Bayesian IC, we can do slightly better, but full efficiency is still impossible. The best feasible direct mechanism, assuming values are uniform between $0 and $300, is to paint the room if and only if the values sum to more than $375, with each paying $150 and the person with the high valuation giving the other one-third of the difference in their valuations, i.e.

pJack = 150 + (θJack − θJill)/3 when the room is painted and zero otherwise,

and similarly for Jill. This direct mechanism corresponds to a Bayes-Nash equilibrium of the following, more intuitive mechanism: each writes down a bid and the room is painted if the bids sum to more than $300, with the excess of the total bids over $300 refunded to them equally (so if Jack bids $200 and Jill bids $150, Jack pays $175 and Jill pays $125).
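
For concreteness, here’s a small sketch of this second-best direct mechanism using the payment formula above; it confirms the payments sum to exactly $300 whenever the room is painted:

```python
def second_best(theta_jack, theta_jill):
    """Paint iff the reported values sum to more than $375; each pays
    $150 plus or minus one-third of the gap in valuations."""
    if theta_jack + theta_jill <= 375:
        return False, 0.0, 0.0
    p_jack = 150 + (theta_jack - theta_jill) / 3
    p_jill = 150 + (theta_jill - theta_jack) / 3
    return True, p_jack, p_jill

is_painted, p_jack, p_jill = second_best(250, 150)
print(is_painted, round(p_jack, 2), round(p_jill, 2))  # True 183.33 116.67
print(p_jack + p_jill)  # 300.0 -- exactly covers the painting bill
```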

Conclusion

By using the Revelation Principle to reduce the design problem to encouraging the housemates to be honest, we can identify the best feasible mechanism, uncovering a general impossibility about bargaining along the way. In the next post, I’ll delve further into what is possible under dominant-strategy incentive compatibility.

Previously on: Mechanism Design: Constructing Algorithms for Strategic Agents

Next up: Strategyproof Mechanisms: Impossibilities


  1. Question 1: Yes, it is still incentive compatible: the constraints among the remaining types still hold, and the constraints involving the discarded types are simply no longer relevant. Question 2: Bayesian incentive compatibility is the weaker condition, implied by dominant-strategy incentive compatibility, since if the constraint holds for every possible realization of the others’ types, then it must hold on average under any distribution over them.

  2. Cryptography could help solve the privacy and commitment problems of direct mechanisms, though. See Naor et al 1999, “Privacy preserving auctions and mechanism design”.

  3. I’ve also glossed over the issue of full vs. partial implementation. The Revelation Principle guarantees some equilibrium of the direct mechanism coincides with our social choice function. However, there might be other “bad” equilibria of the direct mechanism, while an indirect mechanism might have a unique good equilibrium.

  4. Luckily, the welfare loss disappears quickly as the size of the market grows. Once there are at least six people on each side of the market, the overall welfare loss due to incomplete information is less than 1% (Satterthwaite et al 2014, “Optimality vs Practicality in Market Design”).