Strategyproof Mechanisms: Possibilities

Despite dictatorships being the only strategyproof mechanisms in general, more interesting strategyproof mechanisms exist in specialized settings. I introduce single-peaked preferences and discrete exchange as two fruitful domains.

Strategyproofness is a very appealing property. When interacting with a strategyproof mechanism, a person is never worse off for being honest (at least in a causal decision-theoretic sense), so there is no need to make conjectures about the actions of others. However, the Gibbard-Satterthwaite theorem showed that dictatorships are the only universal strategyproof mechanisms for choosing from three or more outcomes. If we want to avoid dictatorships while keeping strategyproofness, we'll have to narrow our attention to specific applications with more structure. In this post, I'll introduce two restricted domains with more interesting strategyproof mechanisms.

Before jumping into those, I should mention another potential escape route from Gibbard-Satterthwaite: randomization. So far, I've only considered deterministic social choice rules, where a type profile corresponds to a particular outcome. Considering randomized social choice rules, which map a type profile onto a lottery over outcomes, will widen the scope of possibility just slightly. Instead of one person being the dictator, we can flip a coin to decide who is in charge. In particular, any strategyproof mechanism that chooses an outcome with certainty when all agents rank it as their first choice must be a random dictatorship, selecting an agent with some fixed probability and then choosing that agent's favorite. Unlike a deterministic dictatorship, a random dictatorship with equal weights on agents seems palatable enough to use in practice.

Single-peaked preferences

Our first refuge from dictatorship theorems is the land of single-peaked preferences on a line, a tree, or more generally a median graph. These settings put enough structure on preferences that a mechanism doesn't have to waste its power eliciting rankings through dictatorship. Instead, we can deduce enough information from an agent's ideal outcome alone that we have power left over to do something more interesting.

In the previous post, I discussed how strategyproofness comes down to the monotonicity of a social choice rule. Monotonicity says that if the outcome chosen by the rule is your favorite, the outcome can't change if you report a different ranking with the same top element but with other options swapped around. This essentially restricts a strategyproof mechanism to using agents' favorite alternatives as its only inputs.

With two alternatives to choose from, knowing an agent's favorite immediately tells us their full ranking, so we have room to build non-dictatorial mechanisms like majority rule. With three alternatives and unrestricted preferences, knowing an agent's favorite tells us nothing about the ranking of the other two. For example, suppose we have three options: red, yellow, and blue. If two agents agree that red is best, we don't have enough information to say unambiguously whether switching from blue to yellow will make those agents better off or not.

What if instead the three alternatives had some natural ordering? For instance, some housemates have a terrible heating system with three settings: freezing, temperate, and burning. It's reasonable to assume that anyone who loves it hot or cold has "temperate" as their second choice, with no one preferring both extremes to something in the middle. Moving the thermostat from "freezing" to "temperate" makes everyone who has "temperate" or "burning" as their ideal better off and those with "freezing" as their ideal worse off, allowing us to order the alternatives: freezing, temperate, burning.

This ordering allows us to reduce the decision between three alternatives to multiple binary decisions that are consistent with one another. One strategyproof, non-dictatorial mechanism would be to start with the thermostat at "temperate" and hold a vote on whether to increase the temperature. If a majority supports a temperature increase, the thermostat is bumped up and we're done. If that vote fails, another is held on a temperature decrease, with the majority winner of that vote being the final outcome. Another mechanism that gives the same results would be for everyone to submit their ideal temperature and choose the median (with a dummy vote of "temperate" to break ties if there is an even number of housemates). Take a moment to convince yourself this mechanism is strategyproof.

The median vote is well-defined even if the thermostat has more than three temperatures. We can push this all the way and consider every possible temperature. With a continuum of outcomes, it'll be easier to think in terms of a utility function rather than a ranking, though we're still only concerned with ordinal comparisons. Choosing the median will be strategyproof as long as preferences are single-peaked, with a single local maximum and utility falling off as we move further away from it in either direction. Here is one example of a single-peaked preference, depicting the utility for each temperature:
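As a concrete sketch, here is the median rule in a few lines of Python. The numbers are illustrative; I use 20 degrees to stand in for the "temperate" dummy vote:

```python
# Median rule: each housemate reports an ideal temperature and the median
# report is chosen. With an even number of reports, a dummy vote of
# "temperate" (illustratively, 20 degrees) restores an odd count so the
# median is well-defined.
def median_rule(peaks, dummy=20.0):
    reports = sorted(peaks)
    if len(reports) % 2 == 0:
        reports.append(dummy)
        reports.sort()
    return reports[len(reports) // 2]
```

For instance, `median_rule([16, 25, 30])` returns 25, and `median_rule([16, 30])` falls back on the dummy to return 20.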

Since this agent has a complicated ranking over temperatures, asking agents to report only their peaks rather than their full utility functions comes off as a practical advantage rather than a severe constraint.

Consider the thought process of someone deciding which temperature to report, knowing the median report will be chosen as the final outcome. If her ideal is the median report (whether or not the other reports are honest), she has direct control over the outcome as long as she stays the median, so reporting something different makes her worse off. If her ideal is above the median, reporting something higher leaves the outcome unchanged, while reporting something lower either leaves the outcome unchanged or drops it even further from her ideal than it already was. Similar reasoning goes through when her ideal is below the median. Hence she is never worse off for reporting her peak honestly. Contrast this with a mechanism that averages the peaks, which is not strategyproof: an agent whose ideal sits above the average can drag the outcome toward her ideal by exaggerating upward.
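This argument can also be checked by brute force on a small grid of temperatures. A minimal sketch, assuming utility falls off with distance from the peak:

```python
from itertools import product

def median_of(reports):
    # Median of an odd number of reports
    return sorted(reports)[len(reports) // 2]

def median_is_strategyproof(temps=range(5), n_agents=3):
    # For every profile of true peaks and every unilateral misreport,
    # check that the deviator never lands strictly closer to her peak
    # than she would by reporting honestly.
    for profile in product(temps, repeat=n_agents):
        honest = median_of(list(profile))
        for i, peak in enumerate(profile):
            for lie in temps:
                reports = list(profile)
                reports[i] = lie
                if abs(median_of(reports) - peak) < abs(honest - peak):
                    return False
    return True
```

Running `median_is_strategyproof()` confirms that, over this small grid, no agent ever gains by misreporting.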

Choosing an outcome on the real line has many natural applications, like setting the temperature of a thermostat or the sales tax rate of a city. We can go further than lines, though. For instance, single-peaked preferences make sense on a tree, since we can talk unambiguously about moving closer to or further from some point as we travel along the edges. Here is one tree, with an example of single-peaked preferences for it described by utilities for each node and the peak circled:

As on a line, we'll ask agents to report their peaks and choose the median node, i.e. the node that minimizes the total distance to the agents' reports. Suppose there are five agents, labeled a through e, who report the following peaks:

The outcome will be the node circled in red, resulting in agent d getting his first choice since he is the "median" of the five.
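On a tree, the median node can be found by brute force: score each node by its total distance to the reported peaks and take the minimizer. A sketch on a small stand-in tree (the figure's tree isn't reproduced here, so this particular shape is hypothetical):

```python
from collections import deque

def distances_from(graph, start):
    # BFS shortest-path distances in an unweighted graph
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def median_node(graph, reports):
    # Choose the node minimizing the total distance to all reported peaks
    totals = {}
    for node in graph:
        d = distances_from(graph, node)
        totals[node] = sum(d[r] for r in reports)
    return min(totals, key=totals.get)

# A hypothetical five-node tree, as an adjacency list:
#     1
#    / \
#   2   3
#      / \
#     4   5
tree = {1: [2, 3], 2: [1], 3: [1, 4, 5], 4: [3], 5: [3]}
```

With reports at nodes 2, 4, and 5, `median_node(tree, [2, 4, 5])` picks node 3, the branching point closest to all three peaks.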

Going even further, there are non-dictatorial strategyproof mechanisms on any graph where medians are unique and well-defined. We can also tweak the median rule, throwing in some dummy voters or weighting agents differently. However, it turns out that the median is essentially the only strategyproof mechanism that treats agents and outcomes symmetrically.

Discrete exchange

In the previous examples, the mechanism chose a single outcome for all agents. Consider instead a situation where each agent owns one object and agents might want to swap things around. Rather than a single outcome, it makes more sense to think of sub-outcomes describing who gets which object, especially if each agent cares only about what he ends up with and not what the others get. Indifference over the parts of the allocation not involving oneself is another preference restriction that allows us to evade dictatorship theorems.

For example, three Roman soldiers have currently assigned duties, which their eccentric new centurion will allow them to trade around. At the moment, Antonius is a standard-bearer, Brutus is a trumpeter, and Cato is an artilleryman. Even though the full outcome will be a list of who gets what, we consider only each soldier's preference over the three jobs. Perhaps Antonius has preferences trumpeter ≻ standard-bearer ≻ artilleryman, Brutus has preferences standard-bearer ≻ artilleryman ≻ trumpeter, and Cato has preferences trumpeter ≻ standard-bearer ≻ artilleryman. Given this, it's natural to say Antonius and Brutus should switch jobs, with Cato stuck as an artilleryman.

We can formalize this inclination using David Gale's Top Trading Cycles algorithm, which operates as follows:

  1. Each agent starts as active. Each object starts as active and pointing at the agent who owns it.

  2. Active agents point at their favorite active object.

  3. At least one cycle must exist, running from agent to object to agent, and so on. Deactivate the agents and objects in a cycle.

  4. Iterate steps 2 and 3 until all objects are deactivated. The final allocation gives each object to the agent pointing at it.

In the example, Antonius and Cato point at trumpeter, with Brutus pointing at standard-bearer. Deactivating the cycle Antonius → trumpeter → Brutus → standard-bearer → Antonius, only Cato is left active, so he points at his own job and the mechanism is done.
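The steps above can be sketched in code and run on the soldiers' example. This is a minimal implementation for the one-object-per-agent case:

```python
def top_trading_cycles(preferences, endowment):
    """preferences: agent -> objects ranked best-first; endowment: agent -> object."""
    owner = {obj: agent for agent, obj in endowment.items()}
    active = set(preferences)
    allocation = {}
    while active:
        # Step 2: each active agent points at their favorite active object.
        # An object is active exactly when its original owner is, since
        # cycles remove agents together with the objects they trade away.
        points_at = {a: next(o for o in preferences[a] if owner[o] in active)
                     for a in active}
        # Step 3: follow pointers agent -> object -> owner until an agent
        # repeats, which closes a cycle.
        a = next(iter(active))
        seen = []
        while a not in seen:
            seen.append(a)
            a = owner[points_at[a]]
        # Deactivate the cycle, giving each agent the object they point at.
        for agent in seen[seen.index(a):]:
            allocation[agent] = points_at[agent]
            active.discard(agent)
    return allocation

prefs = {
    "Antonius": ["trumpeter", "standard-bearer", "artilleryman"],
    "Brutus": ["standard-bearer", "artilleryman", "trumpeter"],
    "Cato": ["trumpeter", "standard-bearer", "artilleryman"],
}
owns = {"Antonius": "standard-bearer", "Brutus": "trumpeter", "Cato": "artilleryman"}
```

Running `top_trading_cycles(prefs, owns)` gives Antonius the trumpeter job, Brutus the standard-bearer job, and leaves Cato as artilleryman, matching the story above.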

As I've told the story, the soldiers already have control over a particular job. For instance, if Antonius liked being a standard-bearer best, he could guarantee he keeps that job. What would we do if the soldiers were new and didn't have pre-existing jobs? One option is to randomly assign tasks and run the algorithm from there. This is actually how New Orleans matches K-12 students with public schools as of 2012. Since the mechanism is strategyproof, parents don't have to worry about gaming the system when they rank their top choices from the 67 schools in the district.¹ Unlike many of the toy examples I've given, here is a real-world case of improvements made by mechanism design.


By restricting preferences to be single-peaked or indifferent over the allocations of others, we can find useful strategyproof mechanisms. In the next post, I'll continue exploring strategyproof mechanisms in the very fruitful special case where we can make transfers between agents, introducing the famed Vickrey-Clarke-Groves mechanisms.

The idea of single-peaked preferences on a line is quite old, dating back to Duncan Black in 1948. For a full characterization of strategyproof mechanisms on a line, see Moulin (1980). For a characterization of which single-peaked preference domains admit non-dictatorial, strategyproof mechanisms, see Nehring and Puppe (2007).

For an overview of the mechanism design aspects of school choice, see Abdulkadiroglu and Sonmez (2003).

Previously on: Strategyproof Mechanisms: Impossibilities

Next up: Mechanism Design with Money

  1. Though technically there might be a tiny reason to misrepresent preferences, since the ranking is truncated to the parents' top eight schools rather than all 67.