Strategyproof Mechanisms: Impossibilities

In which the limits of dominant-strategy implementation are explored. The Gibbard-Satterthwaite dictatorship theorem for unrestricted preference domains is stated, showing that no universal strategyproof mechanisms exist, along with a proof for a special case.

Due to the Revelation Principle, most design questions can be answered by studying incentive-compatible mechanisms, as discussed in the last post. Incentive compatibility comes in many different flavors corresponding to different solution concepts, with dominant-strategy IC and Bayes-Nash IC being the two most common. In this post, I’ll delve into what’s possible under dominant-strategy incentive compatibility.

Recall that a strategy is dominant if playing it always leads to (weakly) higher payoffs for an agent than any other strategy would, no matter what strategies the other agents play. A social choice function is dominant-strategy incentive compatible if honest revelation is a dominant strategy in the direct mechanism for that SCF¹. The appeal of implementation in dominant strategies is that an agent doesn’t need to think about what other agents will do. Even in the worst case, a dominant-strategy IC social choice function leaves an agent no worse off for being honest. Since the need for strategic thinking is eliminated, dominant-strategy IC is also referred to as strategyproofness.

Gibbard-Satterthwaite: universal strategyproof mechanisms are out of reach

Arrow’s theorem is well-known for showing that dictatorships are the only aggregators of ordinal rankings satisfying a particular set of criteria. The result is commonly interpreted as saying there is no perfect voting system for more than two candidates. However, since Arrow deals with social welfare functions, which take a profile of preferences as input and output a full preference ranking, it really says something about aggregating a set of preferences into a single group preference. Most of the time though, a full ranking of candidates will be superfluous: all we really need to know is who wins the election. Although Arrow doesn’t give social welfare functions much room to maneuver, maybe there are still some nice social choice functions out there.

Alas, it’s not to be. Alan Gibbard and Mark Satterthwaite have shown that, in general, the only strategyproof rule for choosing among three or more alternatives that is even slightly responsive to preferences is a dictatorship by one agent.

Stated formally:

Gibbard-Satterthwaite dictatorship theorem: Suppose f: L(A)ⁿ → A is a social choice function for n agents from profiles of ordinal rankings L(A)ⁿ to a set of outcomes A with at least three elements. If f is strategyproof and onto², then f is dictatorial.

Being strategyproof seems intuitively more desirable to me than the properties in Arrow’s theorem (especially the much-critiqued independence of irrelevant alternatives criterion). As it turns out though, Gibbard-Satterthwaite and Arrow are equivalent! Weakening our ambition from social welfare functions to social choice functions didn’t give us anything.

Gibbard-Satterthwaite seems a great blow to mechanism design at first glance. On reflection, perhaps we were asking for too much. A completely generic strategyproof mechanism that can be used in any situation does sound too good to be true.

There are three ways to proceed from this point:

  1. Building strategyproof mechanisms for specialized applications,

  2. Weakening strategyproofness to another form of incentive compatibility, or

  3. Stepping sideways from ordinal preferences and enriching the model while adding further assumptions.

On that last path, note that switching from ordinal to cardinal preferences can’t save us unless we offset that generalization with other assumptions: cardinal information expands the ordinal type space, which can only make implementation more difficult.

Since dictatorships are the only universal strategyproof mechanisms for choosing among three or more options, it’s worth investigating why life is pleasant with only two options. In this case, lots of non-trivial strategyproof mechanisms exist. For example, suppose a committee is restricted to painting a bike shed either red or blue due to zoning restrictions. The social choice function that paints the shed red if and only if a majority have red as their first choice is strategyproof, as is the social choice function that paints the shed red if and only if the committee unanimously lists red as their top choice.
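
To see this concretely, here’s a minimal brute-force check in Python (a sketch of my own; the function names are made up). With only two alternatives, a ballot reduces to a single top choice, and strategyproofness says no member can ever flip the outcome to their true favorite by lying:

```python
from itertools import product

def majority(profile):
    """Red iff a strict majority of the committee top-ranks red."""
    return "red" if 2 * sum(t == "red" for t in profile) > len(profile) else "blue"

def unanimity(profile):
    """Red iff every committee member top-ranks red."""
    return "red" if all(t == "red" for t in profile) else "blue"

def is_strategyproof(rule, n):
    """Check every profile and every unilateral lie by one member."""
    for profile in product(["red", "blue"], repeat=n):
        for i, honest in enumerate(profile):
            lie = "blue" if honest == "red" else "red"
            deviation = profile[:i] + (lie,) + profile[i + 1:]
            # A lie pays off only if it moves the outcome to i's true favorite.
            if rule(deviation) == honest != rule(profile):
                return False
    return True

print(is_strategyproof(majority, 5))   # True
print(is_strategyproof(unanimity, 5))  # True
```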

Of course, not every voting rule for binary outcomes will be strategyproof, like the rule that paints the shed red if and only if an odd number of committee members top-rank red. While I can’t imagine anyone arguing in favor of this rule, what’s going wrong here? The problem is that this odd rule isn’t monotonic: raising red in someone’s preference ranking can cause the outcome to switch away from red.
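
Reusing the is_strategyproof checker from the sketch above, the parity rule fails, and it’s easy to exhibit a concrete manipulation:

```python
def parity(profile):
    """Red iff an odd number of committee members top-rank red."""
    return "red" if sum(t == "red" for t in profile) % 2 == 1 else "blue"

print(is_strategyproof(parity, 3))      # False

# One concrete failure: with true favorites (red, red, blue) the shed is
# painted blue, but the first member can report "blue" and flip it to red.
print(parity(("red", "red", "blue")))   # blue
print(parity(("blue", "red", "blue")))  # red
```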

Here are the formal definitions of strategyproofness and monotonicity:

A social choice function f: L(A)ⁿ → A is strategyproof if for each agent i, the social choice satisfies f(≻ᵢ, ≻₋ᵢ) ≽ᵢ f(≻ᵢ′, ≻₋ᵢ) for all rankings ≻ᵢ, ≻ᵢ′ ∈ L(A) and ≻₋ᵢ ∈ L(A)ⁿ⁻¹, where x ≽ᵢ y means x = y or x ≻ᵢ y.

A social choice function f: L(A)ⁿ → A is monotonic if for each agent i, we have that f(≻ᵢ, ≻₋ᵢ) = a ≠ b = f(≻ᵢ′, ≻₋ᵢ) implies a ≻ᵢ b and b ≻ᵢ′ a for all rankings ≻ᵢ, ≻ᵢ′ ∈ L(A) and ≻₋ᵢ ∈ L(A)ⁿ⁻¹.

Take a moment to convince yourself that monotonicity is just a slight rephrasing of strategyproofness and hence equivalent. Though they are identical at heart, monotonicity carries two useful interpretations as an invariance property. First, if an agent submits a new ranking where the previous outcome goes up, the outcome can’t change. Second, submitting a new ranking where the rank of the previous outcome relative to any other option is unchanged has to leave the outcome unchanged as well.
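
If the pen-and-paper argument feels slippery, here’s a brute-force sanity check for the two-agent, three-alternative case (my own sketch, with both definitions coded directly from the statements above). Since the equivalence holds function by function, the two checks return the same verdict on every social choice function we draw at random:

```python
from itertools import permutations, product
import random

A = ("r", "b", "y")
RANKINGS = list(permutations(A))              # the six strict rankings of A
PROFILES = list(product(RANKINGS, repeat=2))  # the 36 two-agent profiles

def prefers(ranking, x, y):
    """True iff the ranking places x above y."""
    return ranking.index(x) < ranking.index(y)

def unilateral(profile, i):
    """All profiles reachable when agent i alone changes their report."""
    return [profile[:i] + (r,) + profile[i + 1:] for r in RANKINGS]

def strategyproof(f):
    return all(f[p] == f[q] or prefers(p[i], f[p], f[q])
               for p in PROFILES for i in (0, 1) for q in unilateral(p, i))

def monotonic(f):
    return all(f[p] == f[q] or (prefers(p[i], f[p], f[q]) and
                                prefers(q[i], f[q], f[p]))
               for p in PROFILES for i in (0, 1) for q in unilateral(p, i))

random.seed(0)
for _ in range(2000):
    f = {p: random.choice(A) for p in PROFILES}
    assert strategyproof(f) == monotonic(f)
print("verdicts agreed on 2000 random rules")
```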

When there are only two alternatives, we can think of the implicit outcome space as one-dimensional, with one outcome on the left and the other on the right. Going towards one outcome corresponds exactly with going away from the other. With three or more alternatives, we don’t have the same nice structure, leading to the impossibility of a non-trivial monotonic rule.

In the next post, I’ll describe domains where we can enrich the outcome space beyond a binary choice and still get strategyproofness. Since the outcome space won’t naturally have a nice order structure, we’ll have to ensure it does by restricting the preferences agents can have over it. Even though we don’t have universal strategyproof mechanisms other than dictatorships, we can uncover strategyproof mechanisms for specific applications. In the meantime, here’s a proof of Gibbard-Satterthwaite for the curious.

Proof of Gibbard-Satterthwaite (in a special case)

Suppose the committee consists of two people, Betty and Bill. In addition to red and blue, the city recently approved yellow as an option for painting bike sheds. Each person has six possible rankings over the three colors, so there are 36 preference profiles containing one ranking from each. The 36 profiles fall into six relevant classes. Here is an example of each class along with some shorthand to describe it:

  1. r > b > y: Both agents agree on the ranking of red over blue over yellow.

  2. r > b^y: Both agents agree red is the best. Betty puts blue as her second choice, while Bill has yellow as his second choice.

  3. b^y > r: Both agree red is worst. Betty thinks blue is better than yellow, while Bill thinks yellow is better.

  4. r ∣ (b > y): Both agents agree blue is better than yellow. Betty thinks red is better than both of them, while Bill thinks red is worse than both.

  5. (b > y) ∣ r: Both agree blue is better than yellow. Betty thinks red is worst, while Bill thinks red is best.

  6. r^b^y: Betty has the ranking red over blue over yellow, while Bill has the reverse.

For the notation summarizing the profile, r > b indicates both agree that red is better than blue. r^b says there is a conflict between the two, with Betty preferring red. r ∣ (b > y) says there are two preference conflicts, with Betty preferring red over the other two options, so we can alternatively think of this profile as r^y, r^b, and b > y.
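
As a sanity check on this bookkeeping, a few lines of Python (my own tally, not part of the argument) confirm the class sizes: 6 profiles with no conflicts (class 1), 12 with one conflict (classes 2 and 3), 12 with two conflicts (classes 4 and 5), and 6 with three (class 6):

```python
from collections import Counter
from itertools import combinations, permutations, product

A = ("r", "b", "y")
RANKINGS = list(permutations(A))

def num_conflicts(betty, bill):
    """Count the color pairs the two rankings order differently."""
    return sum((betty.index(x) < betty.index(y)) !=
               (bill.index(x) < bill.index(y))
               for x, y in combinations(A, 2))

tally = Counter(num_conflicts(b1, b2)
                for b1, b2 in product(RANKINGS, repeat=2))
print(dict(sorted(tally.items())))  # {0: 6, 1: 12, 2: 12, 3: 6}
```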

Now, we’ll assign each of the 36 profiles a color, using each color at least once, in a way that is monotonic. This will happen in five steps, as depicted in the following diagram.

  1. At least one profile must be red by assumption. Starting from that profile (whatever it is), we could move red to the top of both rankings and the outcome would still be red by monotonicity. Then all other profiles with red top-ranked by both must also be red, since any swaps of blue and yellow can’t change the outcome while red stays relatively above both. This gives us 1a,b,c,d. Blue and yellow go through similarly.

  2. Consider the profile b^y > r at the bottom of the diagram. The profile y > b > r got yellow in 1f, so if Betty starts liking blue better from 2, the outcome has to stay yellow or switch to blue. Since monotonicity can’t tell us more than that, we have to make a choice. Let’s decide in favor of Betty and pick blue.

  3. Since we chose to resolve the conflict b^y as b, three more colorings follow, since we must resolve this particular conflict in the same way everywhere. Keep in mind that b^y is a different conflict than y^b, since it might matter who prefers which color. Consider 3b. This can’t be yellow since yellow rises between this profile and 2, but 2 isn’t yellow. It also can’t be red since 1k isn’t red, so we conclude 3b must be blue. Now consider 3a. Even though the conflict b^y resolves in favor of blue, the outcome can’t be blue since 1c isn’t blue. Hence 3a must be red. From 3a, we conclude that r^y is resolved as r, so this new rule must apply everywhere. From 3c, we get a third rule: b^r = b.

  4. With the two new rules from the third step in addition to the rule from the second, more colorings follow. These colorings then give us the rules r^b = r, y^b = y, and y^r = y.

  5. With all possible pairwise resolutions settled, all profiles can be colored.

We’ve found a monotonic, onto coloring! Notice that this is a dictatorship by Betty, choosing her top-ranked color for each profile. Everything comes down to favoring Betty over Bill in step two. Since Betty was pivotal once, she ends up having complete control. Of course, we could have resolved b^y as y instead, which would have given us a dictatorship by Bill. That choice in step two was the only degree of latitude we had, so these are the only two monotonic, onto colorings.
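
For the skeptical, here’s a short self-contained script (my own sketch) that checks the punchline in this two-agent, three-color setting: Betty’s dictatorship is onto and strategyproof, while a natural-looking alternative, Borda scoring with a fixed tie-break (standing in here for any “reasonable” non-dictatorial rule), is onto but manipulable, just as the theorem demands:

```python
from itertools import permutations, product

A = ("r", "b", "y")
RANKINGS = list(permutations(A))
PROFILES = list(product(RANKINGS, repeat=2))

def strategyproof(f):
    for p in PROFILES:
        for i in (0, 1):
            for r in RANKINGS:
                q = p[:i] + (r,) + p[i + 1:]
                # Agent i gains by deviating iff f[q] sits above f[p]
                # in i's true ranking p[i].
                if f[p] != f[q] and p[i].index(f[p]) > p[i].index(f[q]):
                    return False
    return True

def onto(f):
    return set(f.values()) == set(A)

# Betty's dictatorship: her top-ranked color always wins.
f_betty = {p: p[0][0] for p in PROFILES}

# Borda scoring with an alphabetical tie-break (lower total rank wins).
def borda(p):
    score = {x: sum(r.index(x) for r in p) for x in A}
    return min(A, key=lambda x: (score[x], x))

f_borda = {p: borda(p) for p in PROFILES}

print(onto(f_betty), strategyproof(f_betty))  # True True
print(onto(f_borda), strategyproof(f_borda))  # True False
```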

This special-case proof of Gibbard-Satterthwaite was inspired by Hansen (2002), “Another graphical proof of Arrow’s impossibility theorem”. Full proofs of the theorems, done simultaneously side-by-side, are given in Reny (2001), “Arrow’s theorem and the Gibbard-Satterthwaite theorem: a unified approach”.

Previously on: Incentive-Compatibility and the Revelation Principle

Next up: Strategyproof Mechanisms: Possibilities


  1. Recall that the direct mechanism for an SCF is the one where we simply ask the agents what type they are and then assign the outcome prescribed by the SCF for that type profile.

  2. The function f is onto if every outcome in A occurs for at least one input. The main role of this assumption is to prevent f from covertly restricting its image to two elements, so it’s almost without loss of generality.