A Rationalist Argument for Voting

The argument that voting is irrational is commonplace. From your point of view as a single voter, the chance that your one vote will sway the election is typically minuscule, and the direct effect of your vote is null if it doesn’t sway the election. Thus, even if the expected value of your preferred candidate to you is hugely higher than that of the likely winner without your vote, when multiplied by the tiny chance your vote matters, the overall expected value isn’t enough to justify the small time and effort of voting. This problem at the heart of democracy has been noted by many, most prominently Condorcet, Hegel, and Downs.

There have been various counterarguments posed over the years:

  • Voters get some kind of intrinsic utility from the act of voting expressively.

  • Voters get some utility because changing the margin of the election affects how the resulting government behaves. (I personally find this argument highly implausible; the level of effect that would be necessary seems visibly lacking.)

  • If voters have a sufficiently altruistic utility function, the expected utility of a better government for all citizens could be sufficient to make voting worth it.

  • Voting itself is irrational, but having a policy of voting is rational.

    • This could be true if, for instance, paying attention to politics were intrinsically good for one’s mental health, and yet akrasia would prevent paying sufficient attention without a policy of voting. I suspect most readers here will decisively reject that idea.

    • This could also be true if there were some kind of iterated or outrospective prisoner’s dilemma [Ed: actually, more like stag hunt] involved, in which voting was cooperation and not-voting was betrayal.

Of all of the above, I find the last bullet most interesting, but I am not going to pursue it here. Instead, I’m going to propose a different rationale for voting, one that, as far as I know, is novel.

Participating in democratic elections is a group skill that requires practice. And it’s worth practicing this skill because there is an appreciable chance that a future election will have a significant impact on existential risk, and thus will have a utility differential so high as to make a lifetime of voting worth it.

Let’s build a toy model with the following variables:

  • c: the cost of voting, in utilons.

  • i: “importance”, the probability that your highest values (whether that is the survival of your ethnic group, the flourishing of humanity in general, maximizing pleasure for all sentient beings, or whatever) hang in the balance in any given future election. Call such elections “important”.

  • u: the utility differential, in utilons, at stake in important elections. To cancel out utilons, we can focus on the dimensionless quantity u/c.

  • l: the number of people “like you” in any given election.

  • t: the chance that a given person like you truly notices the election is important.

  • f < t: the chance of a false positive, i.e. “noticing” that an election is important when it is not.

  • s: the chance that a person like you will, if they vote, cast a correctly strategic ballot in an important election.

  • b < s: the chance that they will, if they vote, cast an anti-strategic (bad) ballot.

  • p: the marginal slope of the probability of a good outcome. The chance that m strategic ballots will have the power to swing the election is roughly pm over the plausible range of values of m.

Note that t, f, s, and b refer to individuals’ marginal chances, but independence is not assumed; outcomes can be correlated across voters. So the utility benefit per election per voter of the policy of “voting iff you notice that the election is important” is uit(s-b)p, while the cost is itc+(1-i)fc. The utility benefit per election of “always voting” is ui(s-b)p, while its cost is c. If u/c can take values above 1e11 and i is above 1e-4 (values I consider plausible), then for reasonable choices of the other variables “always voting” can be a rational policy.
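For concreteness, here is a minimal sketch of this arithmetic in Python. The function simply encodes the two benefit/cost formulas above; all of the inputs other than u/c = 1e11 and i = 1e-4 are hypothetical guesses of mine, not values argued for in the text.

```python
# A minimal sketch of the toy model above. Everything is measured in units of c
# (the cost of one vote), so benefits and costs are directly comparable.

def policy_value(u_over_c, i, t, f, s, b, p):
    """Return (benefit, cost) per election, in units of c, for both policies."""
    # Policy 1: "vote iff you notice that the election is important"
    benefit_noticing = u_over_c * i * t * (s - b) * p
    cost_noticing = i * t + (1 - i) * f

    # Policy 2: "always vote"
    benefit_always = u_over_c * i * (s - b) * p
    cost_always = 1.0

    return (benefit_noticing, cost_noticing), (benefit_always, cost_always)

# u/c and i are the values discussed above; t, f, s, b, and p are illustrative guesses.
noticing, always = policy_value(u_over_c=1e11, i=1e-4, t=0.5, f=0.05, s=0.6, b=0.2, p=1e-6)
print("vote iff noticed important: benefit %.2f, cost %.4f" % noticing)
print("always vote:                benefit %.2f, cost %.4f" % always)
```

With these particular guesses both policies come out well ahead of their costs; different guesses for s, b, and p can of course flip that conclusion.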

This model is weak in several ways. For one, the chances of swinging an election with strategic votes are not linear in the number of votes involved; the relationship is probably more like a logistic CDF, and l could easily be large enough that the derivative isn’t approximately constant. For another, adopting a policy of voting probably has side effects: it probably increases t, possibly decreases f, and may increase one’s ability to sway the votes of other voters who do not count towards l. All of these structural weaknesses would tend to lead the model to underestimate the rationality of voting. (Of course, numerical issues could lead to bias in either direction; I’m sure some people will find values of i>1e-4 to be absurdly high.)
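To illustrate the first weakness, here is a small sketch (with entirely made-up midpoint and scale parameters) comparing the constant-slope approximation pm against a logistic CDF for the probability of a good outcome as a function of m; the two agree only while m stays in the near-linear region around zero.

```python
import math

MIDPOINT = 50_000  # made-up: number of strategic ballots at which a good outcome is 50% likely
SCALE = 20_000     # made-up: width of the transition region

def p_good_outcome(m):
    """Probability of a good outcome given m strategic ballots, modeled as a logistic CDF in m."""
    return 1.0 / (1.0 + math.exp(-(m - MIDPOINT) / SCALE))

# The toy model treats the gain from m ballots as roughly p*m, with p the local slope
# of the curve near m = 0 (for a logistic CDF, that slope is F(0) * (1 - F(0)) / SCALE).
p = p_good_outcome(0) * (1.0 - p_good_outcome(0)) / SCALE

for m in (1_000, 10_000, 50_000, 200_000):
    gain_logistic = p_good_outcome(m) - p_good_outcome(0)
    gain_linear = p * m
    print(f"m = {m:>7}: logistic gain {gain_logistic:.4f}, linear approximation {gain_linear:.4f}")
```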

Yet even this simple model can show that voting has positive expected value. And it applies whether the existential threat at issue is a genocidal regime such as has occurred in the past, or a novel threat such as powerful, misaligned AI.

Is this a novel argument? Somewhat, but not entirely. The extreme utility differential for existential risk is probably to some degree altruistic. That is, it’s reasonable to exert substantially more effort to avert a low-risk possibility that would destroy everything you care about than you would if it would only kill you personally; and this implies that you care about things beyond your own life. Yet this is not the everyday altruism of transient welfare improvements, and thus it is harder to undermine with arguments from revealed preference.