How to do cost-effectiveness analysis for elections
Some professionals get this wrong.
You should use three parameters:
A. Goodness if your preferred candidate wins rather than loses
B. Probability that one vote for your candidate would flip the election
C. Cost per vote
Then cost-effectiveness is A*B/C.
I’m only really going to discuss B in this post. For B, you should come up with a probability distribution for the vote margin. In general you should use a normal distribution, with parameters depending on the election. Let μ be the mean and σ the standard deviation of vote margin as a fraction of the total (for example, an election you win 60-40 has margin +20%). Then the density of the normal distribution at zero is e^(-1/2 * (μ/σ)^2) / (σ*√(2π)). And so B is that divided by the number of voters, N. You can also use this Google Sheet formula.
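The formula above can be sketched in Python (the function and variable names are mine, and the example numbers are made up):

```python
import math

def vote_flip_probability(mu, sigma, n_voters):
    """Probability that one additional vote for your candidate flips the
    election: the normal density of the vote-margin distribution at zero,
    divided by the number of voters N."""
    density_at_zero = math.exp(-0.5 * (mu / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return density_at_zero / n_voters

# Example: a tossup statewide race, sigma = 7%, 3 million voters (made-up numbers)
p = vote_flip_probability(mu=0.0, sigma=0.07, n_voters=3_000_000)
```

This is the same computation the Google Sheet formula performs; only μ, σ, and N are inputs.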
If you have a good understanding of US (or wherever) politics, the particular election, and math, you can choose μ and σ well. And N is easy.
You can stop reading now; the rest is minor.
Normal distribution resources: calculator and graph.
You can use 1% vote margin rather than 1 vote; you just have to do so for both B and C.
For B, here’s a simple heuristic for partisan general elections: assume σ is 7%. Then if the election is a tossup (μ ≈ 0%, P(win) ≈ 50%), B is 5.7/N. If one candidate is favored (μ ≈ ±6%, P(win) ≈ 20% or 80%), B is 4/N. If one candidate is strongly favored (μ ≈ ±12%, P(win) ≈ 5% or 95%), B is 1.3/N. But really σ depends on the election; it can range from about 3.5% (e.g. two weeks before a presidential election) to about 10% (far out from an off-cycle state-level election with a high-variance candidate).
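A quick script to reproduce these figures from the density formula above (note the 4/N entry is ≈3.9/N before rounding):

```python
import math

def density_at_zero(mu, sigma):
    # This equals B * N: the normal density of the margin distribution at zero
    return math.exp(-0.5 * (mu / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_win(mu, sigma):
    # P(margin > 0) under Normal(mu, sigma), via the error function
    return 0.5 * (1 + math.erf(mu / (sigma * math.sqrt(2))))

for mu in (0.00, 0.06, 0.12):
    print(f"mu = {mu:.0%}: P(win) = {p_win(mu, 0.07):.2f}, B = {density_at_zero(mu, 0.07):.1f}/N")
```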
For B, one common flawed approach is to assume this election will be about as close as similar elections in the past. That generally leads to bad inferences. Past election results can inform your normal distribution, but you basically have to make a distribution. (I’m not justifying this view here, but I feel confident.)
C is often tricky; it depends on the intervention (and the election). Note that online sources and chatbots are often wrong about cost per vote.
If you’re determining C by averaging over a distribution, you have to take the harmonic mean rather than the arithmetic mean. Or: you have to think in terms of votes per cost, not cost per vote.
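A minimal illustration with made-up costs:

```python
# Hypothetical: a tactic costs $5/vote in half of scenarios and $50/vote in
# the other half, equally likely. Numbers are invented for illustration.
costs = [5.0, 50.0]

arithmetic_mean = sum(costs) / len(costs)                 # $27.50 -- overstates the effective cost
harmonic_mean = len(costs) / sum(1.0 / c for c in costs)  # ~$9.09 -- the correct effective cost

# Equivalent view: average votes per dollar, then invert.
votes_per_dollar = sum(1.0 / c for c in costs) / len(costs)
effective_cost = 1.0 / votes_per_dollar
```

The harmonic mean comes out lower because the cheap scenario buys many more votes per dollar, and votes (not dollars) are what you're averaging over.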
Some kinds of elections are more complicated. If your goal is a majority in the House, what matters is winning in worlds where the House is close, so you should multiply the probability that the House is close, goodness in worlds where the House is close, and the probability that one vote for your candidate would flip the election in worlds where the House is close (for any consistent operationalization of “the House is close”). If your goal is flipping the presidency, you need to think about the Electoral College; one good approach is to multiply the probability of flipping a state by the probability that that state is the tipping point. For elections with more than two strong candidates, vote margin isn’t normally distributed, so you need a different approach for B.
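Here’s a toy sketch of the Electoral College decomposition; every state entry and number below is hypothetical:

```python
# Hypothetical inputs: for each state, the probability that one vote (or one
# marginal unit of effort) flips that state, and the probability that the
# state is the Electoral College tipping point.
states = {
    "PA": {"p_flip_state": 1.0e-7, "p_tipping_point": 0.30},
    "WI": {"p_flip_state": 2.0e-7, "p_tipping_point": 0.10},
}

# P(flip the presidency via this state) ~= P(flip state) * P(state is tipping point)
p_flip_presidency = {
    name: s["p_flip_state"] * s["p_tipping_point"] for name, s in states.items()
}
```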
Most interventions are marginal: the number of voters they affect is a tiny fraction of the total. Other interventions are not; for example, nominating a stronger candidate can increase vote margin by several percentage points. This matters because for marginal interventions you can just consider the probability that each vote for your candidate flips the election, but for non-marginal interventions that probability changes as you add votes. Instead you have to consider the probability that your candidate wins before and after the intervention (generally by inferring this from probability distributions for vote margin, before and after), then take the difference.
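A sketch of that before/after comparison, with assumed numbers (the shift sizes and the fixed σ are illustrative, not a general rule):

```python
import math

def p_win(mu, sigma):
    # P(margin > 0) under Normal(mu, sigma)
    return 0.5 * (1 + math.erf(mu / (sigma * math.sqrt(2))))

# Hypothetical non-marginal intervention: a stronger nominee shifts the
# expected margin from -2% to +1%, with sigma = 7% in both worlds.
p_before = p_win(-0.02, 0.07)
p_after = p_win(+0.01, 0.07)
effect = p_after - p_before   # change in win probability from the intervention
```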
The fact that votes can tie doesn’t matter. One way to think about this is to think in units of 1000 votes rather than 1 vote. Another way is to suppose ties will be broken in one direction or the other.
In the last 15 years, election work has become more effectiveness-focused. We’re now most of the way through the moneyball transition. Election efforts now use data-based targeting, use RCTs, and try to minimize “cost per net vote.” But many professionals still only care about numbers in certain contexts. For one, it’s unusual to use numbers to prioritize between different elections, even though elections differ dramatically in (1) importance and (2) probability that one vote will flip them.
This post is part of my sequence inspired by my prioritization research and donation advising work.
For people following my daily posting: I have two more posts in me that I really want to write decently well: donations are super important and the US government is super important. The latter is particularly tricky; for one thing, most people say they already agree, yet there are confusing inconsistencies in people’s current attitudes that I want to untangle. Anyway, today’s post and several future posts will be shorter and less important, to give me more time to write those two.
I think there are a few implications in how you compute the mathematics here which suggest the practice itself works against the implied goal of democracy. I don’t think this is a good or legitimate practice in principle, and political donations are heavily regulated in my country of origin.
Having “goodness” of a particular electoral result priced into a particular outcome, with no variable that updates or adjusts based on information about others’ preferences, implies a kind of epistemic superiority over what outcomes are best—an authority that stands outside, and potentially above, the collective decision procedure democracy is meant to embody.
But the core principle of democracy is not simply that good outcomes occur. It is that outcomes are generated through a process in which individuals’ preferences are given equal standing, regardless of wealth, influence, or strategic sophistication. In that sense, democratic legitimacy depends not only on results, but on the fairness of influence over those results.
Are your donation preferences being deployed to prevent capital intervention, and to push institutions toward structures that limit monetary intervention by privileged groups, or not? How do you reason that participating in buying elections can be considered good, rather than a form of self-exemption from an established decision policy via consequentialism?
From the outside, it looks like the entire process where this is employed has led to extreme policy skew toward privileged groups, over and above what democratic governance was intended to produce. The reason for this is institutional capture, via drift from self-exemption by bad-faith actors.
Let me elaborate: I broadly agree with the framing here, in that the probability of flipping a vote is going to be related to the margin of the race; in a race decided by a couple hundred votes, a single vote-flip counts for 0.5%, far more than it does in a national election. And if you’re voting in an election where one candidate has a 6% edge, your vote has roughly a 1 in 12 chance of changing the outcome! That’s massive leverage that you can’t hope to replicate in larger elections.
The value (which I believe maps to “goodness”) of that vote flip is going to be related to:
- the budget over which the politician has leverage
- what fraction of that budget spend affects you
- their probability of listening to what you have to say
While the budget is smaller in absolute terms, in terms of how it affects you it basically remains constant with election scale: the national budget is larger but spread over 300M people, while a local election has a smaller budget spread over a smaller population, so the per-person impact is about the same.
Moreover, precisely because local politicians know that every vote counts, they’re much more responsive to constituents than state or national politicians.
Given that A & B are much larger in local elections, I think there’s a lot of value there. The notable exception is if the policy is made at a higher level of jurisdiction.
This is mostly false! You have to think about σ. If two candidates are tied ex ante, that doesn’t mean your vote is infinitely powerful. The crucial question is probability that your vote will flip the election. And on your particular example, maybe you’d have a ~1/12 chance of flipping the election if your candidate’s vote margin was a random number between −0 and −12, but “your candidate is expected to lose by 12 votes” is lower tractability than that because there’s a 50% chance your candidate will lose by more than 12 votes, and there’s some chance that they’ll win without you, and the crucial −0 scenario is less likely than the −12 scenario.
No, people are affected more by the federal government than their local government. The federal government matters more than all local governments combined. But federal vs local government is not relevant to this post so I don’t want to get into it.
So I did make a math mistake, but I think we’re in broad agreement. Let me be explicit for a race with total expected votes N = 400 (e.g. a seat on a city council for one district of a small town).
With N = 400:
- sigma = sqrt(400 × 0.5 × 0.5) = 10
- a 6-point lead means expected votes of A: 212, B: 188, corresponding to a win probability for A of cdf(12/10) ≈ 88%
- changing one’s vote from A to B changes the expected counts to A: 211, B: 189, corresponding to a win probability for A of cdf(11/10) ≈ 86%
So yes, it’s only a 2% change vs my earlier assertion of 8%, my mistake.
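For anyone who wants to check the arithmetic, here’s the same computation in Python (using the normal approximation to the binomial, as in the comment above):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

n = 400
sigma = math.sqrt(n * 0.5 * 0.5)        # 10 votes

p_before = normal_cdf(12 / sigma)       # A: 212 vs B: 188 -> ~0.88
p_after = normal_cdf(11 / sigma)        # A: 211 vs B: 189 -> ~0.86
change = p_before - p_after             # ~0.02
```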
But I think we agree that sigma matters! And my point is that in small local elections, sigma is small, and your vote counts for a lot!
I agree that if you only care about federal policy, this doesn’t apply (I’d missed that in the initial post). But if you care about libraries, or how aggressive the police are, those are local issues where someone can have a strong influence on policy.