A voting theory primer for rationalists

What is voting theory?

Voting theory, also called social choice theory, is the study of the design and evaluation of democratic voting methods (that’s the activists’ word; game theorists call them “voting mechanisms”, engineers call them “electoral algorithms”, and political scientists say “electoral formulas”). In other words, for a given list of candidates and voters, a voting method specifies a set of valid ways to fill out a ballot, and, given a valid ballot from each voter, produces an outcome.

(An “electoral system” includes a voting method, but also other implementation details, such as how the candidates and voters are validated, how often elections happen and for what offices, etc. “Voting system” is an ambiguous term that can refer to a full electoral system, just to the voting method, or even to the machinery for counting votes.)

Most voting theory limits itself to studying “democratic” voting methods. That typically has both empirical and normative implications. Empirically, “democratic” means:

  • There are many voters

  • There can be more than two candidates

In order to be considered “democratic”, voting methods generally should meet various normative criteria as well. There are many possible such criteria, and on many of them theorists do not agree; but in general they do agree on this minimal set:

  • Anonymity: permuting the ballots does not change the probability of any election outcome.

  • Neutrality: permuting the candidates on all ballots does not change the probability of any election outcome.

  • Unanimity: If voters universally vote a preference for a given outcome over all others, that outcome is selected. (This is a weak criterion, and is implied by many other stronger ones; but those stronger ones are often disputed, while this one rarely is.)

  • Methods typically do not directly involve money changing hands or other enduring state-changes for individual voters. (There can be exceptions to this, but there are good reasons to want to understand “moneyless” elections.)

Why is voting theory important for rationalists?

First off, because democratic processes in the real world are important loci of power. That means that it’s useful to understand the dynamics of the voting methods used in such real-world elections.

Second, because these real-world democratic processes have all been created and/​or evolved in the past, and so there are likely to be opportunities to replace, reform, or add to them in the future. If you want to make political change of any kind over a medium-to-long time horizon, these systemic reforms should probably be part of your agenda. The fact is that FPTP, the voting method we use in most of the English-speaking world, is absolutely horrible, and there is reason to believe that reforming it would substantially (though not of course completely) alleviate much political dysfunction and suffering.

Third, because understanding social choice theory helps clarify ideas about how it’s possible and/​or desirable to resolve value disputes between multiple agents. For instance, if you believe that superintelligences should perform a “values handshake” when meeting, replacing each of their individual value functions by some common one so as to avoid the dead weight loss of a conflict, then social choice theory suggests both questions and answers about what that might look like. (Note that the ethical and practical importance of such considerations is not at all limited to “post-singularity” examples like that one.)

In fact, on that third point: my own ideas of ethics and of fun theory are deeply informed by my decades of interest in voting theory. To simplify into a few words my complex thoughts on this, I believe that voting theory elucidates “ethical incompleteness” (that is, that it’s possible to put world-states into ethical preference order partially but not fully) and that this incompleteness is a good thing because it leaves room for fun even in an ethically unsurpassed world.

What are the branches of voting theory?

Generally, you can divide voting methods up into “single-winner” and “multi-winner”. Single-winner methods are useful for electing offices like president, governor, and mayor. Multi-winner methods are useful for dividing up some finite, but to some extent divisible, resource, such as voting power in a legislature, between various options. Multi-winner methods can be further subdivided into seat-based (where a set of similar “seats” are assigned one winner each) or weighted (where each candidate can be given a different fraction of the voting power).

What are the basics of single-winner voting theory?

(Note: Some readers may wish to skip to the summary below, or to read the later section on multi-winner theory and proportional representation first. Either is valid.)

Some of the earliest known work in voting theory was by Ramon Llull before his death in 1315, but most of that was lost until recently. Perhaps a better place to start would be in the French Academy in the late 1700s; this allows us to couch it as a debate (American Chopper meme?) between Jean-Charles de Borda and Nicolas de Condorcet.

Condorcet: “Plurality (or ‘FPTP’, for First Past the Post) elections, where each voter votes for just one candidate and the candidate with the most votes wins, are often spoiled by vote-splitting.”
Borda: “Better to have voters rank candidates, give candidates points for favorable rankings, and choose a winner based on points.” (Borda Count)
Condorcet: “Ranking candidates, rather than voting for just one, is good. But your point system is subject to strategy. Everyone will rate some candidate they believe can’t win in second place, to avoid giving points to a serious rival to their favorite. So somebody could win precisely because nobody takes them seriously!”
Borda: “My method is made for honest men!”
Condorcet: “Instead, you should use the rankings to see who would have a majority in every possible pairwise contest. If somebody wins all such contests, obviously they should be the overall winner.”

In my view, Borda was the clear loser there, and most voting theorists today agree with me. The main exception is the mathematician Donald Saari, enamored with the mathematical symmetry of the Borda count. This is worth mentioning mostly because his last name is a great source of puns.

But Condorcet soon realized there was a problem with his proposal too: it’s possible for A to beat B pairwise, and B to beat C, while C still beats A. That is, pairwise victories can be cyclical rather than transitive. Such cycles are rare in naturally-occurring elections; but if there’s a decision between A and B, the voters who favor B might have the power to artificially create a “poison pill” amendment C which can beat A and then lose to B.

How would a Condorcet cycle occur? Imagine the following election:

1: A>B>C

1: B>C>A

1: C>A>B

(This notation means that there’s 1 voter of each of three types, and that the first voter prefers A over B over C.) In this election, A beats B by 2 to 1, and similarly B beats C and C beats A.
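For the code-inclined, here’s a minimal sketch in Python of tallying those pairwise contests; the three ballots above are hard-coded, and everything else is just counting:

```python
# A minimal sketch: tally every pairwise contest in the cycle example above.
from itertools import permutations

ballots = [["A", "B", "C"],  # 1 voter: A>B>C
           ["B", "C", "A"],  # 1 voter: B>C>A
           ["C", "A", "B"]]  # 1 voter: C>A>B

for x, y in permutations("ABC", 2):
    # x wins the pairwise contest against y if a majority rank x above y.
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    if wins > len(ballots) / 2:
        print(f"{x} beats {y}, {wins} to {len(ballots) - wins}")
# Prints: A beats B, B beats C, C beats A -- a cycle, so no Condorcet winner.
```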

Fast-forward to 1950, when theorists at the RAND corporation were inventing game theory in order to reason about the possibility of nuclear war. One such scientist, Kenneth Arrow, proved that the problem that Condorcet (and Llull) had seen was in fact a fundamental issue with any ranked voting method. He posed 3 basic “fairness criteria” and showed that no ranked method can meet all of them:

  • Ranked unanimity: if every voter prefers X to Y, then the outcome has X above Y.

  • Independence of irrelevant alternatives: If every voter’s preferences between some subset of candidates remain the same, the order of those candidates in the outcome will remain the same, even if other candidates outside the set are added, dropped, or changed.

  • Non-dictatorial: the outcome depends on more than one ballot.

Arrow’s result was important in and of itself; intuitively, most people might have guessed that a ranked voting method could be fair in all those ways. But even more important than the specific result was the idea of an impossibility proof for voting.

Using this idea, it wasn’t long until Gibbard and Satterthwaite independently came up with a follow-up theorem, showing that no voting system (ranked or otherwise) could possibly avoid creating strategic incentives for some voters in some situations. That is to say, there is no non-dictatorial voting system for more than two possible outcomes and more than two voters in which every voter has a single “honest” ballot that depends only on their own feelings about the candidates, such that they can’t sometimes get a better result by casting a ballot that isn’t their “honest” one.

There’s another way that Arrow’s theorem was an important foundation, particularly for rationalists. He was explicitly thinking about voting methods not just as real-world ways of electing politicians, but as theoretical possibilities for reconciling values. In this more philosophical sense, Arrow’s theorem says something depressing about morality: if morality is to be based on (potentially revealed) preferences rather than interpersonal comparison of (subjective) utilities, it cannot simply be a democratic matter; “the greatest good for the greatest number” doesn’t work without inherently-subjective comparisons of goodness. Amartya Sen continued exploring the philosophical implications of voting theory, showing that the idea of “private autonomy” is incompatible with Pareto efficiency.

Now, in discussing Arrow’s theorem, I’ve said several times that it only applies to “ranked” voting systems. What does that mean? “Ranked” (also sometimes termed “ordinal” or “preferential”) systems are those where valid ballots consist of nothing besides a transitive preferential ordering of the candidates (partial or complete). That is, you can say that you prefer A over B or B over A (or in some cases, that you like both of them equally), but you cannot say how strong each preference is, or provide other information that’s used to choose a winner. In Arrow’s view, the voting method is then responsible for ordering the candidates, picking not just a winner but a second place etc. Since neutrality wasn’t one of Arrow’s criteria, ties can be broken arbitrarily.

This excludes an important class of voting methods from consideration: those I’d call rated (or graded or evaluational), where you as a voter can give information about strength of preference. Arrow consciously excluded those methods because he believed (as Gibbard and Satterthwaite later confirmed) that they’d inevitably be subject to strategic voting. But since ranked voting systems are also inevitably subject to strategy, that isn’t necessarily a good reason. In any case, Arrow’s choice to ignore such systems set a trend; it wasn’t until approval voting was reinvented around 1980 and score voting around 2000 that rated methods came into their own. Personally, for reasons I’ll explain further below, I tend to prefer rated systems over purely ranked ones, so I think that Arrow’s initial neglect of rated methods got the field off on a bit of a wrong foot.

And there’s another way I feel that Arrow set us off in the wrong direction. His idea of reasoning axiomatically about voting methods was brilliant, but ultimately, I think the field has been too focused on this axiomatic “Arrovian” paradigm, where the entire goal is to prove that certain criteria can be met by some specific voting method, or cannot be met by any method. Since it’s impossible to meet all desirable criteria in all cases, I’d rather look at things in a more probabilistic and quantitative way: how often, and how badly, does a given method fail desirable criteria?

The person I consider to be the founder of this latter, “statistical” paradigm for evaluating voting methods is Warren Smith. Now, where Kenneth Arrow won the Nobel Prize, Warren Smith has to my knowledge never managed to publish a paper in a peer-reviewed journal. He’s a smart and creative mathematician, but… let’s just say, not exemplary for his social graces. In particular, he’s not reluctant to opine in varied fields of politics where he lacks obvious credentials. So there are plenty in the academic world who’d just dismiss him as a crackpot, if they are even aware of his existence. This is unfortunate, because his work on voting theory is groundbreaking.

In his 2000 paper on “Range Voting” (what we’d now call Score Voting), he performed systematic utilitarian Monte-Carlo evaluation of a wide range of voting systems under a wide range of assumptions about how voters vote. In other words, in each of his simulations, he assumed certain numbers of candidates and of voters, as well as a statistical model for voter utilities and a strategy model for voters. Using the statistical model, he assigned each virtual voter a utility for each candidate; using the strategy model, he turned those utilities into a ballot in each voting method; and then he measured the total utility of the winning candidate, as compared to that of the highest-total-utility candidate in the race. Nowadays this measure is expressed on a scale where the highest-total-utility candidate would be 100% and the average randomly-selected candidate would be 0%, and is called “Voter Satisfaction Efficiency” (VSE).
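To make that procedure concrete, here’s a toy version of such a simulation in Python. The utility model (i.i.d. Gaussian utilities) and the strategy model (honest voters who linearly rescale their utilities onto the ballot range) are my own simplifying assumptions, far cruder than the range of models Smith actually tested:

```python
# Toy Monte-Carlo VSE estimate for honest score voting.
# Assumed models (not Smith's): i.i.d. Gaussian utilities; honest voters
# who rescale their utilities linearly onto the 0-1 score range.
import numpy as np

def toy_vse(n_sims=2000, n_voters=1001, n_cands=5, seed=0):
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(n_sims):
        utils = rng.normal(size=(n_voters, n_cands))  # utils[v, c]
        lo = utils.min(axis=1, keepdims=True)
        hi = utils.max(axis=1, keepdims=True)
        ballots = (utils - lo) / (hi - lo)            # honest score ballots
        winner = ballots.sum(axis=0).argmax()
        totals = utils.sum(axis=0)
        best, random_avg = totals.max(), totals.mean()
        # VSE scaling: best candidate = 100%, average random candidate = 0%.
        ratios.append((totals[winner] - random_avg) / (best - random_avg))
    return np.mean(ratios)

print(f"VSE ~ {toy_vse():.3f}")  # close to 1.0 for honest score voting
```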

Smith wasn’t the first to do something like this. But he was certainly the first to do it so systematically, across various voting methods, utility models, and strategic models. Because he did such a sensitivity analysis across utility and strategic models, he was able to see which voting methods consistently outperformed others, almost regardless of the specifics of the models he used. In particular, score voting, in which each voter gives each candidate a numerical score from a certain range (say, 0 to 100) and the highest total score wins, was almost always on top, while FPTP was almost always near the bottom.

More recently, I’ve done further work on VSE, using more-realistic voter and strategy models than what Smith had, and adding a variety of “media” models to allow varying the information on which the virtual voters base their strategizing. While this work confirmed many of Smith’s results — for instance, I still consistently find that FPTP is lower than IRV is lower than approval is lower than score — it has unseated score voting as the undisputed highest-VSE method. Other methods with better strategy resistance can end up doing better than score.

Of course, something else happened in the year 2000 that was important to the field of single-winner voting theory: the Bush-Gore election, in which Bush won the state of Florida and thus the presidency of the USA by a microscopic margin of about 500 votes. Along with the many “electoral system” irregularities in the Florida election (a mass purge of the voter rolls of those with the same name as known felons, a confusing ballot design in Palm Beach, antiquated punch-card ballots with difficult-to-interpret “hanging chads”, etc.) was one important “voting method” irregularity: the fact that Ralph Nader, a candidate whom most considered to be ideologically closer to Gore than to Bush, got far more votes than the margin between the two, leading many to argue that under almost any alternative voting method, Gore would have won. This, understandably, increased many people’s interest in voting theory and voting reform. Like Smith, many other amateurs began to make worthwhile progress in various ways, progress which was often not well covered in the academic literature.

In the years since, substantial progress has been made. But we activists for voting reform still haven’t managed to use our common hatred for FPTP to unite behind a common proposal. (The irony that our expertise in methods for reconciling different priorities into a common purpose hasn’t let us do so in our own field is not lost on us.)

In my opinion, aside from the utilitarian perspective offered by VSE, the key to evaluating voting methods is an understanding of strategic voting; this is what I’d call the “mechanism design” perspective. I’d say that there are 5 common “anti-patterns” that voting methods can fall into: situations where voting strategy can lead to pathological results, or where pathological results can incentivize strategy. I’d pose them as a series of 5 increasingly-difficult hurdles for a voting method to pass. Because the earlier hurdles deal with situations that are more common or more serious, I’d say that if a method trips on an earlier hurdle, it doesn’t much matter that it could have passed a later hurdle. Here they are:

(0. Dark Horse. As in Condorcet’s takedown of Borda above, this is where a candidate wins precisely because nobody expects them to. Very bad, but not a serious problem in most voting methods, except for the Borda Count.)
1. Vote-splitting /​ “spoiled” elections. Adding a minor candidate causes a similar major candidate to lose. Very bad because it leads to rampant strategic dishonesty and in extreme cases 2-party dominance, as in Duverger’s Law. Problematic in FPTP, resolved by most other voting methods.
2. Center squeeze. A centrist candidate is eliminated because they have lost first-choice support to rivals on both sides, so that one of the rivals wins, even though the centrist could have beaten either one of them in a one-on-one (pairwise) election. Though the direct consequences of this pathology are much less severe than those of vote-splitting, the indirect consequences of voters strategizing to avoid the problem would be exactly the same: self-perpetuating two-party dominance. This problem is related to failures of the “favorite betrayal criterion” (FBC). Problematic in IRV, resolved by most other methods.
3. Chicken dilemma (aka Burr dilemma, for Hamilton fans). Two similar candidates must combine strength in order to beat a third rival. But whichever of the two cooperates less will be the winner, leading to a game of “chicken” where both can end up losing to the rival. This problem is related to failures of the “later-no-harm” (LNH) criterion. Because LNH is incompatible with FBC, it is impossible to completely avoid the chicken dilemma without creating a center squeeze vulnerability, but systems like STAR voting or 3-2-1 minimize it.
4. Condorcet cycle. As above, a situation where, with honest votes, A beats B beats C beats A. There is no “correct” winner in this case, and so no voting method can really do anything to avoid getting a “wrong” winner. Luckily, in natural elections (that is, where bad actors are not able to create artificial Condorcet cycles by strategically engineering “poison pills”), this probably happens less than 5% of the time.

Note that there’s a general pattern in the pathologies above: the outcome of honest voting and that of strategic voting are in some sense polar opposites. For instance, under honest voting, vote-splitting destabilizes major parties; but under strategic voting, it makes their status unassailable. This is a common occurrence in voting theory. And it’s a reason that naive attempts to “fix” a problem in a voting system by adding rules can actually make the original problem worse.

(I wrote a separate article with further discussion of these pathologies.)

Here are a few of the various single-winner voting systems people favor, and a few (biased) words about the groups that favor them:

FPTP (aka plurality voting, or choose-one single-winner): Universally reviled by voting theorists, this is still favored by various groups who like the status quo in countries like the US, Canada, and the UK. In particular, incumbent politicians and lobbyists tend to be at best skeptical and at worst outright reactionary in response to reformers.

IRV (Instant runoff voting), aka Alternative Vote or RCV (Ranked Choice Voting… I hate that name, which deliberately appropriates the entire “ranked” category for this one specific method): This is a ranked system where, initially, only first-choice votes are tallied. To find the winner, you successively eliminate the last-place candidate, transferring those votes to their next surviving preference (if any), until some candidate has a majority of the votes remaining. It’s supported by FairVote, the largest electoral reform nonprofit in the US, which grew out of the movement for STV proportional representation (see the multi-winner section below for more details). IRV supporters tend to think that discussing its theoretical characteristics is a waste of time, since it’s so obvious that FPTP is bad and since IRV is the reform proposal with by far the longest track record and most well-developed movement behind it. Insofar as they do consider theory, they favor the “later-no-harm” criterion, and prefer to ignore things like the favorite betrayal criterion, summability, or spoiled ballots. They also don’t talk about the failed Alternative Vote referendum in the UK.
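Here’s a bare-bones sketch of that counting rule in Python, run on a hypothetical electorate that also illustrates the center-squeeze pathology from hurdle 2 (the candidate names and vote counts are invented for illustration):

```python
from collections import Counter

def irv_winner(ballots):
    """Repeatedly eliminate the candidate with the fewest first-choice
    votes, transferring each ballot to its next surviving preference,
    until someone has a majority of the remaining votes."""
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in remaining)
                        for b in ballots if any(c in remaining for c in b))
        top, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return top
        remaining.discard(min(tally, key=tally.get))

# Center squeeze: the centrist C pairwise-beats both L and R 60-40,
# but has the fewest first choices, so IRV eliminates C immediately.
ballots = ([("L", "C", "R")] * 40 + [("C", "L", "R")] * 15 +
           [("C", "R", "L")] * 5 + [("R", "C", "L")] * 40)
print(irv_winner(ballots))  # -> "L", not the Condorcet winner "C"
```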

Approval voting: This is the system where voters can approve (or not) each candidate, and the candidate approved by the most voters wins. Because of its simplicity, it’s something of a “Schelling point” for reformers of various stripes; that is, a natural point of agreement as an initial reform for those who don’t agree on which method would be an ideal end state. This method was used in Greek elections from about 1860 to 1920, but was not studied as a subject of voting theory until Brams and Fishburn “invented” it in the late 70s. It can be seen as a simple special case of many other voting methods, in particular score voting, so it does well on Warren Smith’s utilitarian measures, and fans of his work tend to support it. This is the system promoted by the Center for Election Science (electology.org), a voting reform nonprofit that was founded in 2012 by people frustrated with FairVote’s anti-voting-theory tendencies. (Full disclosure: I’m on the board of the CES, which is growing substantially this year due to a significant grant by the Open Philanthropy Project. Thanks!)

Condorcet methods: These are methods that are guaranteed to elect a pairwise beats-all winner (Condorcet winner) if one exists. Supported by people like Erik Maskin (a Nobel prize winner in economics here at Harvard; brilliant, but seemingly out of touch with the non-academic work on voting methods) and Markus Schulze (a capable self-promoter who invented a specific Condorcet method and has gotten groups like Debian to use it in their internal voting). In my view, these methods give good outcomes, but the complications of resolving cycles spoil their theoretical cleanness, while the difficulty of reading a pairwise matrix makes presenting results in an easy-to-grasp form basically impossible. So I personally wouldn’t recommend these methods for real-world adoption in most cases. Recent work on “improved” Condorcet methods has shown that they can be made good at avoiding the chicken dilemma, but I would hate to try to explain that work to a layperson.

Bucklin methods (aka median-based methods; especially, Majority Judgment): Based on choosing a winner with the highest median rating, just as score voting is based on choosing one with the highest average rating. Because medians are more robust to outliers than averages, median methods are more robust to strategy than score. Supported by French researchers Balinski and Laraki, these methods have an interesting history in the progressive-era USA. Their VSE is not outstanding though; better than IRV, plurality, and Borda, but not as good as most other methods.
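A toy illustration of that robustness, using hypothetical 0-10 ratings: two voters exaggerating down to the minimum drag the mean a full point, but leave the median untouched:

```python
import statistics

honest = [6, 6, 7, 7, 8, 3, 4]      # hypothetical ratings of one candidate
strategic = [6, 6, 7, 7, 8, 0, 0]   # the two lowest raters exaggerate to 0
print(statistics.mean(honest), statistics.mean(strategic))      # 5.86 -> 4.86
print(statistics.median(honest), statistics.median(strategic))  # 6 -> 6
```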

Delegation-based methods, especially SODA (simple optionally-delegated approval): It turns out that this kind of method can actually do the impossible and “avoid the Gibbard-Satterthwaite theorem in practice”. The key words there are “in practice” — the proof relies on a domain restriction, in which voters’ honest preferences all agree with those of their favorite candidate, those preference orders are non-cyclical, and voters mutually know each other to be rational. Still, this is the only voting system I know of that’s 100% strategy-free (including the chicken dilemma) in even such a limited domain. (The proof of this is based on complicated arguments about convexity in high-dimensional space, so Saari, it doesn’t fit here.) Due to its complexity, though, this is probably not a practical proposal.

Rated runoff methods (in particular STAR and 3-2-1): These are methods where rated ballots are used to reduce the field to two candidates, who are then compared pairwise using those same ballots. They combine the VSE advantages of score or approval with extra resistance to the chicken dilemma. These are currently my own favorites as ultimate goals for practical reform, though I still support approval as the first step.
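As a sketch of the two-step logic (a score round picks two finalists, then the same ballots decide an automatic runoff), here’s a minimal STAR count in Python; the 0-5 ballots are hypothetical, and real STAR rules include tiebreakers this omits:

```python
import numpy as np

def star_winner(ballots):
    """STAR sketch: the two highest-scoring candidates advance to a
    runoff decided by which finalist each ballot scores higher."""
    ballots = np.asarray(ballots)  # rows: voters; columns: candidates (0-5)
    a, b = np.argsort(ballots.sum(axis=0))[-2:]  # the two finalists
    prefer_a = np.sum(ballots[:, a] > ballots[:, b])
    prefer_b = np.sum(ballots[:, b] > ballots[:, a])
    return a if prefer_a >= prefer_b else b

ballots = [[5, 4, 0], [5, 4, 0], [0, 4, 5], [0, 3, 5], [2, 5, 0]]
print(star_winner(ballots))  # -> 1: the broadly-supported candidate wins
```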

Quadratic voting: Unlike all the methods above, this is based on the universal solvent of mechanism design: money (or other finite transferable resources). Voters can buy votes, with the cost for n votes proportional to n². This has some excellent characteristics with honest voters, and so I’ve seen various rationalists argue that it’s a good idea; but in my opinion, it has irresolvable problems with coordinated strategies. I realize that there are responses to these objections, but as far as I can tell, every problem you fix with this idea leads to two more.

TL;DR?

  • Plurality voting is really bad. (Borda count is too.)

  • Arrow’s theorem shows no ranked voting method is perfect.

  • Gibbard-Satterthwaite theorem shows that no voting method, ranked or not, is strategy-free in all cases.

  • Rated voting methods such as approval or score can get around Arrow, but not Gibbard-Satterthwaite.

  • Utilitarian measures, known as VSE, are one useful way to evaluate voting methods.

  • Another way is mechanism design. There are (1+)4 voting pathologies to worry about. Starting from the most important and going down: (Dark horse rules out Borda;) vote-splitting rules out plurality; center squeeze would rule out IRV; chicken dilemma argues against approval or score and in favor of rated runoff methods; and Condorcet cycles mean that even the best voting methods will “fail” in a few percent of cases.

What are the basics of multi-winner voting theory?

Multi-winner voting theory originated under parliamentary systems, where theorists wanted a system to guarantee that seats in a legislature would be awarded in proportion to votes. This is known as proportional representation (PR, prop-rep, or #PropRep). Early theorists include Henry Droop and Charles Dodgson (Lewis Carroll). We should also recognize Thomas Jefferson and Daniel Webster’s work on the related problem of apportioning congressional seats across states.

Because there are a number of seats to allocate, it’s generally easier to get a good answer to this problem than in the case of single-winner voting. It’s especially easy in the case where we’re allowed to give winners different voting weights; in that case, a simple chain of delegated voting weight guarantees perfect proportionality. (This idea has been known by many names: Dodgson’s method, asset voting, delegated proxy, liquid democracy, etc. There are still some details to work out if there is to be a lower bound on final voting weight, but generally it’s not hard to find ways to resolve those.)

When seats are constrained to be equally-weighted, there is inevitably an element of rounding error in proportionality. Generally, for each kind of method, there are two main versions: those that tend to round towards smaller parties (Sainte-Laguë, Webster, Hare, etc.) and those that tend to round towards larger ones (D’Hondt, Jefferson, Droop, etc.).
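For the divisor-method members of those families, the difference is just the divisor sequence: Jefferson/D’Hondt divides each party’s votes by 1, 2, 3, … while Webster/Sainte-Laguë divides by 1, 3, 5, … Here’s a sketch in Python with an invented vote profile:

```python
def divisor_method(votes, seats, divisor):
    """Award seats one at a time to the party with the highest quotient
    votes / divisor(seats already won)."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        p = max(votes, key=lambda p: votes[p] / divisor(won[p]))
        won[p] += 1
    return won

votes = {"A": 53_000, "B": 24_000, "C": 23_000}  # hypothetical totals
print(divisor_method(votes, 7, lambda s: s + 1))      # D'Hondt: A=4 B=2 C=1
print(divisor_method(votes, 7, lambda s: 2 * s + 1))  # Sainte-Lague: A=3 B=2 C=2
```

With fair shares of roughly A=3.7, B=1.7, C=1.6 seats, D’Hondt rounds the largest party up, while Sainte-Laguë spreads the rounding more evenly.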

Most abstract proportional voting methods can be considered as greedy algorithms to optimize some outcome measure. Non-greedy methods exist, but algorithms for finding non-greedy optima are often considered too complex for use in public elections. (I believe that these problems are NP-complete in many cases, though fast algorithms to find provably-optimal outcomes in all practical cases usually exist. Still, most people don’t want to trust voting to algorithms that nobody they know actually understands.)

Basically, the outcome measures being implicitly optimized are either “least remainder” (as in STV, single transferable vote), or “least squares” (not used by any real-world system, but proposed in Sweden in the 1890s by Thiele and Phragmen). STV’s greedy algorithm is based on elimination, which can lead to problems, as with IRV’s center-squeeze. A better solution, akin to Bucklin/​median methods in the single-winner case, is BTV (Bucklin transferable vote). But the difference is probably not a big enough deal to overcome STV’s advantage in terms of real-world track record.

Both STV and BTV are methods that rely on reweighting ballots when they help elect a winner. There are various reweighting formulas that each lead to proportionality in the case of pure partisan voting. This leads to an explosion of possible voting methods, all theoretically reasonable.
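One concrete example of such a reweighting rule is Thiele’s sequential proportional approval voting, where a ballot that has already helped elect s winners counts for 1/(1+s); this is just one of the many formulas alluded to above, sketched here on approval-style ballots for simplicity:

```python
def spav_winners(ballots, seats):
    """Sequential PAV (Thiele): each round, elect the candidate with the
    most ballot weight; a ballot approving s winners so far weighs 1/(1+s)."""
    winners = []
    for _ in range(seats):
        scores = {}
        for approved in ballots:
            weight = 1 / (1 + sum(1 for c in approved if c in winners))
            for c in approved:
                if c not in winners:
                    scores[c] = scores.get(c, 0) + weight
        winners.append(max(scores, key=scores.get))
    return winners

# Hypothetical pure-partisan electorate: 60 Red voters, 40 Blue voters.
ballots = [("R1", "R2", "R3")] * 60 + [("B1", "B2")] * 40
print(spav_winners(ballots, 3))  # -> ['R1', 'B1', 'R2'], roughly 60:40
```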

Because the theoretical pros and cons of various multi-winner methods are much smaller than those of single-winner ones, the debate tends to focus on practical aspects that are important politically but that a mathematician would consider trivial or ad hoc. Among these are:

  • The role of parties. For instance, STV makes partisan labels formally irrelevant, while list proportional methods (widely used; the best example system is Bavaria’s MMP/​mixed member proportional method) put parties at the center of the decision. STV’s non-partisan nature helped it get some traction in the US in the 1920s-1960s, but the main remnant of that is Cambridge, MA (which happens to be where I’m sitting). (The other remnant is that former STV advocates were key in founding FairVote in the 1990s and pushing for IRV after the 2000 election.) Political scientist @jacksantucci is the expert on this history.

  • Ballot simplicity and precinct summability. STV requires voters to rank candidates, and then requires keeping track of how many ballots of each type there are, with the number of possible types exceeding the factorial of the number of candidates. In practice, that means that vote-counting must be centralized, rather than being performed at the precinct level and then summed. That creates logistical hurdles and fraud vulnerabilities. Traditionally, the way to resolve this has been list methods, including mixed methods with lists in one part. Recent proposals for delegated methods such as my PLACE voting (proportional, locally-accountable, candidate endorsement; here’s an example) provide another way out of the bind.

  • Locality. Voters who are used to FPTP (plurality in single-member districts) are used to having “their local representative”, while pure proportional methods ignore geography. If you want both locality and proportionality, you can either use hybrid methods like MMP, or biproportional methods like LPR, DMP, or PLACE.

  • Breadth of choice. Ideally, voters should be able to choose from as many viable options as possible, without overwhelming them with ballot complexity. My proposal of PLACE is designed to meet that ideal.

Prop-rep methods would solve the problem of gerrymandering in the US. I believe that PLACE is the most viable proposal in that regard: it maintains the locality and ballot simplicity of the current system, is relatively non-disruptive to incumbents, and maximizes breadth of voter choice to help increase turnout.

Oh, I should also probably mention that I was the main designer, in collaboration with dozens of commenters on the website Making Light, of the proportional voting method E Pluribus Hugo, which is now used by the Hugo Awards to minimize the impact and incentives of bloc voting in the nominations phase.

Anticlimactic sign-off

OK, that’s a long article, but it does a better job of brain-dumping my >20 years of interest in this topic than anything I’ve ever written. On the subject of single-winner methods, I’ll be putting out a playable exploration of all of this sometime this summer, based on the work of the invaluable nicky case (as well as other collaborators).

I’ve now added a third article on this topic, in which I included a paragraph at the end asking people to contact me if they’re interested in activism on this. I believe this is a viable target for effective altruism.