Why Democracy?

I am an avid and radical believer in the systemic property of democracy. But if you had asked me (before I wrote this, anyway) why I hold such a strong and deeply-held belief, I would have been uncomfortable with the amount of cultural conditioning that would come to mind. I grew up in the West, where you are saturated with a nominally pro-democracy viewpoint for your whole life, and so it is easy to endorse democracy as an ethical axiom rather than as something that supports your ethical axioms. It isn’t enough for me to just feel strongly in support of radical democracy—I need to be able to tell you why.

Originally posted here.

What is Democracy?

Some definitions focus on implementation, stressing that this property lies in the ability of a population to choose its representatives through an electoral system:

Democracy is a form of government in which the people have the authority to choose their governing legislators.

I dislike this focus on implementation—surely there is a more general property we’re attempting to describe here? I want a definition which is clearly phrased as a heuristic that can be maximized: something with some dimensionality, some ability to place different systems of governance on an axis of exactly how democratic they are. Definitions that focus on implementation don’t seem to offer that. We know all too well that the mere act of holding elections doesn’t make a system democratic.

Other definitions are much more dimensional, but they describe the distribution of power as something like “everyone is equal on everything”, or are generally vague about the specific rules for how power is distributed:

[Democracy is] a system of government based on a [belief in freedom and equality], in which power is either held by elected representatives or directly by the people themselves

But I find this unsatisfying as well, because “everyone getting power over every decision” seems like an obviously poor choice for getting things done. Referendums are more often used as binary bludgeons for loaded questions than as ways of genuinely capturing a group’s preferences. There are certainly situations where a decision will affect everyone equally (e.g. the adoption of a standard or new law, environmental costs, the fate of large public goods), but it seems like some decisions should be local. I don’t think I should have power over what colour someone’s bins on the other side of the planet are, and I don’t really see why they should have much say over mine. Proportionality of power seems to represent an important aspect of just power, and is therefore something we should want to capture within our definition.

I’d like to propose a specific definition that I’ll try to stick with for the rest of this essay. I think this definition has some advantages—it doesn’t mention anything about implementation, it’s phrased as a property we can maximize, and it tries to avoid ethical axioms:

Democracy is a property of a system of governance where agents within that system have power over decisions in proportion to how much those decisions impact them.

You might have a totally different definition, and that’s fine. But this is the property I’m arguing for, so be wary of swapping in your own definition and then finding that the argument no longer makes sense.
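
To make the proportionality in this definition concrete, here is a minimal sketch (my own illustration, not part of the definition itself) which assumes, optimistically, that each agent’s impact can be reported as a single non-negative number; each agent’s power over a decision is then simply their share of the total reported impact:

```python
# A minimal sketch, assuming impact can be expressed as a non-negative number:
# each agent's power over a decision is their share of the total reported impact.
def power_shares(reported_impact: dict[str, float]) -> dict[str, float]:
    """Map each agent to a voting weight proportional to their stake in the decision."""
    total = sum(reported_impact.values())
    if total == 0:
        # Nobody claims to be affected; fall back to equal shares.
        return {agent: 1 / len(reported_impact) for agent in reported_impact}
    return {agent: impact / total for agent, impact in reported_impact.items()}

# The colour of my bins matters a lot to me, a little to my neighbour,
# and not at all to someone on the other side of the planet.
print(power_shares({"me": 10.0, "neighbour": 1.0, "overseas_stranger": 0.0}))
# {'me': 0.909..., 'neighbour': 0.0909..., 'overseas_stranger': 0.0}
```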

Who judges “impact”?

At the end of the day, there are few purely empirical and objective measurements that can define this term. Ethics and morality are intensely subjective and constantly evolving, and they are plagued by the fact that they are ultimately about predicting future costs as well as tallying past and current ones, and so will always contain a degree of uncertainty. Any system which attempts to enforce a permanent and unchanging system of objective measurement—as opposed to attempting to accurately measure evolving subjective perceptions—is likely to suffer from the issues of input distortion described below. We are trying to capture a truthful description of how people think about impact, not an “objectively correct” prescription. It is my opinion that such a prescription is reprehensible.

So, if we are to accurately measure the impact of decisions, we must select systems of input-gathering (e.g. voting—how we get information into the system) which capture the size of preferences and allow people to signal impact, and in doing so exert more power over that decision.

Normative Arguments

The first thing I think of when you ask me why this property of democracy is worth pursuing is a pair of intertwined ethical axioms I think are quite good—that we should minimize suffering, and that we should maximize agency.

But, I hear you say, you just said “we shouldn’t endorse ethical axioms!” Well, no, I said that our support of democracy shouldn’t be an axiom. At some point, if we’re talking about ranking future states of the universe by desirability, we do need to arbitrarily decide our goals, and we should try to make those goals as generalizable as possible.

Maximizing Agency

What do I mean specifically by this concept of “individual agency”? I mean that, for some group of individuals, we try to maximize both the depth and breadth of future choices that each member has. This does differ somewhat from other definitions within sociology, so keep that in mind.

Agency means more than just the number of boxes you get to tick in the voting booth—it also encompasses things like being able to choose where to live, what to say, who to associate with, and how to spend your time among as wide an array of interesting possibilities as possible. It also implies a maximization of population (when possible), as other people are some of the most interesting things that can happen—i.e. they create lots of meaningfully different futures for everyone to choose from, which increases everyone’s agency. A true measure of agency must encompass the totality of life’s possibilities, not merely your engagement with a state institution.

We might recognise this agency as “freedom” or “free will”. Every system of governance doles out individual agency according to some rationale, across extremes that run from the all-powerful dictator to the owned slave. Democracy is the property where we maximize the overall average agency, treating each individual with the same priority. If a decision affects only yourself, it follows that in a highly democratic system you would have full control over that decision. Any power taken from you beyond that point is distributed proportionally to others who lay well-grounded claims of being affected, but the default state is autonomy.

Minimizing Suffering

Let’s define this second axiom—this concept of “suffering”—as the other side of the coin of individual agency. Imagine the full space of possibilities for what the agency (as described above) of an individual can be. It’s obviously a massive space of possibilities—every single possible choice that every single possible person could ever make, each choice leading to an entirely new set of choices for everyone else to make in response. Now, each individual only has access to some very small subset of this space, which is restricted to their specific life, circumstances, and power—power here very specifically meaning their ability to navigate through this tree of decisions, a concept intertwined with agency.

The reason I don’t want to define “suffering” as “discomfort” or “hardship” is that there are types of decisions in which people choose to experience discomfort for some reward of future expanded agency. Austerity, rationing, environmental regulation, or really any decision involving some form of delayed gratification fits into this category. Within this essay, suffering is not just an experiential quality—it has an intimate relationship with individual agency and with your relationship to the decisions which brought about that experience.

Every individual will have some kind of valuation of their current set of future decisions. If an individual’s set of future choices is different from the set of choices they’d like to be able to make, then we might describe this individual as “suffering”. If the two are very divergent—say, they’d like to be deciding on what colour shirt to wear, but instead they are deciding on which shoe to eat—then we might say that that individual is suffering a lot.
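
As one loose way to formalize that divergence (my own illustration, and only one of many possible measures), you could score suffering as one minus the overlap between the set of choices someone would like to have and the set actually available to them:

```python
# A loose, hypothetical measure of suffering: one minus the overlap (Jaccard
# similarity) between the choices an individual wants and those actually available.
def suffering(desired: set[str], available: set[str]) -> float:
    """0.0 when the available choices are exactly the desired ones;
    1.0 when the two sets share nothing at all."""
    union = desired | available
    if not union:
        return 0.0
    return 1 - len(desired & available) / len(union)

print(suffering({"blue shirt", "red shirt"}, {"blue shirt", "red shirt"}))          # 0.0
print(suffering({"blue shirt", "red shirt"}, {"eat left shoe", "eat right shoe"}))  # 1.0
```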

If a system is very democratic, that would seem to require giving people the ability to exert large amounts of control over decisions that they judge will cause them lots of suffering. It really is the other side of the coin of maximizing agency, and these two heuristics pretty much pull in the same direction. However, the inclusion of suffering is an important factor, because we want to exclude systems that provide lots of interesting decisions for the population, but where lots of people are having a terrible time.

Functional Arguments

Can we argue for democracy beyond individual ethics? Can we find a more empirical justification for maximizing this property within a system of governance?

A good start is asking what a system of governance optimizes itself for. I think systems of governance, over a long enough period of time, inevitably optimize for stability. The system that survives, wins. “Getting to be the Monopoly on Violence” is the mother of all first-mover advantages. And so, we must judge these systems of governance on their ability to sustain themselves.

Any system of governance contains the 3 distinctive hallmarks of a predictive ethical system. It contains some form of gathering information about the world around it (input), a way of simulating what might happen in the future (prediction), and a mechanism for ranking those futures by how desirable they are and acting upon that (consensus). All political entities contain a set of bureaucratic institutions to perform these functions—to gather the numbers, crunch them, and decide on some course of action. So, naturally, we might ask how good different systems are at running these simulations, and be so bold as to correlate how well these systems perform at these 3 tasks with how well they will manage to self-propagate into the future.

Input: Systems That Lie To Themselves

Any predictive simulation requires a truthful snapshot of the world up to the point the simulation begins. It requires well-grounded knowledge. Democracy must be partially defined by an ability to reflect the true and honest beliefs of its constituents within a structure of power. When a data input is distorted—for instance by a cultural expectation, a material cost, or a threat of violence attached to expressing a view—the system receives false information about the world around it.

It is notoriously difficult for authoritarian regimes to determine the true level of fealty of their subjects, and it is consequently something they obsess over. The cost to each citizen of expressing their true preferences is far higher than the cost of expressing the preference the regime wants them to express. Sometimes those preferences will truly align, but when they don’t, the regime can never fully trust that the citizen isn’t harbouring some hidden opinion. And so the regime engages in heavy-handed oppression and surveillance of its people in order to try to control this—usually justifying its actions by elevating the heuristic of “stability” far above all others.

Authoritarian regimes (among other hierarchical structures) are more susceptible to distorting their inputs, a political analogue of p-hacking, because they are systems that lie to themselves.

Example 1: Mao’s infamous agrarian reforms were catastrophic—at least partially—because the information-gathering function of the bureaucracy lied about the state of the world. This was due to the high cost of expressing truth:

Local officials were fearful of Anti-Rightist Campaigns and competed to fulfill or over-fulfill quotas based on Mao’s exaggerated claims, collecting “surpluses” that in fact did not exist and leaving farmers to starve. Higher officials did not dare to report the economic disaster caused by these policies, and national officials, blaming bad weather for the decline in food output, took little or no action. [1]

However, we shouldn’t mistake this ability of a predictive system to lie to itself for some unique failing of centralized economic planning. Capitalist markets also display an ability to lie to themselves, with speculative boom-bust cycles often involving some level of input distortion.

Example 2: In the 2008 financial crisis, one reason the situation was allowed to get so bad before the market corrected itself was that credit rating agencies produced distorted ratings for mortgage-related securities.

The skewed assessments, in turn, helped the financial system take on far more risk than it could safely handle. [2]

In this case, it was the inverse—a lack of cost for distorting the inputs—that caused the market to lie to itself and eventually experience a large correction when reality caught up. The universe always balances its books, and there is simply no cheating it through the power of belief.

In both cases, the incentives of the system’s input-gathering mechanisms were not aligned with the overall need for a truthful and accurate understanding of the world. A system which achieves a high level of democracy may be more resilient to this kind of misalignment. There does need to be some cost involved in reporting inputs, but that cost needs to be focused on truthfulness, so that the best option is generally to be honest about the world around you and your preferences for that world. Maximizing the property of democracy necessitates that we give people proportional power over how they are punished. We see notions of this today, when we attempt to put individuals and the state on an equal footing within a legal system. Having the power to defend yourself from punishment means that a large consensus needs to form before that punishment can happen, which guards against the costs that would otherwise be attached to providing truthful inputs.

Prediction: Condorcet’s Jury

The great Marquis de Condorcet—radical egalitarian, revolutionary, and a man who did not deserve to die as he did—first described what is now known as Condorcet’s jury theorem in 1785.

Consider a pool of n voters who must choose between two options, where each voter is competent (they have a greater than 50% chance of making the “correct” decision) and independent (they don’t consider other people’s votes when making their own). Condorcet’s jury theorem states that as n increases, the probability that the majority decision will be “correct” approaches certainty. The more people you ask to vote, the more likely you are to get the right answer.
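
To see the theorem concretely, here is a small illustrative calculation (my own sketch, not part of the original presentation) of the exact probability that a simple majority of n independent voters, each correct with probability p, picks the correct option:

```python
# A minimal, self-contained illustration: with independent voters who are each
# correct with probability p > 0.5, the chance that a simple majority of n voters
# is correct grows towards 1 as n increases.
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that more than half of n independent voters, each correct
    with probability p, vote for the correct option (odd n avoids ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, p=0.55), 4))
# Even a modest 55% individual competence pushes the majority towards certainty
# as the pool grows.
```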

There is still significant debate around the extensibility of the jury theorem to non-binary choices, and while that remains an open question, there are some convincing arguments for it. We also shouldn’t ignore that as we make the decisions available to people more and more complex, we enter territory where there are no perfect solutions—only imperfect realities containing the kinds of tradeoffs described by Arrow’s impossibility theorem. And, finally, which options are “correct” is much clearer when dealing with epistemic questions of fact than with inherently arbitrary questions of policy.

Nevertheless, when consensus can be harnessed to collectively predict future states of the universe, we may expect any predictive model to benefit from this theorem. And when the sustainability of a system depends on its ability to predict future states, the statistical aggregation that democracy provides will often outperform any algorithmic model that attempts to artificially simulate the inner goals of the demos.

Consensus: The Tyranny Of Commodus

Commodus was a cruel, violent and narcissistic sociopath. At the age of 18, he fully assumed the office of Emperor of Rome and its Empire when his father Marcus Aurelius died. During his 12-year reign of terror he murdered, tortured, and toyed with everyone around him until his dictatorial habits became intolerable. After several assassination attempts, he was strangled in his bath by his wrestling partner. He almost single-handedly managed to crash the largest and most vibrant economy in the world through a combination of disastrous decisions and uninterested neglect, wrought unspeakable horrors on those around him, and plunged the Roman Empire into a period of intense political instability and suffering.

Marcus Aurelius, this man’s father, is held up as one of the greatest Roman Emperors to ever exist. If ever there was a blueprint for the “Philosopher King” archetype, it was this man. He handed Commodus an Empire in its golden years, with a highly sophisticated set of institutions set up and running smoothly. These institutions did fairly well at our three components: data gathering (Rome, after all, was famous for its census-takers and tax collectors), prediction (they definitely perfected the art of logistics) and consensus—ah. Well, that was Aurelius at the top and a strict hierarchy of subordinates that formed an extremely centralized governance structure. All authority and legitimacy effectively flowed from the Emperor, buoyed by a deep story of “noble Empire”. The system of governance was simply not designed to handle the tyranny of Commodus, instead depending on the cool-headed leadership the previous Emperors had shown.

If you’ve studied much history, you know that the story of Commodus is incredibly common amongst hierarchical power structures. Even if some totalitarian power is currently “benevolent enough” for you to accept its limiting of agency, it can very quickly produce some extremely destructive decisions. If our previous two components are working well, we know what the world is like and we can predict roughly what the possible consequences of our actions should be. This was the case in Commodus’ Rome—his advisors knew that reducing the silver in the denarius would cause inflation—but it was the component of consensus that failed, because it depended too heavily on a single individual. A wide consensus is far better able to avoid large and unfounded divergences in goals, and much more able to produce a smooth, continuous evolution in goals as the world changes around it. At the very least, a rapid change in societal goals and strategies should be in response to a wide and rapid change in consensus.

Counter-Arguments

Naturally, when presenting as all-encompassing a view as this one, we should look at the arguments against maximizing democracy and see whether any of the work we’ve done above can assuage those concerns. However, I’m not going to start off very well, because I’m going to begin by basically dismissing:

Normative Counterarguments

A normative counterargument is one that proposes alternative ethical axioms to the two I have laid out above. If you begin from a different arbitrary set of ethical axioms—and yes, all ethical axioms are arbitrary—then I have no real rational argument to present in order to dissuade you. I can argue that you should alter your ethical axioms, but that argument is not fundamentally going to be a reasonable one—it’s going to be a terribly flawed, emotive, and passionate affair which I will spare you.

Functional Counterarguments

Majoritarianism, or Mob Rule

Many opponents of radical democracy will say something along the lines of being wary of “mob rule”. This is when a large majority of the population is able to exert intimidatory or disproportionate influence over institutions of the state, as opposed to a state operating on the Rule of Law. There is obviously some interplay here, as most people would agree that laws should be arrived at through democratic governance, so mob rule seems to refer more to clandestine or illegal influence.

I think it’s important here to distinguish between temporary majoritarianism and institutionalized majoritarianism. We don’t necessarily have an issue with a strong majority making a single decision. The issue is when a majority segment of society is somehow consistently united, polarised, and balkanized from the other segments. When a large portion of the country consistently rules only for itself and ignores the goals of everyone else, that will undoubtedly lead to erosions of rights, power and freedom for the mob-ruled minority.

However, I would argue that it is more difficult to build these structures of institutionalized majoritarianism in radically direct democracies. It is absolutely not impossible to balkanize a demos in a radical democracy, but it requires more work, and you have to convince more people. If individuals with aligned goals are easily able to communicate, organise, and express political power, then they will come together to cooperate and achieve their goals. When these power relationships are less institutionalized, familial, and structural, and are instead truthful representations of a population’s preferences, then surely we would see greater intersectionality within our political structures? It is unlikely that a cabal organized around any abstract identity—“Republican”, “Atheist”, “Technologist”, “Environmentalist”—will perfectly align with any individual. There will be Republican atheists and technologist environmentalists, and it will sometimes be in their interest to “cross” whatever aisle they’re currently sitting on.

The Average Punter

Another common objection hits primarily at this concept of a “competent” voter as laid out above—that is, for a consensus to converge on a “correct” opinion about the world, the average individual needs to have a better-than-even chance of choosing that correct opinion. However, there are clearly many questions that a governance system may need to reason about which the average person would have a less-than-even chance of getting right. Clearly, there are some decisions that require the ability to defer to experts.

We’ve talked briefly about voting systems that allow the demos to communicate the relative size of their preferences, and systems like Quadratic Voting show some promise in this area. But another mechanism which seeks to resolve this specialization issue is Liquid Democracy, which is defined by the ability to delegate your vote—that is, to give control over your vote to someone else of your choosing.

Imagine that there is an upcoming decision about cancer research. I don’t know very much about cancer research, but I have a long and trusting relationship with my doctor and I know that they possess a good level of knowledge about it. I can delegate my vote to them, and maybe even all future votes regarding medical issues. Critically though, I can remove or modify this delegation at any time. Liquid Democracy is a fascinating concept in that it organically generates very fluid (i.e. frequently changing) “nodes of power” within the population, and those nodes can be very reactive to changes. You create “representatives” through a state of constant election, as opposed to the staggered, iterative 4-6 year cycles of present-day representative democracy. Of course, such a system would require limits on things like how many votes a single person can have delegated to them, and perhaps some form of term limits or diminishing returns to discourage the formation of permanent structures of power.
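
As a rough sketch of how such delegation might resolve mechanically (my own illustration with made-up names, not a specification of any real Liquid Democracy system), each voter either votes directly or delegates to another voter, and their voting power flows along the chain to whoever ultimately casts a direct vote:

```python
# A hypothetical sketch of delegation resolution: each voter either votes directly
# or delegates to another voter; delegated voting power follows the chain to
# whoever ultimately casts a direct vote.
from collections import Counter

def tally(direct_votes: dict[str, str], delegations: dict[str, str]) -> Counter:
    """Count one vote per voter, following delegation chains to a direct voter.
    Chains that loop without reaching a direct vote count as abstentions."""
    results: Counter = Counter()
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Walk the delegation chain until we reach someone who voted directly.
        while current not in direct_votes and current in delegations and current not in seen:
            seen.add(current)
            current = delegations[current]
        if current in direct_votes:
            results[direct_votes[current]] += 1
    return results

# Bo trusts Ada on medical questions; Cal trusts Bo, so Cal's vote also ends up with Ada.
direct_votes = {"ada": "fund_trial", "dee": "reject"}
delegations = {"bo": "ada", "cal": "bo"}
print(tally(direct_votes, delegations))  # Counter({'fund_trial': 3, 'reject': 1})
```

A real system would also need the limits mentioned above, such as a cap on how much voting power can pool behind any single delegate.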

Open Questions & Negative Examples

It can be useful, when trying to nail down a specific definition of a term, to figure out what the edge cases are, and what the negative cases are. Here are a few scenarios that I don’t think count as democracies, but are interestingly different, or at least pose interesting questions:

1. The Sortition

Imagine a society wherein every decision is made by a random sortition of the entire population. The sortition may grow or shrink based on the desired level of confidence, but it never encompasses a majority of constituents. And yet, with high probability, this system produces the same results as surveying everyone on every decision. But is such a system highly democratic?
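
As a rough aside on why the sortition never needs to approach a majority (my own back-of-the-envelope sketch, not part of the thought experiment), the sample size needed to estimate a preference split to within a given margin of error at roughly 95% confidence barely depends on how large the population is:

```python
# A back-of-the-envelope sketch: the sample size needed to estimate a preference
# split to within a margin of error e at roughly 95% confidence, assuming the
# worst-case 50/50 split and a large population.
import math

def sortition_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Participants needed so the sample proportion lands within ±margin_of_error
    of the whole population's, about 95% of the time."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

for e in (0.05, 0.03, 0.01):
    print(f"±{e:.0%} margin -> about {sortition_size(e)} participants")
# Roughly 385, 1068, and 9604 participants, whether the demos numbers in the
# thousands or the millions.
```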

2. The Benevolent Omniscient

Imagine a society governed entirely by an extremely intelligent being, an ideal “Philosopher King”. This being is able to predict the material needs of every individual and ensures that all needs are met. Nobody goes hungry, homeless, or even unoccupied. Generally, if you ask people, they are fairly content and agree with the decisions that this being makes for them—but they are never allowed to make their own decisions. Is this system democratic?

3. The Yeerks

Imagine a society where every human is taken over by a parasitic brain-worm which slightly modifies their desires to include never getting rid of the brain-worm, but otherwise leaves them autonomous. Could such a society ever form a truly democratic consensus?

4. Literally The West?

Imagine a society in which constant elections are run, councils sortitioned from the people, censuses taken, etc. in order to capture the preferences of the demos as accurately as possible. However, the state’s actions are completely uncorrelated with those preferences—that is, it pursues its own goals and doesn’t attempt to help its citizens achieve their own. Would we describe this system as highly democratic?

5. No Spoon For You

Imagine a society in which all individuals are kept trapped within a simulation. Within the confines of the simulation, each individual is a god-like entity who is able to shape reality, experience anything, and fulfil any desire they want. However, they cannot leave the simulation. Do individuals within this society have agency?

6. The Perfect Union

Imagine a society that, at a certain moment in time, perfectly captures the preferences of every individual. The society expends a large amount of work to generate a perfect constitutional document from this data. This document outlines how to address those preferences in excruciating detail and will probably do an almost perfect job. However, the constitution specifies that it is a permanent document. It can technically be amended, but in practice it embodies the eternal authority of the state and will stay essentially unchanged forever. Is such a society highly democratic on that first day? Will it be as democratic in a millennium?