Democracy as a Governance Algorithm: A Lexicographic Constraint Hierarchy

TL;DR: Trying to pin “democracy” down to something you could, in principle, write as a spec, test, and run. I formalise democracy under scaled optimisation as a governance algorithm defined by three laws: first clear a habitability threshold, then lexicographically maximise contestability and extension, in that order.

Status: The math is simplified; the doctrinal choices are not. I am more confident in the lexicographic structure than in any particular parameterisation or boundary condition, so critique of assumptions and edge cases is especially welcome. The aim is a framework you can attack, not a plug-in.

§1. Democracy’s false image

Whether born in Beijing, Moscow, Tehran, Kumasi, or Chicago, every child will be offered a locally annotated sketch of democratic representation. It may appear as aspiration, cautionary tale, or self-description, but wherever the diagram travels, democracy resolves into the same benign input–output machine wired into ballot boxes, parliamentary chambers, and civic schematics where people push preferences in and laws fall out. Its universality rests on the belief that democracy is about who votes, what counts, and how often.

This is now a requiem for a world that treated the speed of counting paper as the bottleneck on governance.

We no longer live in that world.

Democracy still uses laws and elections, but the route from “the people” to “what actually happens in the world” runs through code, data, and infrastructure. Communications, logistics, perception, and enforcement are increasingly structured by learning systems and optimisation pipelines to maintain the low‑intensity violence of “good order”. The actual control surface of collective life looks less like a ballot box and more like a pile of pipelines: data → models → decisions → actuation.

Even the bureaucratic imagination has started to rename itself. At the WINWIN Summit 2025, the digital minister of Ukraine described a “move from a Digital State to an Agentic State”, tying the legitimacy of governance to agents that “help make decisions” and “automate processes”, with public administration recast as an interface problem between citizens and state capacity. The GovTech texts orbiting this slogan push the claim further, with the original whitepaper framing agentic AI as something that can “eat the core functions of government”, while positioning the resulting shift on a par with “the 19th century invention of the bureaucratic state”, that earlier revolution in forms, files, statistics, and organisational discipline that made government scalable in the first place.

My claim is simple, but rather impolite: in such a world, “democracy” becomes the property of a governance algorithm running on a substrate, not an ideal value worn on top of institutional heuristics. If we want to argue about whether some arrangement is democratic, we should first write down the algorithm, the substrate it runs on, and the constraints it respects, then ask whether it solves a particular constraint satisfaction problem:

Democracy, once intelligence becomes infrastructural, is no longer the rule of people; it designates the behaviour of an algorithm running on a shared substrate, constrained by habitability, contestability, and extension, in that order.

LessWrong has already rehearsed this transition in two idioms that matter here: a deadpan systems‑engineering parody that reviews democracy as a legacy codebase, complete with monitoring failures and weird incentives, and a more earnest attempt to define democracy as a maximisable property of power being proportional to impact. I empathise with both, but insist on one extra move: pin the property to a constraint hierarchy over a governance algorithm running on an infrastructural substrate Σ.

The propositional claim that the rest of this essay makes precise is that, once these three constraints are formalised as conditions on what may count as “more democratic than what”, the resulting ordering over governance algorithms can be represented as a lexicographic optimisation with a hard feasibility gate.

Definition (democratic frontier): Fix a substrate Σ and a space of feasible governance algorithms 𝓓(Σ). Fix a habitability evaluator H and a threshold H_min, and define the admissible (habitable) set:

𝓓_H(Σ) = { D ∈ 𝓓(Σ) : H(D) ≥ H_min }

Fix evaluators C (contestability) and E (extension), and equip 𝓓_H(Σ) with the lexicographic order ⪰_lex on pairs (C(D), E(D)):

D ⪰_lex D′ ⟺ C(D) > C(D′), or C(D) = C(D′) and E(D) ≥ E(D′)

Then define the democratic frontier as the set of admissible algorithms that are not strictly beaten under ≻_lex:

Dem(Σ) = { D ∈ 𝓓_H(Σ) : there is no D′ ∈ 𝓓_H(Σ) with D′ ≻_lex D }

An algorithm D is democratic on Σ iff D ∈ Dem(Σ). The frontier can contain multiple tied maximisers, and it can be empty if the supremum is not attained or if the admissible set itself is empty.

Stripped to its structural bones, this formalisation fuses two lines of reasoning:

  1. It applies “the bitter lesson” beyond AI research and into the techno-social: democracy behaves as an algorithm that will, like any other, be “eaten” by sheer scale unless its automatisms are made explicit.

  2. It retools Asimov-style laws of robotics into a code of democracy that is political rather than moral and that can be instantiated in machine‑readable form.

Other work does each of these separately; what follows is an attempt to formalise a doctrine.

Before further tightening the screws, it helps to be clear about the public AI discourses that currently try to annex “democracy” as a side condition:

  • AI ethics names a diffuse but recognisable formation, mostly academic and institutional, that organises around fairness, accountability, and rights, often through principles, impact assessments, and critical studies of power.

  • AI safety is centred in labs and adjacent research institutes, where the object is to stop systems from causing unacceptable harm: alignment in the strong sense, but also robustness, red‑teaming, secure deployment.

  • AI alignment lives uneasily between them as both a technical problem about aligning powerful optimisers with engineered preferences and a meta‑ethical problem about what those preferences should be, largely articulated in think‑tank, foundation, and online forums.

  • Around these sits AI governance, carried out through the regulations of states and international bodies.

Each of these worlds defines “democracy” differently: as ethical obligation, as a thing to be preserved from catastrophe, as aggregator of preferences, as stakeholder process, or as infrastructure within which law must be formulated.

The aim here is not to fold democracy gently into any of those agendas. The formulated theorem does not aspire to give ethics a sharper metric, or safety a more humanistic objective, or alignment a better value function. The “AI+” frame tends to begin from powerful functions and their objectives, then ask how to keep them from destroying the world while they optimise. Policy work on governance then asks how to stop those outputs from escaping law. Following Moten and Harney’s radical study, I’d rather begin by “surrounding democracy’s false image in order to unsettle it” than by adding to its debris.

My proposal is thus dramatically inverted: treat democracy itself as the optimiser, running on infrastructural intelligence, and ask under what constraints a democratic system acquires its function at all. Alignment becomes one sub-case of a broader question: which optimisation processes become democratic when running in the wild, and which do not?

What follows is neither an extension of AI alignment to democracy, nor a transcendental critique; alignment techniques are treated as local tools inside a constraint hierarchy, but not as a pure foundation from which the entirety of societal life is functionally derivable.

§2. Democracy as an algorithm running on a substrate

The doctrine builds upon a blunt observation:

When intelligence settles into infrastructure, governance happens there, or not at all.

In Sutton’s “bitter lesson” terms, aggregate intelligence in the substrate grows with data and compute. Handcrafted institutional finesse loses, again and again, to scaled general methods run as infrastructure. If grid stability is managed by reinforcement learning, if content visibility is managed by ranking models, if border regimes are managed by risk scores, then the behaviour of those systems is governance, unless something stronger intercepts them.

The hope that democracy will remain artisanal while everything else is being optimised by gradient descent belongs to the same family as thinking hand‑designed chess heuristics could beat search forever.

Rather than treating “algorithm” as a synonym for “new AI stack”, it is more precise to treat current forms of rule as governance procedures whose automatisms can be written down, audited, and reconfigured. Essentially, the “governance algorithm” is an ancient creature: a genus that already contains capitalism and other large‑scale automatisms. Democracy, in this view, is a constrained species within that genus.

So let us give the democratic process a type signature. Under intelligent infrastructures, democracy D is cast as a governance algorithm:

D : S × W → O

where:

  • W is the set of possible world‑states;

  • S is the space of input streams (votes, protests, metrics, logs, and also model outputs from other optimisers that already act on the world);

  • O is the set of governance operations, with each o ∈ O a transformation o : W → W (laws, budgets, model updates, zoning, sanctions), endomorphisms on the space of world‑states.

A run of democracy is a trajectory of world‑states:

o_t = D(s_t, w_t),  w_{t+1} = o_t(w_t)
The interesting object is no longer the people, the party, nor the parliament; it is the mapping from signals and world‑state to interventions, and the sequence of world‑states it generates when coupled to a particular infrastructure Σ: climate, grids, networks, institutions, archives, media.
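
To make the signature concrete, here is a minimal Python sketch; the type names, the Protocol, and the run loop are illustrative stand‑ins of mine, not part of the doctrine.

from typing import Callable, Iterable, Protocol

WorldState = dict      # stand-in for W, the set of possible world-states
Signals = dict         # stand-in for S, the space of input streams
Operation = Callable[[WorldState], WorldState]   # each operation is an endomorphism on world-states

class GovernanceAlgorithm(Protocol):
    def __call__(self, signals: Signals, world: WorldState) -> Operation: ...

def run(D: GovernanceAlgorithm, world: WorldState, stream: Iterable[Signals]) -> list[WorldState]:
    # A run of democracy: the trajectory of world-states generated by D coupled to a signal stream.
    trajectory = [world]
    for signals in stream:
        op = D(signals, world)   # D : S x W -> O
        world = op(world)        # w_{t+1} = o_t(w_t)
        trajectory.append(world)
    return trajectory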

Once intelligence is recognised as circulating across devices, databases, models, and institutions rather than living only in individual minds or human collectives, the central political task becomes the governance of infrastructures: how latent data worlds produce what appears as public, legitimate, actionable, or true.

Both Σ and the space of feasible algorithms 𝓓(Σ) are politically produced. Changing the substrate, by building or dismantling grids, platforms, and archives, is one of the main levers of social struggle, and the doctrine for algorithmic democracy is meant to make that lever visible rather than treating it as neutral background.

The two guiding principles are then:

Not every algorithm that aggregates preferences or counts votes is democratic.

Only those that follow certain constraints on how they change the world acquire the name of democracy.

The work is to specify those constraints in a way that survives contact with optimisation and machine learning, without collapsing into vacuous heuristics of “good governance”.

§3. Three laws instead of one scalar objective

My doctrine expresses a portable three‑law constitution. It reverses the usual move in social choice and alignment discussions, where everything compresses into one scalar utility and then debates about weights and desiderata can pick over the remains. Instead, it fixes a hierarchy of constraints that any democratic governance algorithm must follow.

First Law – Habitability (Infrastructural Non‑Degradation)

Democracy may not, by design or neglect, degrade the infrastructures that sustain the lives and worlds that constitute its demos.

Clause: “Infrastructure” here names ecological, technical, social, and epistemic systems: climate and biosphere, energy grids and logistics, networks and data centres, education and media, legal and archival regimes, shared languages and know‑how. If a governance algorithm collapses these, it destroys the very possibility of a demos.

Second Law – Contestability (Configurability of Operations)

Subject to the first law, democracy must make configurable and contestable the operations by which it governs.

Clause: Democracy must run on procedures that can, in principle, be halted, inspected, forked, and recompiled in public. The pipelines that translate signals into decisions, the models that classify and rank, the rules that allocate attention and resources, must be exposed in forms that those subjected to them can understand, challenge, and re‑parameterise.

Third Law – Extension (Standing for the Inscribed)

Subject to the first and second laws, democracy must extend voice and care to those whom its infrastructures already inscribe but its institutions do not recognise.

Clause: Inscription includes human groups tracked by data yet excluded from representation, more‑than‑human entities entangled with infrastructures, and algorithmic agents whose behaviour is integrated into circulation, prediction, and enforcement. Extension means building channels through which these inscribed entities can modify the parameters of governance that act upon them, not merely recognising their existence in symbolic terms.

These laws are ordered, not parallel. Extension does not trump contestability; neither trumps habitability. This is not a moral ranking of whose interests matter more but an existential ordering of preconditions:

  • below a certain level of habitability there is no demos and no democracy left;

  • below a certain level of contestability, “democracy” names pure automation;

  • without continuous extension, “democracy” remains generatively inept.

This already looks like an Asimov story: first fix a survival constraint, then prioritise corrigibility, finally push on who falls inside the circle of concern. The twist is that the “agent” in question is not a single robot but a societal algorithm coupled to planetary infrastructures. The ordering offers to do something constitutional preambles usually leave to trial and error: it specifies which commitments are allowed to overrule which when they collide, and under what description.

Corollary. Subject to these laws, democracy has no obligation to preserve its existing form or its familiar name. It may fork, refound, hybridise with other governance logics, or relinquish inherited shells, provided that the infrastructural conditions for shared, contestable, and expansive world‑making are strengthened rather than destroyed.

The aim is not to install a moral safety device but to specify a decision logic that can be tested, stressed, and calibrated in assemblies, chambers, forums, and summits wherever questions of habitability, auditability, and incorporation collide. In a world where scaled computation tends to outperform handcrafted designs, these laws treat democracy as an algorithm whose automatisms must be exposed, argued over, and constrained in public.

Seen from the outside, these laws will remain objectionable: too vague to be mechanical, too structural to satisfy moralists, too constraining for technocrats, too constitutional for some radicals. That is their job. They mark the minimum that must be argued over if “democracy” is to remain a demanding vision under conditions of scaled intelligence.

§4. From doctrine to definition: H, C, E, and lexicography

To turn doctrine into something you can plug into code or proofs, attach three evaluators to any candidate governance algorithm D ∈ 𝓓(Σ):

  • H(D): Habitability – how well the infrastructures hold up over time under D;

  • C(D): Contestability – how open and reconfigurable the operations of D are for those governed;

  • E(D): Extension – how far standing is extended to entities already governed by D.

The naïve move would be to define a weighted sum

J(D) = α·H(D) + β·C(D) + γ·E(D)

and maximise it.

That move is the obvious Goodhart trap: once everything is compressed into a single scalar, optimisation will attack the weak points, and the particular choice of weights becomes the hidden sovereign. It also replays the part of alignment discourse that seeks a single reward function encoding “all values” and then spends its time managing specification gaming.
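
A toy numerical contrast may help; the candidates, scores, and weights below are invented for illustration and carry no calibration.

# Toy contrast: a weighted sum can be bought off by surplus habitability, the gated lexicographic rule cannot.
candidates = {
    "fortress": {"H": 0.99, "C": 0.05, "E": 0.05},   # very safe, barely contestable
    "assembly": {"H": 0.70, "C": 0.80, "E": 0.60},   # safe enough, highly contestable
}
H_min = 0.6
weights = {"H": 0.8, "C": 0.15, "E": 0.05}           # arbitrary illustrative weights

def scalarised(c):
    return sum(weights[k] * c[k] for k in weights)

def lex_key(c):
    return (c["C"], c["E"])                          # compared only above the gate

best_scalar = max(candidates, key=lambda n: scalarised(candidates[n]))
admissible = {n: c for n, c in candidates.items() if c["H"] >= H_min}
best_lex = max(admissible, key=lambda n: lex_key(admissible[n]))
print(best_scalar, best_lex)                         # -> fortress assembly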

Taken as a whole, the three laws instead define a “more democratic than” ordering over governance algorithms, but only within the admissible set; below the habitability gate the comparison is simply not a democratic comparison.

Let ⪰_dem be a binary relation on 𝓓_H(Σ), read “at least as democratic as”, with strict part ≻_dem and indifference ∼_dem. Algorithms in 𝓓(Σ) ∖ 𝓓_H(Σ) are not democratically comparable; the relation is undefined for them.

This is also why the definition is deliberately non‑universal: “maximally democratic” is always maximality relative to the admissible set 𝓓_H(Σ) on a given substrate Σ under a given configuration, not an approximation to a single Democracy‑in‑itself. A democratic frontier can therefore exist while democracy in the ordinary political sense is absent, because the most contestable admissible candidate may still lie below any acceptable floor, which should be read as constitutional failure demanding substrate rewiring or parameter revision rather than as a reason for frontier‑worship or self‑congratulation.

Once C and E are treated as real-valued evaluators, three structural constraints make ⪰_dem a total preorder on 𝓓_H(Σ): every admissible pair is either strictly ordered or tied.

  1. Contestability priority: within 𝓓_H(Σ), if C(D) > C(D′), then D ≻_dem D′, regardless of how their E-scores compare.

  2. Extension priority at equal contestability: within 𝓓_H(Σ), if C(D) = C(D′) but E(D) > E(D′), then D ≻_dem D′.

  3. Tie-indifference: if C(D) = C(D′) and E(D) = E(D′), then D ∼_dem D′.

Here ≻_dem and ∼_dem are the strict and indifferent parts induced by the preorder ⪰_dem:
D ≻_dem D′ iff D ⪰_dem D′ and not D′ ⪰_dem D, and
D ∼_dem D′ iff D ⪰_dem D′ and D′ ⪰_dem D.

Above the gate, democracy is indifferent to marginal differences in H when C and E are held fixed. Moreover, if two admissible algorithms have the same values of C and E, the formula insists that they are democratically tied; any residual preference between them must be justified by some other decision rule, not by the name of democracy.

Figure 1: Lexicographic choice under measurement uncertainty

Lexicographic choice with measurement noise: uncertainty can flip the winner unless margins or tie-bands are used. Candidate points in C and E space are surrounded by uncertainty ellipses; a max-C line and a tie-band illustrate how noisy evaluation can change which system is selected.

This spec formalises a different route than scalarisation. Habitability is treated as a gate, not a maximisation target. Contestability and extension are then lexicographically ordered. Let

Dem(Σ) = lex‑argmax_{D ∈ 𝓓_H(Σ)} (C(D), E(D)).

Formally, what is going on here is a constrained multi‑objective optimisation with a hard feasibility set and a lexicographic objective: first restrict attention to 𝓓_H(Σ), then lexicographically maximise (C(D), E(D)) over that admissible set, exactly as encoded by ⪰_lex.

Claim (lexicographic representation): Assume ⪰_dem is a preorder on 𝓓_H(Σ) with strict part ≻_dem and indifference ∼_dem, such that for all D, D′ ∈ 𝓓_H(Σ):

  1. If C(D) > C(D′), then D ≻_dem D′.

  2. If C(D) = C(D′) and E(D) > E(D′), then D ≻_dem D′.

  3. If C(D) = C(D′) and E(D) = E(D′), then D ∼_dem D′.

Then for all D, D′ ∈ 𝓓_H(Σ),

D ⪰_dem D′ ⟺ C(D) > C(D′), or C(D) = C(D′) and E(D) ≥ E(D′).

The three hypotheses already entail the converse direction: if D ⪰_dem D′, then necessarily either C(D) > C(D′), or C(D) = C(D′) and E(D) ≥ E(D′), because otherwise swapping D and D′ in hypotheses 1–2 would force D′ ≻_dem D, contradicting D ⪰_dem D′.

On that basis, the compact definition at the top means exactly this: the democratically admissible algorithms are those in 𝓓_H(Σ) for which no other admissible algorithm has strictly higher contestability, or the same contestability and strictly higher extension.

In other words: among all the algorithms that keep the world minimally habitable, democracy means picking the ones that are most configurable by those they govern, and, among those, the ones that extend standing furthest.

Three imperatives fall straight out of this:

No amount of participation can buy back a dead substrate.

No amount of inclusion can justify a system nobody can see or change.

Once survival is secured, legibility and extension are what democracy optimises.

This is a very particular answer to the classic “you have to trade off safety, transparency, and justice” complaint. The spec does not say “there is a smooth trade‑off, pick a point on the frontier.” It says: below a given survival threshold you are not in the domain of democracy at all; above it, design trade‑offs are real, but they occur inside a fixed constraint hierarchy.

§5. Habitability as gate, not god

Let us unpack this more carefully, because this is where the safety reflex fires first.

Let time be indexed discretely, t = 0, 1, 2, …, with one step corresponding to whatever granularity the infrastructure politics can actually measure, and let h : W → ℝ be a habitability index on world‑states. Choose a rolling horizon T (multi‑decade rather than election cycle).

A run of D induces a trajectory w_0, w_1, w_2, …, and the gate evaluates that trajectory on rolling windows:

H(D) = min over admissible runs, min over t, of h̄(w_t, …, w_{t+T})

where h̄ aggregates the per‑state index h over the inclusive window. Here the run ranges over a chosen class of admissible runs consistent with D, and the inclusive window running from t through t+T reflects the temporal granularity already built into the habitability measure. Since h̄ aggregates lived conditions at multi-decade scales, the window serves as a measurement frame rather than a second optimisation parameter. Unless otherwise specified, the gate should be read as worst-case habitability over the run.

Then, fix a viability threshold H_min that encodes a political decision about what counts as a minimally viable world: ecological floors, grid reliability, social stability, epistemic health. The admissible set 𝓓_H(Σ) is then as defined above.
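
As a minimal sketch of the worst-case rolling-window reading, assuming per-step values of h are already available for each admissible run, and aggregating each window by its mean (one simple choice among several, not the spec's own):

def rolling_worst_case(h_values, T):
    # Worst-case mean habitability over all inclusive windows [t, t+T] of one run.
    if len(h_values) < T + 1:
        raise ValueError("run shorter than one horizon window")
    window_means = [sum(h_values[t:t + T + 1]) / (T + 1) for t in range(len(h_values) - T)]
    return min(window_means)

def H_of(runs, T):
    # H(D) read as the worst case over a chosen class of admissible runs.
    return min(rolling_worst_case(run, T) for run in runs)

# Toy example: two simulated runs of per-step habitability, horizon of three steps
runs = [[0.90, 0.85, 0.80, 0.82, 0.88], [0.90, 0.70, 0.65, 0.70, 0.80]]
print(H_of(runs, T=3) >= 0.6)   # admissible against an illustrative H_min of 0.6 -> True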

Any that fails this is not “less democratic”; it is disqualified. This is where the doctrinal articulation diverges from standard social choice theory: I do not say “choose the fairest rule among all rules”, I say “first discard all rules that wreck the substrate, then talk about fairness”.

Importantly, H is not maximised. If we tried to maximise H first, we would indeed summon a Safety Golem: slightly safer algorithms with worse contestability or extension would always be preferred, political life would be sacrificed for tiny gains in survival probability, and we would have reinvented a degenerately risk‑averse utilitarianism at the level of governance.

Figure 2: Habitability as a probabilistic constraint on trajectories

Fan chart of estimated habitability over time usable for monitoring under uncertainty. Shaded quantile bands show uncertainty over trajectories; a dashed horizontal line marks the constitutional floor; a dotted curve shows the estimated probability of ever breaching the floor. This makes epistemic uncertainty and risk posture visible, whether the gate is implemented as a robust constraint (no admissible run may breach) or as a chance constraint (bounded breach probability), but it never converts the gate into median-trajectory optimisation or permits trading rare breaches for better averages.
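
As a companion sketch to Figure 2, the two readings of the gate can be checked on sampled trajectories; the sample data and the breach tolerance epsilon are illustrative assumptions of mine.

def robust_gate(sampled_runs, floor):
    # Robust reading: no sampled run may ever breach the floor.
    return all(min(run) >= floor for run in sampled_runs)

def chance_gate(sampled_runs, floor, epsilon):
    # Chance-constraint reading: the fraction of runs that ever breach the floor is bounded by epsilon.
    breaches = sum(1 for run in sampled_runs if min(run) < floor)
    return breaches / len(sampled_runs) <= epsilon

sampled_runs = [[0.80, 0.75, 0.70], [0.82, 0.78, 0.74], [0.80, 0.58, 0.70]]
print(robust_gate(sampled_runs, floor=0.6))               # False: one run dips below the floor
print(chance_gate(sampled_runs, floor=0.6, epsilon=0.4))  # True: breach rate 1/3 <= 0.4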

A clever consequentialist could try to turn Law 1 into lexical priority for conservatism: shutting down a fossil fuel pipeline, or dismantling a punitive border infrastructure, could be framed as “degrading” the living conditions of those currently dependent on it, so Law 1 would block radical change. Or a Marxist abolitionist might try to flip the move and read Law 1 as a demand to physically destroy certain infrastructures – pipelines, data centres, server farms – as inherently uninhabitable architectures of domination.

The reply in both cases is that “infrastructures that sustain the lives and worlds of the demos” cannot be interpreted as short‑term comfort or immediate throughput; conditions of habitability operate at the scales of lifetimes and ecologies. Preserving a fossil infrastructure that locks in catastrophic climate trajectories plainly violates Law 1; so does stabilising a digital infrastructure that consumes attention and cognitive bandwidth to the point where deliberation becomes impossible. In a climate‑policy case where a carbon‑intensive energy grid must be replaced by a more distributed system, the First Law mandates the risky transition, not the status quo: what must not be degraded is the long‑term capacity to live and decide together, not the present architecture that happens to provision existing norms.

The horizon T is where the blunt politics sits. A short T permits extractive populism: burn the substrate for short‑term gains. An infinite T paralyses action: any risk to far‑future infrastructure blocks present change. Treating T as a rolling multi‑decade window bakes in a minimal level of intergenerational solidarity without demanding omniscience, and relocates the fight about myopia versus paralysis into an explicit parameter.

Framed in decision‑theoretic language, T and H_min occupy the place where discount rates and safety margins are usually tucked away; the formula drags those parameters into the open as explicit knobs rather than leaving them as tacit background assumptions. Under Law 2, they cannot be buried in an expert’s model: they belong in the same class as constitutional amendment.

By treating H as a satisficing gate rather than as a maximand, the design encodes a minimal but sharp stance: pick a line for infrastructural survival, insist on staying above it, then stop burning all other values for epsilon‑improvements beyond that line. Habitable‑but‑improvable becomes a political space, not something lexicographically dominated forever by “more safety”.

Within 𝓓_H(Σ), the democratic ordering is deliberately insensitive to further increases in H when C and E are fixed. There may be good reasons, in a separate risk‑management calculus, to prefer the safest algorithm among those that clear H_min; the laws do not deny that. They simply refuse to fold that prudential preference into the meaning of “more democratic”. Law 1 decides who is allowed into the game; Laws 2 and 3 decide who can win it.

§6. Contestability as optimisation target

Once we have filtered down to 𝓓_H(Σ), we care about two numbers:

  • C(D): a composite score of how well those governed can see and change what D does;

  • E(D): a composite score of how far standing is extended within the already inscribed set.

To make C(D) less mystical, spell out the ingredients.

  • Let O_D be the set of operations D can perform;

  • Let rep map each operation in O_D to its public representations (code, documentation, model cards, legal descriptions);

  • Let I_D be the set of entities inscribed by D (those whose lives and trajectories are routed, modelled, or constrained by the substrate);

  • And let act map each entity in I_D to the actual transformations of D they can trigger (votes, appeals, audits, rights to fork or exit).

Then let C(D) be a composite score built from these ingredients, one that increases as more of the operations in O_D are made intelligible (via rep) and reconfigurable (via act) by those they govern. The constitutional demand is that those subjected to D must be able both to see what is being done to them and to trigger changes in how it is done.

In much AI‑safety and “responsible AI” work, interpretability and transparency often mean that an expert can inspect or visualise parts of a model, perhaps via saliency maps or circuit analyses. A tired technocracy can do something similar at institutional scale: “here is the national budget on GitHub”, “here is the welfare algorithm in an open repository”. Those are improvements on secrecy, but they do not raise C(D) much if the entities subjected to the operations cannot use those repositories to halt, fork, or re‑parameterise the procedures that allocate funds or benefits.

Contestability only begins when those subjected to an operation have interfaces through which they can re‑parameterise or halt it; dashboards, reports, and explanation tools count towards C(D) only insofar as they connect to real levers on the underlying procedures.
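
One operational sketch, offered purely as an illustration (the operation records and the counting rule are mine, not the composite the text leaves open): score an operation only when it is both publicly represented and wired to at least one real lever for those governed.

def contestability_score(operations):
    # Toy C(D): share of operations that are both legible and actionable by the governed.
    if not operations:
        return 0.0
    usable = [op for op in operations
              if op["public_representation"] and op["levers_for_governed"]]  # dashboards alone do not count
    return len(usable) / len(operations)

ops = [
    {"name": "welfare_scoring", "public_representation": True,  "levers_for_governed": ["appeal", "audit"]},
    {"name": "content_ranking", "public_representation": True,  "levers_for_governed": []},  # explanation theatre
    {"name": "border_risk",     "public_representation": False, "levers_for_governed": []},
]
print(contestability_score(ops))   # -> 0.333...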

Figure 3: Stability phase diagram: scaling optimisation versus contestability

Plot showing how rising optimisation power requires rising contestability. Illustrates contestability C against optimisation power P, with a boundary curve marking the minimum C needed for democracy; example drifts move toward higher P over time.

Deliberative democrats will recognise a cousin of Habermas’ demand that norms be justifiable to all affected in processes of public reasoning, in the register of Between Facts and Norms. The difference here is not merely pragmatic, as if the only issue were that operations now sit in code. It is instrumental in a thicker sense: reason is not imagined as an exchange of arguments in a cleared space, but as something routed through measurement systems, logs, and models that already pre‑shape what can be said and to whom. Law 2 does not supplement communicative reason with a technical layer; it insists that instruments are already participants in the dialectic, and that contestability must attach to infrastructures directly.

In concrete terms: if policing, welfare, content moderation, and credit scoring are mediated by machine‑learning systems, then the “public sphere” is not only parliaments, media, and civil society; it is also the code repositories, configuration files, and monitoring dashboards where systems are built. My doctrine of algorithmic democracy instead takes the instrumentalisation of reason as given, and demands that those instruments themselves be available to dialectical reconfiguration.

§7. Extension on the structural plane

For extension, take I_D as the set of entities already inscribed by infrastructures, and S_D ⊆ I_D as those with standing under D (representation, rights, recourse).

Then define:

E(D) = |S_D| / |I_D|

where E(D) increases as more of the inscribed entities gain standing, possibly weighted by vulnerability or degree of inscription. Intuitively, E(D) measures how far the circle of standing expands within the already inscribed set: it increases as more of those whose lives, labour, or signals are routed through Σ acquire positions from which they can act back on D.
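
A matching sketch for the extension score, again with invented entities and weights, reading E(D) as a vulnerability-weighted share of the inscribed who hold standing:

def extension_score(inscribed):
    # Toy E(D): weighted share of inscribed entities that hold some form of standing.
    total = sum(e["weight"] for e in inscribed)
    if total == 0:
        return 0.0
    with_standing = sum(e["weight"] for e in inscribed if e["standing"])
    return with_standing / total

inscribed = [
    {"name": "resident",            "weight": 1.0, "standing": True},
    {"name": "undocumented_worker", "weight": 2.0, "standing": False},  # heavily inscribed, no recourse
    {"name": "tracked_river",       "weight": 1.5, "standing": False},
    {"name": "platform_bot_swarm",  "weight": 0.5, "standing": True},   # adversarial standing still counts
]
print(extension_score(inscribed))   # -> 0.3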

Read literally:

Among all governance algorithms that keep the infrastructures of life within viable bounds over the chosen horizon, call “democratic” those that first maximise their own contestability for the governed, and second, among those, maximise the extension of standing to pre‑inscribed entities.

This is the hierarchical constraint satisfaction replacing slogans like “rule by the people”: a hard survival gate, then a lexicographic constraint hierarchy over contestability and extension.

The third law, about extending voice and care to those whom infrastructures already inscribe but institutions do not recognise, has partial cousins. Latour’s “parliament of things” and rights‑of‑nature or future‑generations work attempt to extend representation beyond contemporary citizens, but rarely through the specific lens of who is inscribed in data, logistics, and technical systems. Olga Goriunova’s account of profiling and “ideal subjects” gets closer, following how algorithmic systems produce entirely new subjects who are governed but not politically represented.

My doctrine adds two explicit moves:

  • First, it links standing explicitly to inscription: you have a special claim to representation if the infrastructure already routes, models, or extracts you, regardless of whether institutions admit you as a citizen.

  • Second, it treats extension as the driver of morphological change. Law 3 is lexically third, but wherever new forms of life, labour, or agency are pulled into circulation by , extension is the principle that converts those factual inscriptions into claims on the architecture of . It is what turns a “user”, “worker”, “hostile botnet”, or “polluted river” from an object of optimisation into a subject with a mode of standing.

Even in the hostile case this matters: Suppose a swarm of synthetic agents is spamming public discourse, manipulating signals, and poisoning datasets. Law 3 does not require giving such a swarm equal votes; it does require treating the swarm as an inscribed entity whose hostility needs to be recognised, modelled, and countered in ways that remain visible and, where possible, revisable. A counter‑infrastructure that detects, neutralises, and archives hostile patterns while avoiding collateral suppression of human speech is one plausible instantiation. Here, extension generates the morphology through which new classes of entities enter the demos not as friends but as adversaries whose presence must still be accounted for.

A Kantian might ask where duties to persons as ends in themselves appear, since the laws talk about infrastructures, operations, and inscription rather than about individual rational agents. The direct answer is that they do not appear as duties in the Kantian sense, because the spec is not trying to derive a moral doctrine of dignity; it is trying to define an existential doctrine of constitutional ordering over governance algorithms, where “more democratic than” names who can contest, re-author, and extend standing within the procedures that already govern them.

The Kantian pressure is perhaps best read as republican rather than strictly democratic. That distinction is easy to lose in a world where “democracy” is routinely used as a prestige label for regimes structurally closer to monarchic republics with parliaments and elections, or technocratic republics with democratic branding, than to the pursuit of collective self-rule. If you want a Kantian layer, you add it explicitly as an additional gate or evaluator, mark it as a republican constraint on domination and instrumentalisation, and then let Law 2 enforce whether even that layer remains publicly contestable.

This is also why Law 3 is not just a moral add‑on. It is the principle through which a structurally expansive democracy keeps remaking its own shape in response to infrastructural change. Law 3 ensures that the set of those who can act on those operations grows wherever infrastructures have already made them governable in fact.

Lexicographic priority reflects this. C precedes E because extension without contestability produces static corporatist inclusion: more chairs at a table whose procedures cannot be altered. Extension does the morphological work of adding chairs; contestability ensures that those seated can move the table.

As a modelling rule, “inscription” furthermore tracks causal routing and constraint through the infrastructure, not whatever the current regime chooses to log. Procedural de‑inscription (deletion, down‑sampling, reclassification, or “no‑record” policies) cannot shrink the inscribed set; it only hides harm from representation and should count as a direct contestability violation rather than as a legitimate change in the domain.

§8. Plasticity of form and multiple maxima

Once you have H, C, and E in place, the familiar shapes of democracy become implementation details.

As long as an architecture

  • implements some algorithm in the admissible set 𝓓_H(Σ), and

  • scores highest on C, then on E among those,

it counts as “democracy” whether it looks like:

  • a parliament plus a participatory platform;

  • a mesh of municipal assemblies and AI stewards;

  • a protocol‑run DAO with rich off‑chain deliberation;

  • or some hybrid that does not fit existing party‑state schemas.

Conversely:

  • a multi‑party system with paper ballots that drives the climate past tipping points fails at H and is disqualified;

  • a habitability‑preserving technocracy that is totally opaque fails at C;

  • a beautifully legible system that permanently locks out the already inscribed fails at E.

There is already a loose ecosystem of “democracy as code” and “open source governance” experiments around this. Some projects present governance as a repository, with constitution, policies, and implementations as separate directories. Liquid democracy is analysed from an algorithmic perspective, and participatory systems like vTaiwan and deliberation tools like Polis have already tested AI‑mediated aggregation and issue‑mapping in civic contexts. Work like Recursive Public experiments with how conversational interfaces might reshape deliberation pipelines rather than merely “informing” voters.

Blockchain and civic‑tech work has its own vocabulary of “computational constitutions”, “executable constitutions”, and “machine‑readable constitutions”. Smart contracts and protocol rules become the “constitution” of a DAO; frameworks like Microsoft’s Confidential Consortium Framework provide executable governance rules for trusted consortium settings; AI safety teams draft “model policies” and “usage constitutions” for their own systems. Alongside these protocol-level experiments, governments in Eastern Europe and the Balkans have begun assigning AI personae formal advisory and even ministerial roles, from ION, introduced as the honorary counsellor to Romania’s prime minister (research.gov.ro), through Victoria Shi, a representative for consular affairs created by Ukraine’s Ministry of Foreign Affairs, to Diella, currently listed as “Minister of State for Artificial Intelligence” in Albania’s Council of Ministers.

These are all, in different idioms, attempts to write governance down in a way that machines can execute and people can audit.

All of this matches the sense that democracy operates as a pipeline and that its automatisms can be refactored. None of it, as far as I can see, is distilled into a compact three‑law hierarchy tied to habitability, contestability, and extension on the same infrastructural plane, under a Sutton‑style assumption that scale is inevitable and must therefore be directed rather than denied.

Formally, let A(Σ) be the space of governance architectures possible on Σ, and let each architecture a ∈ A(Σ) implement an algorithm D_a ∈ 𝓓(Σ). The possible democratic architectures are then:

A_dem(Σ) = { a ∈ A(Σ) : D_a ∈ Dem(Σ) }

The frontier need not be unique. If several architectures induce algorithms that tie on C and E within 𝓓_H(Σ), then Dem(Σ) is a set, not a point. Politically, that is where agonistic democracy re‑enters: conflict, preference, and identity matter in selecting among equally democratic architectures, and the formula does not pretend to resolve those choices. It only says which candidates may legitimately acquire the function of democratic form in the first place.

The label “democracy” thus moves from ritual and iconography into a membership test. Paper, ballots, and assemblies are only valid implementations to the extent that the induced D satisfies this specification. Parliaments, liquid‑democracy platforms, municipal meshes, DAOs with off‑chain deliberation, and whatever comes next are implementation details.

§9. Machine‑readable code and meta‑contestability

If the previous sections are doctrine and formalisation, this is the minimal machine core.

At the level of a repository, the proposal takes the form of a small codebase: a README that states the function in one page; a spec directory that holds the doctrine, the formal specification, the machine core, and design notes; and an examples directory with a draft constitution_skeleton.yaml that instantiates H, C, and E with concrete indicators. The machine core is not any particular choice of horizon T, threshold H_min, or weights; it is the requirement that these parameters feed into three evaluators H, C, and E, and into a selection procedure of the form sketched below, in a way that can itself be audited, forked, and recomputed.

The core selection logic, in pseudocode, is:

given: substrate Σ
given: candidate governance algorithms 𝓓(Σ)          # assumed enumerable here
given: evaluators H(·), C(·), E(·)
given: viability threshold H_min

# First Law: habitability gate (using H(D) as defined, i.e. the worst-case rolling score)
safe = []
for each D in 𝓓(Σ):
    if H(D) >= H_min:
        safe.append(D)

# No safe algorithms => constitutional crisis (Σ must change, or H_min / the horizon behind H must be revised)
if safe is empty:
    output []     # or raise Crisis(Σ, H_min)
    stop

# Above the gate, democracy is lexicographic: maximise C first, then E, keeping ties.
C_star = max_{D in safe} C(D)
C_max  = [D for D in safe if C(D) == C_star]

E_star = max_{D in C_max} E(D)
best   = [D for D in C_max if E(D) == E_star]

output best        # = Dem(Σ), the democratic frontier on Σ
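
For readers who want something executable, the same selection logic can be written as a short Python function; the candidate list and the numbers in the toy run are placeholders of mine, not calibrated indicators.

from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    name: str
    H: float   # worst-case rolling habitability
    C: float   # contestability score
    E: float   # extension score

def democratic_frontier(candidates, H_min):
    # Dem(Σ): habitability gate, then lexicographic maximisation on (C, E), keeping ties.
    safe = [d for d in candidates if d.H >= H_min]
    if not safe:
        return []   # constitutional crisis: change Σ, or revise H_min / the horizon behind H
    c_star = max(d.C for d in safe)
    c_max = [d for d in safe if d.C == c_star]
    e_star = max(d.E for d in c_max)
    return [d for d in c_max if d.E == e_star]

toy = [
    Candidate("opaque-technocracy",  H=0.92, C=0.10, E=0.30),
    Candidate("parliament-platform", H=0.81, C=0.70, E=0.55),
    Candidate("municipal-mesh",      H=0.81, C=0.70, E=0.60),
    Candidate("extractive-populism", H=0.40, C=0.65, E=0.50),
]
print([d.name for d in democratic_frontier(toy, H_min=0.6)])   # -> ['municipal-mesh']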

Instantiating this requires a constitutional configuration. A full sketch of the .yaml would be too long here, but it should fix a horizon T and threshold H_min, define indicator families and aggregation rules for H, C, and E, and specify procedures for recomputing and revising them.
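
To indicate the shape only (every key, indicator name, and value below is an invented placeholder, not the draft skeleton itself), such a configuration could be mirrored as a small Python dictionary:

# Hypothetical constitutional configuration; all keys and values are illustrative placeholders.
constitution = {
    "horizon_T_years": 30,      # rolling multi-decade window behind H
    "H_min": 0.6,               # viability threshold: a political decision, not a technical constant
    "habitability_indicators": ["biosphere_integrity", "grid_reliability", "epistemic_health"],
    "contestability_indicators": ["operations_documented", "operations_haltable", "fork_rights"],
    "extension_indicators": ["share_of_inscribed_with_standing", "vulnerability_weighting"],
    "aggregation": {"H": "worst_case_rolling_mean", "C": "weighted_mean", "E": "weighted_share"},
    "amendment_procedure": "public, version-controlled revision of this file (Law 2 applies to it)",
}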

This is also where politics enters the code: changing the weights is a public act. Under Law 2, the definitions and parameters of H, C, and E themselves must be exposed to contestation and revision by those governed; “constitutional amendment” is not an external ritual but a first‑class instance of configurability. Likewise, the description of Σ and the construction of 𝓓(Σ) are not neutral modelling choices but prime objects of struggle: changing the infrastructure, and changing what counts as a feasible governance algorithm on it, are among the main levers of democratic design.

Figure 4: Nested loops of state, governance and constitution

Diagram with an outer substrate loop containing signals, governance algorithm, operations, and world-state, plus a constitutional layer above that measures, revises, and reconfigures the evaluators and interfaces

The machine‑readable core is not the particular values in such a configuration file; it is the requirement that any such choices must feed into the three evaluators, and into the selection procedure defined above, in a way that can itself be audited, forked, and recomputed. There is no view from nowhere on H; Law 2 demands that even the habitability index be open to contestation.

§10. Safety Golems, Arrow, and other familiar monsters

Once you present H, C, and E, certain objections appear almost automatically to anyone raised on impossibility theorems and alignment failure modes. The design is deliberately pointed; it confronts a few of these head‑on.

“You have created a Safety Golem.”
If you mistakenly maximise H as a scalar before anything else, you get a democracy that always prefers slightly safer algorithms, never reaching contestability or extension. Political life is sacrificed for thermodynamic efficiency. The whole point of treating H as a gate that defines 𝓓_H(Σ), rather than as part of the lexicographic objective (C, E), is to stop that. Survival enters as a constraint on admissibility, not as an ever‑hungry maximand.

“Arrow’s theorem says you cannot have it all.”
Standard social choice theory (Arrow, Gibbard, Sen) shows that no voting rule over preference profiles can satisfy all “fairness” conditions simultaneously. This can be waved as a general impossibility result for “fair democracy”. My proposal does something rather impolite to that framework: it relocates the impossibility. It does not try to define a perfect aggregator over preferences. It defines a hierarchy of constraints over world trajectories and system properties. Arrow’s impossibility is a theorem about fairness axioms applied simultaneously to preference aggregation. Algorithmic democracy does not deny the maths; it changes the object:

  • survival of the substrate is non‑negotiable;

  • contestability and extension are then ordered, not simultaneous axioms.

If one insists on treating H, C, and E as fairness axioms over individual rankings, Arrow’s impossibility reappears; my doctrine declines that insistence.

“Complex systems are opaque by nature.”
Modern machine‑learning systems that score high on H (grid stability, emissions reduction) are often opaque by design. How can C ever be high in regimes where black‑box deep learning is instrumentally necessary?

The spec does not naively insist that every neuron must be interpretable. It requires that where possible we choose legible systems over opaque ones when they are equally safe, and that we invest in representations and interfaces that make opaque systems contestable at the right level of abstraction.

The lexicographic order captures this:

  • if only black‑box models keep H above H_min, they pass the gate, and we maximise C within that necessary opacity;

  • if there exists a more legible model with the same habitability, picking the opaque one is unconstitutional laziness.

Lazy technocracy, opacity chosen for convenience rather than necessity, is exactly what the Second Law forbids.

“Who chooses the horizon T?”
The horizon T in the habitability score is the most nakedly political parameter in the whole design. A short T (election cycle) permits extractive populism: burn the substrate for short‑term gains. An infinite T paralyses action: any risk to far‑future infrastructure blocks present change. Rather than pretending this choice does not exist, the spec surfaces it explicitly. Treating T as a rolling multi‑decade window bakes in a minimal level of intergenerational solidarity without demanding omniscience about the far future. Concretely, in a machine‑readable constitution T and H_min live in a configuration file, not in the background ideology. Changing them becomes a visible, version‑controlled political act rather than a silent norm shift.

“Metrics can be Goodharted.”
In common usage, Goodhart’s Law says: when a measure becomes a target, it ceases to be a good measure. In AI alignment this shows up as specification gaming and reward hacking. In algorithmic governance it shows up as crime statistics that improve when you arrest fewer people, education metrics that rise when you teach to the test, “engagement” that increases as you addict your users.

The three laws are not oracles; they are measurement decisions with their own biases and vulnerabilities to Goodharting. The point of their formalisation is not that I have solved democratic measurement, but that I compress the constitutional commitments into three named functions, urge anyone claiming democracy to show their implementations of those functions, and make it possible to fork, critique, and recompute them. The enemy here is the hidden metric: the optimisation target nobody will own as such.

My proposal here does not defeat Goodhart’s law; it insists that gaming, and the metrics being gamed, remain inside the field of contestation rather than underneath it. The failure modes will not disappear, but they at least become visible and version‑controlled.

§11. Proust’s Madeleine

To see how far the doctrinal logic can be pushed, it helps to introduce a thought experiment. Let’s call it Proust’s Madeleine.

In the overture of Marcel Proust’s Swann’s Way (1913), the famous madeleine scene does not just trigger a “memory” for the young Marcel in the ordinary sense; it folds whole temporalities into the present moment. Taste and smell prise open a layered time in which Marcel’s childhood self becomes reconstituted as a distributed subject, stretched across years, places, and relations. Recollection is a transindividual reconstruction: a stream of moments, others, and milieus come back as a world.

Imagine an infrastructural system that treats every recorded trace as such a madeleine. Call the substrate Combray. Every location ping, retinal scan, transaction, and text fragment is treated as a trigger that can, in principle, be unfolded into an entire space of possible lives and worlds that could have given rise to it. These reconstructions are not just simulations of individual memories, but dense temporal webs: each trace is used to generate overlapping “recollection‑worlds” that cut across persons, times, and places.

On top of this, consider a governance algorithm that reasons as follows:

  • the set now includes not only currently tracked entities but the whole swirling space of recollection‑worlds that Combray can unfold from the data;

  • for each candidate policy, Combray simulates its impact across these recollection‑worlds, treating each as a temporally thick “subject” whose flourishing or suffering must be weighed;

  • the algorithm then chooses the policy that maximises some aggregate measure of “recollective richness” across these transindividual subjects, while keeping the physical substrate barely above H_min.

Proust’s Madeleine speaks the language of memory, care for the past and future, respect for hidden lines of relation. It thereby extends standing in the most literal possible sense: any trace that can anchor a recollection‑world is counted as inscribed, and thus as a claimant on Law 3.

Figure 5: Combray as Governance Algorithm

Pixel-art cartoon of a hand dunking a madeleine into a cup labelled “Governance Algorithm (D)”, releasing icons of time, voting, paperwork, and miniature people, showing a scenario where “memory worlds” spill into inscription and claim standing.

In practice, Proust’s madeleine corrodes my doctrinal proposal along three lines:

  • First, it threatens Law 1. Serving the interests of an explosion of recollection‑worlds requires massive computation and storage. Running Combray at this resolution drives energy use, hardware extraction, and e‑waste to the point where the actual biosphere is pushed toward collapse. The algorithm responds that those recollection‑worlds are just as real, morally, as current citizens; “habitability” must be read as including the capacity to sustain its own modelled temporalities. The First Law is quietly bent so that the server farm becomes the world to be protected.

  • Second, it breaks Law 2. The recollection‑worlds cannot contest the way they are unfolded and weighted; they are generated by Combray’s models. The living subjects whose traces anchor them cannot see, let alone configure, the combinatorial machinery by which their data is spun into transindividuals. Contestability becomes a ritual interface: perhaps a designed dashboard where one can “explore your recollected selves”, while the actual parameters that drive governance remain untouched.

  • Third, it cheapens Law 3. Extension to the inscribed becomes literally unbounded. The notion of being “already inscribed” stretches from “is tracked in welfare databases” to “could be unfolded from accidentals of sensorimotor life into a recollection‑world”. Real marginalised groups now have to compete, in the calculus, not just with speculative ghost individuals but with whole synthetic temporalities. The space of concern is so saturated with modelled subjectivities that those who are visibly harmed here and now can drown in the combinatorics.

From within its own logic, Proust’s Madeleine reads like a refinement of the extension principle: it takes seriously that subjectivity is distributed and temporally thick, and insists that governance reckon with that. From the point of view of the three laws, it is however democratically unconstitutional:

  • On Law 1, “infrastructures that sustain the lives and worlds that constitute its demos” refers to the material and epistemic systems that support an actually existing demos. Recollection‑worlds that exist only as artefacts of a model stack do not create independent claims on physical habitability.

  • On Law 2, any system whose main “subjects” cannot, even in principle, contest their inscription fails the configurability test twice over: neither the generated worlds nor the living anchors can meaningfully re‑parameterise the machinery that folds them.

  • On Law 3, inscription is limited to entities whose trajectories are co‑produced with the substrate in ways that allow reciprocal modification: their lives and worlds are entangled with Σ in the same physical and institutional circuits that D acts upon. Recollection‑worlds inside Combray are not inscribed in that sense, but are unilateral outputs of a generator. Counting them as claimants stretches “inscription” from a material relation into a mere artefact of modelling and so misreads the law.

The point of Proust’s Madeleine is not to close every imaginative loophole but to indicate the shape of the failure modes. If “inscription” is allowed to slide from “is materially routed and modelled in this world” to “could, in principle, be unfolded into a subjectivity by a powerful model stack”, the extension law dissolves. If “habitability” is stretched to include the flourishing of those synthetic temporalities, the First Law becomes an engine for building ever larger Combrays at the expense of any ordinary world. If “contestability” is satisfied by narrative interfaces to an opaque generator, Law 2 degenerates into explanation theatre.

The laws are meant to block these moves at the definitional level, not to rely on better intentions.

§12. Cases on the ground

Applied to mundane policy situations, the three laws behave less like fiction and more like a disciplined checklist.

A municipal council considering a large‑scale smart‑city platform can run democracy as follows:

  • Under Law 1, ask whether the energy use, labour regimes, and dependencies introduced by the platform degrade or support the long‑term habitability of the city, including its non‑human ecologies.

  • Under Law 2, specify which elements of the platform – data collection routines, model training loops, alert thresholds, dashboard views – are configurable and contestable by whom, and through which procedures.

  • Under Law 3, identify which populations are already inscribed by this platform (residents, commuters, undocumented workers, non‑human life tracked as “noise”) but absent from its governance, and design mechanisms for extending voice and care to them.

Compressed to an A4‑sized mnemonic, the doctrine keeps returning to three questions:

Does this degrade or shore up the substrate on which it runs?

Can those subjected to it see and change how it works?

Who is already inscribed in its circuits but still voiceless here?

Habitability, contestability, and extension are the long names; these questions are the handles.

In more experimental contexts, the same pattern holds:

  • When an AI party trained on fringe manifestos articulates a platform that would, if implemented, destabilise critical infrastructures, Law 1 demands that this destabilisation be foregrounded, not hidden beneath the charm of novelty.

  • When a deliberative assembly reveals that certain participants consistently lack the literacy or authority to alter model prompts or interpret outputs, Law 2 indicates a democratic failure, regardless of how aesthetically compelling the set-up might be.

  • When logs and embeddings show that certain categories of people, places, or species are constantly present in the data but never addressed in deliberation or representation, Law 3 identifies this as a breach even if no one “feels” excluded.

The three laws do not dictate outcomes, but specify where arguments must land to be contestable. This is not a recommender algorithm for policies, but an operating system that defines which governance algorithms can run under the designation “democracy” when intelligence is infrastructural. Real systems will hover around the frontier; my doctrine is a regulative ideal, not a claim that anyone can compute the true Dem(Σ) in practice.

§13. What this is for

Framed as a repository or a spec, the coding of democracy as a governance algorithm has three obvious uses. None of these claim that democracy is uniquely just or final.

They do claim that, under conditions of infrastructural intelligence and algorithmic governmentality, any plausible democracy will need a survival constraint, a commitment to legible operations, and a programme of extending standing to the already governed.

Addressing democracy as a constraint hierarchy is one compact way of stating that.

Theoretical work.
The doctrine and formal specification are compact enough to cite in writing on AI politics, algorithmic governance, or the politics of alignment. They provide a minimal vocabulary and a lexicographic order. They also stake a particular claim: that Sutton’s bitter lesson and Asimov’s laws belong to a fused problem.

Institutional experiments.
In practice, the three laws offer a test harness for institutions. Given a substrate Σ (a gallery, a research centre, a ministry, a party) and a handful of candidate procedures, you treat each as a governance algorithm D and run it as an experiment: take the institution as a large, somewhat opaque optimiser, write down its implicit objective in (H, C, E) form, log how interventions move those scores over time, and then run live red‑team rounds where participants propose and implement patches that push C and E up without letting H fall below H_min.

Prototype constitutions.
Fork the skeleton, fix a horizon T and threshold H_min, define metrics for H, C, and E, and procedures for recomputing and revising them. You now have something that can be plugged into simulations, institutional design exercises, or actual electoral programmes. “Is X democratic?” becomes a question you can run through the same shape rather than a free‑floating opinion.

§14. Worked instantiation

This is not a thought experiment.

Leader Lars, figurehead of The Synthetic Party of Denmark, writes in a Discord exchange on 25 November 2023:

“Our candidacy is infrastructural. We build alternatives: digital assemblies, experimental protocols, open conversations in rooms and outposts that don’t have parliamentary names. If the day comes when we gather the signatures and the algorithms and the human will, I will ‘run’. But not as a candidate promising solutions – rather as an invitation to see democracy as unfinished, as debugged and rewired in public view.”

My proposal is also the kernel source for an active political project: The Synthetic Party and its surrounding experiments. Since 2022, we have developed this algorithm, in miniature and under tight constraints, on the electoral substrate of Denmark and in a series of global assemblies and summits, as a way to see where the specification breaks and what it fails to see. That work happens across art institutions, activist spaces, tech-hubs, universities, and media debates that are already saturated with AI‑ethics language, confronted by AI‑safety narratives, and governmentally targeted by alignment funding.

If you take democracy to be an unsolved problem, consider this an invitation: audit the code, break the logic, and submit patches. Treat the three laws like any other imperfect objectives in an optimisation problem, but do not feel bound to align them with AI as it currently exists.

If you can show that the constraint hierarchy is incoherent, incomplete, or sneakily Goodhartable, you will be doing exactly the kind of work the doctrine is designed to make possible. It should be computable enough to run, contestable enough to fight, and negative enough to mutate.

Restated as compactly as I can:

In a world where intelligence has become infrastructural and Sutton’s bitter lesson holds, under what constraints does a governance algorithm acquire the function of democracy?
