Systemic Risks and Where to Find Them


Or: Todd Has a Presentation in London on Thursday and Three Academics (Some of Them Dead) Won’t Stop Arguing About Root Fungi

(The story follows the one in Seeing Like A State, but applies a systemic perspective to AI Safety.)

Epistemic Status: Written with my Simulator Worlds framing, i.e. I ran this simulated scenario with Claude in order to generate good cognitive basins, then orchestrated it to play out a simulated scene under my instructions (with some changes for better comedic effect). This post is Internally Verified (i.e. I think most of the claims are correct, with 70-85% certainty).


The headset smells like someone else’s face.

“Just put it on, Todd.”

“Sandra, it truly—”

“I know. Put it on. You’re presenting to the Science and Technology Select Committee (UK) on Thursday about systemic risks from frontier AI and you currently think systemic risk means ‘a risk that is big.’”

“That is absolutely not—”

“You said that. In the pre-brief. I wrote it down. I’m going to have it framed.”

Sandra has worked at the Department for Science, Innovation and Technology for twenty-three years. She once corrected a visiting researcher from the Santa Fe Institute on his own citation and he sent her flowers. She has opinions about management cybernetics that she shares with nobody because nobody asks. She is paid less than the office coffee budget.

Todd was a postman in Swindon until eighteen months ago. His mate Dave got him the job.

“I’ve got forty-seven documents to fill in for the committee. Forty-seven. They’ve got boxes. I understand boxes. I’m good at boxes.”

“The boxes are wrong.”

“The boxes are government-mandated.”

“Still wrong. Headset. Now.”

Introduction

He’s in a forest.

It takes a moment. The conference room doesn’t so much disappear as get gently shouldered aside by something much older. And then Todd is standing on soft ground, in cold air, surrounded by trees.

Except — and it takes him another moment to understand why it feels wrong — the trees are in rows. Perfect rows. Identical trees, identical spacing, stretching in every direction until the geometry gets bored and fades into mist. Norway spruce. He knows this because a small label is floating beside the nearest trunk like a museum placard: Picea abies. Planted 1820. Yield-optimised monoculture.

The ground is bare. Not the interesting kind of bare, with moss and leaf litter and the promise of hidden things — just dark, flat, dead soil. No undergrowth. No ferns. No birds. Nothing moving. The air tastes of resin and something chemical he can’t place.

A yield-optimised spruce monoculture in Germany. Every tree individually excellent. The forest is dying.

“Hello?” says Todd.

Nothing.

He walks between the rows. His footsteps sound wrong — too clean, too isolated, as if the forest has nothing to absorb them. He touches a trunk. The bark feels thin. Papery. Like something that’s been alive for a long time but has recently started to forget how.

“This is horrible,” he says. “Why is this horrible? It’s a forest. Forests are nice.”

Sandra’s voice in his earpiece: “It’s not a forest. That’s the point. Keep walking.”

He walks. The rows repeat. The silence repeats. It’s like being inside a spreadsheet that grew bark.

“Sandra, why am I here? I have documents. I have work to do, how the hell is this related to a bloody forest in the middle of nowhere?”

Todd starts muttering the mantra he’s developed over the last few weeks:

“AI capability leads to risk factor, risk factor leads to potential harm, you evaluate the capability, assess the risk, mitigate the harm. A, B, C. It’s clean. It makes sense. It fits in the boxes.”

“Todd, you’re doing it again!”

“Sorrrryyyy…”

“Now, the obvious follow-up question is whether your framework describes a forest.”

“Why would I need to answer that?”

“Todd, does it describe a forest?”

“It doesn’t need to describe a forest, it needs to describe—”

“Does your A-B-C framework describe how this forest dies?”

Todd stops walking. He looks at the trees. At the bare soil. At the thin bark that’s starting, now that he’s paying attention, to peel at the edges. At the silence where birdsong should be.

“How does a forest die?”

“That’s the right question. And that’s why you’re here.”

Root Networks

Three people are standing in a clearing he could swear wasn’t there thirty seconds ago.

Two of them are already arguing. The third is watching with the patient expression of a man who has seen this argument happen before and knows exactly when to intervene.

The one in tweed sees Todd first. “Ah! You’re the governance chap. James Scott. Political science. Yale. Dead, technically, but they made me from my books. Try not to think about it.”

“I will absolutely think about it.”

“This is Michael—”

“Michael Levin, developmental biology, Tufts, not dead, I run the company that built this VR thing, Levin Enterprises, sorry about the headset smell—”

“And I’m Terrence Deacon, anthropology, Berkeley, unclear if dead, the simulation team had conflicting information and frankly I find the ambiguity productive—”

“Right,” says Todd. “Great. I’m Todd. I work in AI governance. I was a postman. I have a presentation to the Science and Technology Select Committee on Thursday. I need to know what a systemic risk actually is, and I need to know it in words that don’t require a PhD to understand, and I need to know it by Wednesday at the latest because I have to practice the slides on the train.”

Scott gestures at the trees. “This is a systemic risk.”

Todd looks around. “This? A forest?”

“This specific forest. What you’re standing in is the result of a decision made by the Prussian government in 1765. They looked at Germany’s forests — old growth, hundreds of species, tangled, messy, full of things doing things they couldn’t name or measure — and they saw waste. They wanted timber. So they cleared the old forests and planted these. Single species. Optimal spacing. Every tree selected for maximum yield.”

Todd waits. “And?”

“And it worked. For one generation, these were the most productive forests in Europe. The Prussians had cracked it. Scientific forestry. Rational management. Every tree individually perfect.”

“So what went wrong?”

This is where it happens. Levin can’t contain himself any longer. He’s been rocking on his heels and he breaks in like a man whose entire career has been building toward this specific interruption.

“What went wrong is that they thought the forest was the trees. But the forest isn’t the trees. The forest is the network. The mycorrhizal—”

“The what?”

Sandra, in Todd’s ear: “Fungal internet. Roots connected underground by fungi. Trees share nutrients and chemical warning signals through it. Like a nervous system made of mushrooms.”

“—the mycorrhizal networks connecting every root system to every other. The pest predators living in the undergrowth. The soil bacteria maintaining nutrient cycles. The entire living architecture that the Prussians classified as ‘mess’ and removed. Because their framework — their evaluation framework, Todd — measured individual trees. Height, girth, growth rate, timber yield. And every individual tree was excellent.”

“But the system—”

“The system was dying. Because the things that made it a system — the connections, the information flows, the mutual support — weren’t in any individual tree. They were in the between. And the between is exactly what the evaluation framework couldn’t see.”

As Levin speaks, the VR does something Todd isn’t expecting. The plantation dissolves backward — rewinding — and for a moment he sees what was there before. The old-growth forest, not a grid but a tangle. Trees at odd angles, different species, different ages, connected below the surface by a dense web of orange lines — the mycorrhizal network rendered visible, a living architecture of staggering complexity where every tree is linked to every other through branching fungal pathways.

Then the VR plays it forward. The old growth is cleared. The network is severed. The grid is planted. And the orange connections simply stop.

Left: the old-growth forest. The orange web is the mycorrhizal network — the connections that made it a living system. Right: the yield-optimised plantation. Same trees. No network.

Todd stares at the two images hanging in the air. The left one dense with orange connections. The right one bare.

“The dashboard says everything’s fine,” he says, looking at the grid.

“The dashboard measures trees,” says Sandra.

Deacon, who has been standing very still — which Todd is learning means he’s about to make everything more complicated — steps forward.

“The reason this matters — and this is crucial, Jim, because you always tell this story as ‘they removed biodiversity’ and that’s true but it’s not deep enough—”

“Oh here we go,” mutters Levin.

“—is that the forest’s living architecture wasn’t just useful. It was organisational. The mycorrhizal network was the forest’s information processing system. Warning signals about pest attacks propagating through the root network. Resources redistributed from healthy trees to stressed ones. The forest was performing a kind of distributed computation, and it was organised around constraints that existed in the relationships between species, not in any individual species.”

“What kind of constraints?” says Todd, because he is paid to ask questions even when he suspects the answers will make his headache worse.

“The kind that don’t physically exist anywhere but shape the dynamics of everything. The forest had a collective goal — maintaining its own viability — that wasn’t located in any tree, wasn’t programmed into any root, wasn’t specified by any forester. It emerged from the network. It was, if you’ll permit me the term—”

“Don’t say it,” says Levin.

“—teleological.”

“He said it.”

“TELEOLOGICAL behaviour! Goal-directed! The forest-as-a-whole was navigating toward stable states that no individual tree was aiming for, and the navigation was happening through the very networks that the Prussians couldn’t see and therefore destroyed. This is not a metaphor for what’s about to happen with AI governance. It is a structural description of the same failure mode.”

Sandra: “Todd. Translation: the forest wasn’t just a collection of trees. It was a living system with its own collective behaviour that emerged from the connections between trees. The Prussians’ framework measured trees. The system failed at the level of connections. Their dashboard said everything was fine right up until the forest died. That’s a systemic risk. Not A causes B causes C. The topology fails.”

“And my risk assessment framework—”

“Measures trees.”

Brasília

The forest dissolves. Todd’s stomach makes a formal complaint. When the world reassembles, he’s floating above a city that looks like someone solved an equation and poured concrete on the answer.

Brasília. He recognises it from — actually, he doesn’t know where he recognises it from. Maybe Sandra sent him something. She does that.

The monumental axis stretches to the horizon. Everything is separated into zones. Residential. Commercial. Government. Traffic flow calculated. Sight lines optimised. From above, it’s either an airplane or a cross, depending on how much architecture school you’ve survived.

It’s beautiful. It’s also, somehow, the same kind of horrible as the forest. The same too-clean silence. The same absence of mess.

“Where is everyone?” says Todd.

“In the bits nobody designed,” says Scott.

The VR pulls Todd down toward street level, and the city splits in two. On the left, the planned core holds still — wide boulevards cutting a perfect grid, identical blocks separated by calculated distances, streets so straight they look ruled onto the earth. On the right, a different city altogether. Streets that curve because someone needed to get to the bakery. Roads that fork and rejoin for no reason except that two neighbours built walls at slightly different angles. Buildings pressed against each other like passengers on the Tube. Markets spilling out of doorways. Laundry on balconies.

The grid is silent. The sprawl is alive.

Left: the city someone designed. Right: the city people built. Two and a half million people live in Brasília’s satellite cities — the parts nobody planned. The parts that work.

“Oscar Niemeyer and Lúcio Costa,” says Scott. “Designed a whole capital city from scratch in 1956. Every function separated into its own zone, every flow optimised. It was supposed to be the most rational city ever conceived, with two hundred thousand people in the planned core.”

“And the other bit?”

“Two and a half million. In the settlements nobody drew. With the corner shops and the street life and the walkable neighbourhoods and the community structures — all the things that make a city a city, and that the design optimised away because they weren’t in the model.”

“Because they’re the between again,” says Levin. “The city that works is the one that grew in the connections between the designed elements. It’s developmental, Jim, I keep saying this — Costa thought he could specify the mature form of a city from initial conditions, but a city is a developmental system, it discovers its own organisation through—”

“Michael, not everything is embryology—”

“This IS embryology! A developing embryo doesn’t work from a blueprint! The cells navigate toward the target form through local interactions! The collective discovers its own organisation! You can’t specify a city from above any more than you can specify an organism from a genome—”

“The genome analogy breaks down because a city has politics, Michael, there are power dynamics—”

“Power dynamics ARE developmental! Morphogenetic fields are—”

“STOP,” says Deacon, and even the simulation of James Scott shuts up. “You’re both right and you’re both being annoying about it. The structural point is this: the designed substrate — the plan, the mechanism, the genome — specifies constraints. What grows within those constraints has its own logic. Its own organisational dynamics. Its own emergent goals. You can design Brasília. You cannot design what Brasília becomes. That gap — between what you design and what grows — is where Todd’s systemic risks live.”

Todd has been looking at the two panels. The grid and the sprawl. One designed. One discovered.

“So the risk framework,” he says, slowly, not because he’s understanding but because he’s starting to see the shape of what he doesn’t understand, “measures the plan. It measures the mechanism. A causes B causes C. But the risk isn’t in the mechanism. It’s in what grows on the mechanism.”

“Now show him the Soviet Union,” says Sandra. “Before he loses it.”

“I’ve already lost it.”

“You’re doing fine. Soviet Union. Go.”

Central Planning

The geometry misbehaves. Todd arrives in a planning office that was either designed by M.C. Escher or generated by an AI that was asked to visualise ‘bureaucratic hubris.’ Staircases go in directions that staircases should not go. Input-output matrices cover blackboards that curve back into themselves. A portrait of Leonid Kantorovich — Nobel laureate, inventor of linear programming — hangs at an angle that suggests even the wall is uncertain about its commitments.

The three academics are already there, already arguing, already standing on different impossible staircases.

“—the Gosplan case is the purest example because they literally tried to specify every input-output relationship in an entire economy—”

“Sixty thousand product categories,” says Scott. “Centrally planned. Targets set. Resources allocated. The entire Soviet economy as an optimisation problem.”

“And it produced numbers,” says Deacon, who is standing on a staircase that appears to be going both up and down simultaneously. “Beautiful numbers. Targets met. Production quotas filled. The official economy was a masterwork of engineering.”

“And the actual economy?” says Todd.

“The actual economy,” says Scott, and he’s suddenly serious, the tweed-and-wine performance dropping for a moment, “ran on blat. Favours. Informal networks. Factory managers lying about their production capacity to create slack in the system. Shadow supply chains. Personal relationships doing the work that the plan couldn’t do because the plan couldn’t process enough information to actually coordinate an economy.”

Levin groans. “Oh no. Are we doing Hayek? Jim, please tell me we’re not about to do Hayek.”

“We are briefly doing Hayek.”

“Every libertarian with a podcast has done Hayek. The comment section is going to—”

“The comment section can cope. Todd, bear with me. This is the single most over-rehearsed argument in the history of economics, and I’m going to do it in ninety seconds, and the reason I’m doing it is that both sides got the punchline wrong.”

“I don’t know who Hayek is,” says Todd, and Levin mouths lucky you behind Scott’s back.

“Friedrich Hayek. Austrian economist. 1945. His insight — and I’m saying this with full awareness that it’s been turned into a bumper sticker by people who’ve never read him — is that knowledge in an economy is distributed. The factory manager in Omsk knows things about Omsk that no planner in Moscow can know. The baker knows what her street needs. The engineer knows which machine is about to break. This knowledge isn’t just difficult to centralise. It’s impossible to centralise. There’s too much of it, it’s too local, it changes too fast, and half of it is tacit — people know things they can’t articulate.”

“So a central plan—”

“A central plan takes all those local nodes — thousands, millions of them, each processing local information, each connected to the nodes around them — and replaces the whole network with a single point. One red dot in Moscow that every spoke has to feed into and every instruction has to flow out from.”

As Scott speaks, the VR renders the diagram on the blackboard. On the left, a distributed network — blue nodes connected by dense orange edges, information flowing locally between neighbours, no centre, no hierarchy, the whole thing humming with lateral connections. On the right, the same nodes rearranged into a spoke pattern, every connection severed except the line running to a single swollen red node at the centre. The orange peer-to-peer links reduced to ghost traces. Everything funnelled through one point.

Left: how knowledge actually lives in an economy — distributed, local, lateral. Right: what central planning requires — everything routed through one node. The red dot is not evil. It is simply overloaded. This has been pointed out before. You may have heard.
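
A rough way to put numbers on this, sketched with the networkx library and illustrative parameters of my own choosing (nothing below comes from the scene itself): compare how much of the coordination traffic the busiest node has to carry when everything is routed through a hub, versus when the connections stay mostly local and lateral.

```python
# Hedged sketch, not from the post: betweenness centrality as a rough proxy
# for how much coordination load a single node carries. The sizes and
# parameters (201 nodes, k=6, p=0.1) are arbitrary illustrative choices.
import networkx as nx

n_spokes = 200

# "Gosplan" topology: every exchange is routed through a single hub (node 0).
hub_and_spoke = nx.star_graph(n_spokes)

# "Lateral" topology: the same number of nodes, mostly-local connections
# with a few long-range shortcuts (a small-world network).
lateral = nx.connected_watts_strogatz_graph(n_spokes + 1, k=6, p=0.1, seed=0)

# Betweenness centrality: the fraction of shortest paths passing through a
# node, i.e. how badly things jam if that node is overloaded or removed.
hub_load = max(nx.betweenness_centrality(hub_and_spoke).values())
lateral_load = max(nx.betweenness_centrality(lateral).values())

print(f"Busiest node, hub-and-spoke: sits on ~{hub_load:.2f} of all paths")
print(f"Busiest node, lateral network: sits on ~{lateral_load:.2f} of all paths")
# The hub sits on essentially every path between spokes (close to 1.0); no
# node in the lateral network comes anywhere near that. Same nodes, same
# needs, a completely different bottleneck.
```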

“And what happens,” says Todd, “when there’s too much information for one node?”

“It does what any cell does under metabolic stress,” says Levin immediately. “It simplifies its—”

“Michael, it’s an economy, not a cell—”

“It IS a cell! Or it’s like a cell! The central planner is a cell trying to process the signalling environment of an entire tissue and it doesn’t have the receptor bandwidth, so it defaults to—”

“What he’s trying to say,” says Scott, physically stepping between Levin and the blackboard, “is that the node makes things up. Not maliciously. It simplifies. It has to. It’s one node trying to do the work of millions. So it uses proxies. Quotas. Targets. Tonnes of steel.”

“Morphogenetic defaults,” mutters Levin.

“If you say morphogenetic one more time I’m—”

“And the actual economy?” says Todd. “The one that needs, like, bread?”

“The one that needs bread in Omsk and ball bearings in Vladivostok routes around the bottleneck. Informally. Through blat. Through personal connections. Through the factory manager who calls his cousin instead of filing a requisition form. Through the orange connections that the plan says don’t exist.”

“So the shadow economy is—”

“—it’s the lateral connections reasserting themselves,” says Levin, who has apparently decided that if he can’t say morphogenetic he’ll find another way in. “This is what happens in regeneration too, when you sever a planarian and the remaining tissue has to re-establish communication pathways—”

“We are not,” says Scott, “comparing the Soviet economy to a flatworm.”

“I’m comparing the information architecture of—”

“He’s actually not wrong,” says Deacon, which makes both Scott and Levin turn toward him with matching expressions of suspicion. “The structural point holds. When you cut the lateral connections in any distributed system — biological, economic, social — the system either re-grows them informally or it dies. The Soviets got blat. A flatworm gets a new head. The mechanism is different. The topology is the same.”

“Thank you, Terrence, that was very—”

“I’m not on your side, Michael. I’m saying you stumbled into the right structure using the wrong analogy. As usual.”

Todd has been staring at the diagram on the blackboard. The dense orange network on the left. The hub-and-spoke on the right. Something is nagging at him.

“Hang on,” he says. “The Hayek thing. The market thing. His answer was: replace the planner with price signals. Let the market do the coordination. But that’s still just—” He points at the right side of the diagram. “That’s still a hub, isn’t it? The price signal is the hub. Everything gets routed through buy and sell instead of through plan and allocate, but it’s still—”

Scott smiles. The first genuine one Todd has seen. “Keep going.”

“It’s still a single coordination mechanism. You’ve just changed the colour of the red dot.”

“That,” says Scott, “is the part that Hayek got right and his fans get catastrophically wrong. He diagnosed the problem — centralised knowledge processing fails — and then prescribed a different centralised knowledge processor. A more efficient one, sure. Better at some things, worse at others. But still one mechanism trying to do the work of a network.”

“So the question isn’t planning versus markets—”

“The question is: what happens to the distributed knowledge when you reorganise the network? And nobody in 1945 was asking that question because they were all too busy arguing about ideology instead of topology.”

“I want it noted,” says Levin, “that I have been saying this about cell signalling for—”

“NOTED, Michael.”

Sandra, in Todd’s ear: “He’s saying the shape of the information network matters more than the ideology running it. File that. It comes back.”

“And when someone tried to fix the official system by removing the unofficial one—”

“Gorbachev,” says Scott. “Anti-corruption campaigns. Stricter enforcement. More rigorous adherence to the plan. He looked at the blat networks and saw corruption. Waste. Disorder. Mess.”

“The same mess the Prussians saw in the old-growth forest,” says Deacon.

“The same mess that Costa and Niemeyer zoned out of Brasília,” says Levin.

“He cut the planarian in half,” says Todd, and immediately looks surprised at himself.

Levin points at him with both hands. “YES. THANK you. He cut the—”

“I cannot believe we’re doing the flatworm,” says Scott.

“He severed the lateral connections! And unlike a planarian, the Soviet economy couldn’t regenerate them fast enough! Because Gorbachev was also tightening enforcement, which is like — Jim, work with me here — it’s like cutting the planarian and also suppressing the wound-healing signals—”

“The economy isn’t a flatworm, Michael!”

“The TOPOLOGY is the SAME!”

“He’s right,” says Deacon, and Scott throws his hands up.

“Fine. Fine! He removed the informal networks. And everything collapsed. Because the mess was the distributed system doing the work the central node couldn’t. Remove it, and all you’re left with is an overloaded red dot trying to coordinate an entire economy through a straw. Is everyone happy now? Can we stop talking about flatworms?”

“Planaria,” says Levin.

“I will end you.”

Silence. Even the impossible staircases seem to hold still for a moment.

“He killed the mycorrhizal network,” says Todd.

Everyone looks at him.

“I mean — the principle. He removed the distributed system because the centralised framework told him it was waste. Same as the Prussians. Same as the city planners. The Prussians killed the network to make rows. The planners killed the sprawl to make a grid. And the Soviets killed the lateral connections to make a hierarchy. Three different shapes, same operation: take a distributed system, force it through a single point, lose everything the single point can’t see.”

Sandra, in his ear, very quietly: “Yes. That’s it.”

Todd looks at the three academics. The Escher staircases have settled into something almost normal, as if the geometry is calming down along with the argument. Levin is still quietly triumphant about the planarian. Scott is pretending to be annoyed. Deacon is watching Todd with an expression that suggests he’s been waiting for this question.

“Okay,” says Todd. “So the networks matter. The distributed bit is load-bearing. Every time we centralise it or formalise it or remove it, things collapse. I get that. But—” He stops. Thinks. “But you can’t just leave it alone, can you? The old-growth forest was fine because nobody was trying to coordinate it into producing timber. But we actually need economies to produce things. We actually need cities to function. You can’t just say ‘don’t touch the network’ and walk away.”

“No,” says Scott, and he looks at Todd differently now. “You can’t.”

“So has anyone actually figured out how to do this? How to work with the distributed thing without killing it?”

The three academics exchange a look. It’s the first time they’ve agreed on something without arguing about it first.

And then Sandra does something she hasn’t done all session. She breaks in. Not in Todd’s ear — in the room, her voice coming through the VR’s spatial audio as if she’s suddenly standing among them, and there’s something in her voice that Todd has never heard. Not quite anger. Something older than anger.

“There was someone,” she says. “Someone who understood formally, mathematically, practically that you cannot govern a distributed system by centralising it, and that the answer is not to leave it alone either. There’s a third option. And I have been waiting nine years for someone in this department to ask about it.”

“Stafford Beer,” says Deacon.

“Stafford Beer.”

Project Cybersyn

Todd: “Who—”

“Management cybernetics,” says Sandra, and she’s speaking faster now, like a dam breaking. “The Viable System Model. The insight is that any viable system has the same recursive structure — autonomous units at every level, each level self-regulating, feedback loops everywhere. You don’t control it from above. But you don’t abandon it either. You create the conditions for it to regulate itself. Because no external controller can model the system’s own complexity — the system is always more complex than any model of it. That’s Ashby’s Law, 1956, the law of requisite variety, and it is the single most important idea in governance that nobody in governance has ever heard of.”
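
(An aside, and my gloss rather than anything Sandra says: one standard entropy-based statement of the law of requisite variety is

$$H(\text{outcome}) \;\ge\; H(\text{disturbance}) - H(\text{regulator}),$$

i.e. a regulator can only remove as much variety from the outcomes as it has variety of its own. A single coordination point with less variety than the system it regulates cannot, even in principle, hold that system steady.)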

A 3D rendering of a description of Project Cybersyn’s operations room. Santiago, 1971. Designed by Stafford Beer for Salvador Allende’s government. A room built to govern a living system as a living system. It was burned in a coup two years later.

The screens are alive. And on them, Todd sees the distributed network — not collapsed into a hub-and-spoke, not funnelled through one red dot. The orange connections between nodes are intact, visible, flowing. Factory output data streaming in from the regions, but not to a central planner — to each other. Local patterns feeding into regional patterns feeding into national dynamics, with the information staying distributed, the lateral connections preserved. Beer’s control room wasn’t a command centre. It was a window onto the network.

“Beer built this,” says Sandra. “For Chile. Under Allende. Project Cybersyn. A national economic coordination system based on cybernetic principles. Real-time factory data flowing up. Policy signals flowing down. Workers maintaining autonomy at the local level. The system was designed to preserve the distributed knowledge — the informal dynamics, the local information, the lateral connections — and make them visible without centralising them. He solved the problem that Hayek said was unsolvable and the Soviets proved was unsolvable. And he did it by changing the network topology.”

“What happened?” says Todd.

“September 11th, 1973. Pinochet, CIA-backed coup. They burned the operations room.”

The control room begins to darken. The screens flicker. The orange distributed network stutters and collapses — node by node, connection by connection — until it rearranges itself into a hub-and-spoke. A different red dot this time. Not Moscow. Chicago.

“Chile got Milton Friedman’s Chicago Boys instead — free market optimisation, deregulation, treat the economy as a problem solvable by one mechanism, the price signal, routed through one kind of node, the market. It’s a different ideology but the same network topology, everything funnelled through a single coordination point.”

“That’s—”

“A different colour of hub-and-spoke. Again. We had someone who understood how to govern distributed systems as distributed systems. We burned his control room and replaced it with a different bottleneck.”

The control room goes dark.

“Government-mandated bottleneck,” says Sandra, and twenty-three years of professional composure cracks, just slightly, just for a moment, before she puts it back together.

Todd takes the headset off. Conference room. Fluorescent lights. The HVAC hum.

Sandra appears in the doorway with fresh tea and a stack of highlighted papers.

“I’ve rewritten your slides,” she says.

“Of course you have.”

“Slide seven is blank.”

“Why is seven blank?”

“Because it’s the honest answer. We don’t have the science yet. That’s what you’re asking them to fund.”

Todd takes the tea. Looks at the slides. Looks at Sandra.

“Why aren’t you doing the committee presentation?”

Sandra smiles the smile of a woman who has been asked this, in various forms, for twenty-three years.

“Because they don’t listen to secretaries, Todd. They listen to men in suits. The system can’t see where its own knowledge lives.”

She pauses.

“Same problem all the way down.”

Conclusion

Todd is fictional. The problem isn’t.

We are integrating artificial intelligence into the coordination systems that run human civilisation — markets, democracies, information ecosystems, institutional decision-making — and our frameworks for evaluating the safety of this process examine components one at a time. We assess individual AI systems for alignment, capability, and risk, then assume that safe components produce safe collectives. This is the logic of Prussian forestry applied to sociotechnical systems, and the 20th century ran the experiment on what happens next.

The difficulty is that the alternative isn’t obvious. “The system is complex, leave it alone” isn’t governance. Stafford Beer understood this — Cybersyn wasn’t a policy of non-intervention, it was a proper attempt to see distributed dynamics without collapsing them into a central model. But Beer’s work was cut short, and the field never fully developed the tools he was reaching for. So the question remains open: what would it actually mean to govern a living system as a living system?

To answer that, we first have to confront something uncomfortable. The three case studies in this piece — forests, cities, economies — all display the same pattern: a collection of components that, through their interactions, become something more than a collection. The old-growth forest wasn’t just trees near each other. It was a system with its own collective behaviour, its own capacity to respond to threats, its own ability to redistribute resources where they were needed. It had, in a meaningful sense, agency — not because anyone designed that agency into it, but because it grew.

This is the deep question hiding behind all the governance talk. When does a collection of things become an agent with its own goals? A salamander’s cells, each just trying to maintain their local chemistry, somehow collectively rebuild a missing limb — and they build the right limb, correctly proportioned, properly wired. No cell has the blueprint. No cell is in charge. The limb-level goal emerges from the network of interactions between cells, from the information flowing through chemical gradients and electrical signals and mechanical pressures. The goal lives in the between.

We can watch this happen in biology, in ant colonies, in neural systems, in markets. But we cannot yet explain it. We have no general theory of how local behaviours compose into collective agency, no way to predict when it will happen, no principled account of what makes it robust versus fragile. And this gap matters enormously right now, because we are running the experiment in real time.

When AI trading agents participate in financial markets alongside humans, what is the market becoming? Not just “a market with faster traders” — the collective dynamics change qualitatively as the ratio of AI to human participants shifts. When large language models mediate human discussion, summarising arguments and surfacing consensus, the AI isn’t just transmitting information neutrally — it’s becoming part of the coordination substrate itself, reshaping what the collective can see and think. When recommendation algorithms determine what information reaches which people, they’re not just tools that individuals use — they’re agents within the collective, shaping its emergent behaviour in ways nobody designed or intended.

At what point do these hybrid systems develop their own agency? Their own goals? And if they do — and the history of every collective system suggests they will — how would we even know? Our frameworks measure individual components. The collective agency lives in the connections between them, exactly where we’re not looking.

This is where the two paradigms collide. Almost everything we know about building AI systems comes from what you might call the engineering paradigm: define your agents, specify their objectives, design the mechanism, prove properties. This works beautifully when you can determine everything in advance. But the systems we’re actually creating are growing systems — they will discover their own organisation, develop their own emergent goals, find their own boundaries. We’re using tools designed for building bridges to tend something that behaves more like a forest.

The growth paradigm — the one that developmental biologists and complex systems researchers live in — understands this. It watches how collective intelligence emerges from local interactions, how agent boundaries form and dissolve, how the whole becomes genuinely more than the sum of its parts. But it’s largely descriptive. It can tell you what happened. It struggles to tell you what to build.

What we need is something that doesn’t exist yet: a framework that’s precise enough to guide engineering but flexible enough to capture emergence. Mathematics that can answer questions like: where, in a complex system, do the real agents live? How do simple local goals — each trader pursuing profit, each algorithm optimising engagement — compose into collective goals that nobody specified and nobody controls? When does a collection become a collective, and what makes that transition stable or fragile?

We believe these are precise, tractable questions that can be formalised with the right mathematics.

Information theory already gives us tools for measuring when a whole contains more than its parts. Causal Emergence theory can identify the scale at which a system’s behaviour is most predictable — and that scale is often not the level of individual components. Active Inference provides a framework for understanding agency in terms of statistical boundaries rather than programmer intentions. Category Theory offers a language for how simple operations compose into complex ones.
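
To make the causal-emergence claim concrete, here is a minimal toy sketch (my own example in Python, in the style of Hoel’s effective information, not code from any researcher or framework named above): eight noisy micro-states whose two-state coarse-graining is perfectly deterministic, so the macro description carries more causal information than the micro one.

```python
# Hedged toy example of causal emergence. Assumptions: Hoel-style effective
# information, an 8-state micro system coarse-grained into 2 macro states.
import numpy as np

def effective_information(T):
    """Mutual information between cause and effect for transition matrix T,
    with the cause set to the uniform (maximum-entropy) intervention."""
    effect = T.mean(axis=0)  # effect distribution under the uniform intervention
    def kl(p, q):
        mask = p > 0
        return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))
    return float(np.mean([kl(row, effect) for row in T]))

# Micro scale: states 0-6 hop uniformly (noisily) among themselves,
# state 7 maps deterministically to itself.
T_micro = np.zeros((8, 8))
T_micro[:7, :7] = 1 / 7
T_micro[7, 7] = 1.0

# Macro scale: lump states 0-6 into one macro state. The coarse-grained
# dynamics become deterministic: A -> A, B -> B.
T_macro = np.eye(2)

print(f"Effective information, micro scale: {effective_information(T_micro):.3f} bits")
print(f"Effective information, macro scale: {effective_information(T_macro):.3f} bits")
# Roughly 0.54 bits at the micro scale versus 1.00 bit at the macro scale:
# the scale at which the system's behaviour is most predictable is not the
# scale of the individual components.
```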

The pieces exist, scattered across a dozen fields that don’t talk to each other. Developmental biologists who watch collective agency emerge every day in growing embryos. Physicists who study phase transitions — the critical points where systems suddenly reorganise. Neuroscientists who understand how neural collectives become unified minds. Social scientists who observe markets and democracies developing emergent properties in the wild. Mathematicians who prove deep structural connections between apparently different frameworks.

Nobody has put these pieces together. We don’t really know why, but we suspect it’s partly because the question that connects them hasn’t been asked clearly enough (or at all).

Here it is, as plainly as we can state it: when AI systems join human collectives at scale, what kind of collective agents will emerge, and how do we ensure they remain ones we’d want to live inside?

That’s what slide seven is asking for. Not better evaluation of individual AI systems — we have people working on that, and they’re good at it. Not “leave the system alone and hope for the best” — Beer showed us that active governance of living systems is possible, before his control room was burned. What we need is the science of collective agency itself. The basic research that would let us understand how collections become agents, predict when it will happen, and develop the equivalent of Beer’s Cybersyn for a world where the collective includes artificial minds.


This is the first in a series on collective agent foundations. The next post goes deeper into the mathematics underlying these questions — how information theory, causal emergence, active inference, and category theory each offer different lenses on the same problem, where those lenses converge, and where they point to open questions that no single field can answer alone.

You can follow this series on our Substack (or in this LessWrong sequence), and find out more about our research at Equilibria Network.
