A Possible Future: Decentralized AGI Proliferation

When people talk about AI futures, the picture is usually centralized. Either a single aligned superintelligence replaces society with something utopian and post-scarcity, or an unaligned one destroys us, or maybe a malicious human actor uses a powerful system to cause world-ending harm.

Those futures might be possible. However, there's another shape of the future I keep coming back to, one I almost never see described. The adjectives I'd use are: decentralized, diverse, and durable. I don't think this future is necessarily good, but I do think it's worth planning for.

Timelines and the Short-Term Slowdown

I don't think we're on extremely short timelines (e.g., AGI before 2030). I expect a modest slowdown in capabilities progress.

Two reasons:

  1. Training limits. Current labs know how to throw resources at problems with clear, verifiable reward signals. This improves performance on those tasks, but many of the skills that would make systems truly economically transformative are difficult to reinforce this way.

  2. Architectural limits. Transformers with in-context learning are not enough for lifelong, agentive competence. I think something more like continual learning over long-context, human-produced data will be needed.

Regardless of the specifics, I do believe these problems can be solved. However, I don't think they will be solved before the early-to-mid 2030s.

Proliferation as the Default

The slowdown gives time for “near-AGI” systems, hardware, and know-how to spread widely. So when the breakthroughs arrive, they don’t stay secret:

  • One lab has them first, others have them within a month or two.

  • Open-source versions appear within a year.

  • There isn’t a clean line where everyone agrees “this is AGI.”

  • No lab or government commits to a decisive “pivotal act” to prevent proliferation.

By the mid-to-late 2030s, AGI systems are proliferated much like Bitcoin: widely distributed, hard to suppress, and impossible to recall.

From Mitigation to Robustness

The early response to advanced AI will focus on mitigation: bans, treaties, corporate coordination, activist pressure. This echoes how the world handled nuclear weapons: trying to contain them, limit their spread, and prevent use. For nukes, mitigation was viable because proliferation was slow and barriers to entry were high.

With AI, those conditions don’t hold. Once systems are everywhere, and once attacks (both human-directed and autonomous) become routine, the mitigation framing collapses.

With suppression no longer possible, the central question changes from "How do we stop this from happening?" to "How do we survive and adapt in a world where this happens every day?"

At this point our concerns shift from mitigation to robustness: what does a society look like when survival depends on enduring constant and uncontrollable threats?

Civilizational Adaptations

I don’t think there’s a clean picture of what the world will look like if proliferation really takes hold. It will be strange in ways that are hard to anticipate. The most likely outcome is probably not persistence at all, but extinction.

But if survival is possible, the worlds that follow may look very different from anything we’re used to. Here are two hypotheticals I find useful:

  • Redundancy, Uploading, and Resilience. Uploaded versions of humanity running inside hardened compute structures, massive tungsten cubes orbiting the sun, replicated millions of times, most hidden from detection. Civilization continues not by control, but by sheer redundancy and difficulty of elimination.

  • Fragmented city-states. Human societies protected or directed by their own AGI systems, each operating as semi-independent polities. Some authoritarian, some libertarian, some utopian or dystopian. Robustness comes from plurality, with no single point of failure and no universal order.

I don't think of these conclusions as predictions per se, just sketches of what survival in such a world might look like. They're examples of the kind of weird outcomes we might find ourselves in.

The Character of The World

There isn't one dominant system. Instead there's a patchwork of human and AI societies. Survival depends on redundancy and adaptation. It's a world of constant treachery and defense, but also of diversity and (in some sense) liberty from centralized control. It is less utopia or dystopia than simply a mess. Still, it's a vision of the future that feels realistic, chaotic in the way history actually tends to unfold.