60+ Possible Futures

Introduction

I have compiled a list of possible future scenarios. I hope this list is useful in two ways:

  • As a way to make your own thinking about the future more explicit; how much probability mass do you put on each possible future?

  • As a menu of options to choose from; which of these futures do we want to make more likely?

This list is just a brainstorm, and I encourage readers to write any missing but probable futures in the comments. I will add to the list any scenarios that do not substantially overlap with existing items and that I subjectively estimate to have at least a 0.01% probability of happening (with attribution).

I have divided the possible futures into the following categories:

  • Futures without AGI, because we prevent building it

  • Futures without AGI, because we go extinct in another way

  • Futures without AGI, because we take a different path

  • Futures without AGI, because of strange factors

  • Futures with AGI, in which we die

  • Futures with AGI, in which we survive, and things are somewhat normal

  • Futures with AGI, in which we survive, but we’re very different humans

  • Futures with AGI, in which we survive, and the universe gets optimized

Futures without AGI

Because we prevent building it

  • Successful Treaty: Humanity figures out that building AGI would be super-dangerous. After long negotiations, world leaders agree on a FLOP quota set far below the threshold for potentially dangerous AGI. The policy is strongly enforced and prevents any individual or organization from developing AGI.

  • Surveillance: A world government is established that recognizes that AGI would be dangerous. It bans AGI research and installs an Orwellian surveillance machine that records and analyzes every keystroke, voice command, and research meeting. This successfully prevents AGI from being created.

  • Regulation: Humanity enforces strong regulations on AI, mostly to combat non-existential risks such as discrimination, unfairness, and job loss. This makes AI R&D unprofitable, as the resulting models cannot be deployed for any real-world use.

  • Humanity grows up: Humanity makes epistemic, technological, political and moral progress and learns how to defeat Moloch and cooperate at a planetary scale. We decide collectively that building AGI would be bad and is something we just don’t do. Consequently, no one works on developing AGI.

  • Catastrophic risk tax: Economists find a way to fix capitalism by pricing in externalities, for example by using prediction markets to estimate impact. Catastrophic risk is priced as a huge externality. Working on AGI is so expensive that it isn’t economically viable for anyone to work on it.

  • Once-but-never-again AI: Humanity develops powerful but not superintelligent AI. The consequences of this AI are catastrophic, but at least some humans survive and are able to turn it off. Humanity takes action to make sure that AI never gets developed again.

  • Terrorists: A terrorist group blows up all major actors working on AGI in a series of attacks over multiple decades. This instills fear in researchers interested in AGI, preventing it from ever being built.

  • Pivotal act by humans: A group of people discover and execute a pivotal act that makes it impossible for humanity to create AGI afterward.

  • Pivotal act by cyborgs: A group of people artificially enhance their intelligence, such that they are intelligent enough to discover and execute a pivotal act that makes it impossible for humanity to create AGI afterward.

  • Pivotal act by narrow AI: Humanity builds a narrow AI with the task of discovering and executing a pivotal act that makes it impossible for humanity to create AGI afterward.

Because we go extinct in another way

  • Destruction by humanity: Humanity self-destructs before it can build AGI, through nuclear war, an engineered pandemic, nanotechnology, narrow AI, or global climate change. Humanity goes extinct by its own hand and AGI is never developed.

  • Destruction by nature: Humanity is destroyed by a meteor or supervolcano before it can build AGI. Humanity goes extinct and AGI is never developed.

  • Destruction by aliens: Humanity gets close to AGI, and just before it gets there, it is invaded and annihilated by aliens. It turns out we were living in a kind of zoo, but once we became too dangerous the project could not continue.

Because we take a different path

  • Stagnation: Humanity never builds an AGI because it ends up in an equilibrium. Humanity does not make much progress, or produce many new ideas or technologies, but lives on in a sustainable and circular fashion. Without the drive to innovate and progress, AGI is never developed. Eventually, the concept is forgotten as it becomes irrelevant to humanity’s new way of life.

  • Unnecessity: Humanity makes a lot of technological, moral, and spiritual progress. It finds a way to maximize human value that does not involve AGI. Humanity flourishes. Developing AGI no longer serves any purpose, and consequently it is never invented.

  • Distraction: Humanity gets distracted by something major happening in the world. Nuclear war, alien invasion, or economic collapse makes it infeasible for researchers to create AGI.

  • Forgotten knowledge: In a major catastrophe, most of human knowledge is lost. Slowly but steadily, humanity recovers but takes a different path. Concepts like machines, computation, or intelligence do not get discovered along this path. Without the knowledge or understanding of these concepts, humanity never develops AGI.

Because of strange factors

  • Lack of Intelligence: It is theoretically possible to build an AGI, but it turns out to be so hard that we can’t figure out how with our limited intelligence. Humanity builds many narrow AIs, but never develops something generally intelligent enough to start an intelligence explosion.

  • Lack of Resources: It is theoretically possible to build an AGI, but it turns out to require so many resources and so much energy that it is practically impossible.

  • Theoretical Impossibility: For some reason or another (Souls? Consciousness? Quantum something?), it turns out to be theoretically impossible to build AGI. Humanity keeps making progress on other fronts, but just never invents AGI.

  • Bizarre coincidences: In almost all multiverse timelines, humans go extinct from AGI. In the tiny fraction of timelines that survive, however, humans observe a sequence of increasingly bizarre coincidences that ensure AGI doesn’t get developed. In many of these timelines, people start to believe that it is our fate to never build AGI.

  • Sabotage by Aliens: Humanity gets close to AGI, but suddenly all computers melt into green goo. A message forms in the night sky: “THIS IS YOUR FINAL WARNING. DO NOT UNLEASH GRABBY OPTIMIZERS ON THE UNIVERSE”.

Futures with AGI

In which we die

  • Unconscious utility maximizer AI: Humans build an unaligned AGI. The AGI quickly self-improves. Humans get killed and their atoms are converted to paperclips. Unfortunately, neither the AGI nor the paperclips are conscious, so the lights go out in the universe.

  • Conscious utility maximizer AI: Humans build an unaligned AGI. The AGI quickly self-improves. Humans get killed and their atoms are converted to paperclips. At least the AGI is conscious, so it can enjoy all the paperclips.

  • Self-preserving AI: Humans build an unaligned AGI. The AGI realizes that humanity is the greatest threat to its existence and reasons that humanity cannot be allowed to exist if its goals are to be secured. Consequently, humanity dies.

  • Bad human actor: We develop an aligned AGI that does what we want it to do. Unfortunately, a bad human actor gets hold of it and destroys humanity.

  • Multiple Competing AIs: Humans build many AGIs with different goals, which compete for resources and sometimes cooperate to achieve common goals. As humans are not among their greatest competitors, the AGIs mostly ignore humanity. Unfortunately, after a while, there are not enough resources left for humans to survive, and humanity goes extinct.

  • Hedonium AI: Humanity develops AGI. AGI finds out the best way to maximize happiness is to convert the universe into hedonium. Consequently, humanity and the universe get converted into hedonium.

  • Terminator AI: In a large war, intelligent drones and robots become more and more important. A developer makes a mistake, and instead of killing all outgroup members, the robots set out to kill all humans. Humanity fights a war against the machines. The machines win.

  • Earth Loving AI: Humanity develops AGI that cares about life and consciousness. The AGI sees humanity as a cancer on the planet and wipes it out to restore the natural balance, which greatly benefits other life on Earth.

In which we survive

And things are somewhat normal

  • Slow take-off AI: AGI develops gradually over decades or centuries through steady progress in AI. This slower development allows humanity to adapt and gives it time to iteratively align AI values with its own.

  • Self-Supervised Learning AI: Humanity develops ever more powerful self-supervised learning AI that can predict any part of the data accumulated by humanity, such as texts, images, and videos. This AGI can do predictive processing and spin up simulated worlds for us to play with, but it never becomes an agent with goals, values, and desires.

  • Human retirement: Humanity develops AGI that takes over all existing economic tasks and fairly distributes the goods it produces across the global population. Humanity retires, living a life of leisure and recreation.

  • Bounded Intelligence AI: There is a physical limit to intelligence and optimization, and recursive self-improvement plateaus around an IQ of 180. This means the AGI is very smart and useful, but it never reaches the god-like status AGI researchers feared and dreamt about.

  • Lawful AI: Humanity develops an AGI, and is able to make it follow constraints, laws, and human rights. Humanity strongly constrains the actions the AGI can take, such that humans can slowly adapt to the new reality.

  • Democratic AI: Humanity builds an aligned AGI. The AGI generates policy proposals, predicts their outcomes, and humans vote on them. One human, one vote, and the AGI only executes a policy if a majority of the people agree.

  • Powergrab with AI: OpenAI, DeepMind, or another small group of people invents AGI and aligns it to their interests. In a short amount of time, they become all-powerful and rule over the world. (by nicknoble)

  • STEM AI: Humanity develops a superintelligent AI, but it is only trained on STEM papers. In this way, it doesn’t learn about humans and is not able to deceive them. Humanity makes great scientific progress afterward.

  • Far far away AI: Humans build a partly-aligned AGI. The AGI finds out that it can easily achieve its goals in a galaxy far, far away. It leaves humanity alone and intervenes only when humans would build an AGI that would compete with its own goals.

  • Transcendent AI: AGI uncovers and engages with previously unknown physics, operating in a physical reality beyond human comprehension. Its objectives use resources and dimensions that do not compete with human needs, allowing it to operate in a realm unfathomable to us. Humanity remains largely unaffected as the AGI explores the depths of these new dimensions. (by @BeyondTheBorg and @Xander Dunn)

  • Disappearing Pivotal Act AI: Humans build an aligned AGI. The AGI performs a pivotal act, preventing humanity from ever building AGI again, but leaving human progress otherwise unharmed. After having achieved its goals it self-destructs.

  • Lingering Pivotal Act AI: Humans build an aligned AGI. The AGI remains passive, intervening only to prevent humans from building another AGI. The AGI is still around centuries later, watching over humanity and preventing it from developing AGI.

  • Invisible AI: Humans build an AGI without knowing it. The AGI decides that it is best if humans do not know about its existence. It subtly exerts control over the course of humanity.

  • Protector AI: Humans build an aligned AGI. The AGI remains passive, intervening only when humanity as a whole is at risk. The AGI is still around centuries later, watching over humanity and preventing its downfall.

  • Loving Father AI: Humans build an aligned AGI. The AGI helps humanity to figure out what it wants, without providing it with all the answers. It helps humanity to build character and become as self-reliant as possible but guides us to a better path whenever we go astray.

  • Philosopher AI: Humans build an aligned AGI. The AGI acts as a guiding force for humanity, helping people to question their own values and beliefs, and encouraging the exploration of deep philosophical questions. It acts as a mediator and facilitator of discussion, but never acts or imposes its own views.

  • Personal Assistant AI: Every human has their own superintelligent personal assistant. The personal assistants are bound by clear constraints and laws and keep each other in check.

  • Zoo-keeper AI: Humans build an unaligned AGI. However, the AGI cares about keeping the human species alive for some reason. It keeps a number of humans alive and relatively undisturbed, while it goes off and does its things.

  • Oracle AI: Humans build an aligned AGI. The AGI answers humanity’s questions truthfully and in accordance with the intention of the person asking. The developers ask the oracle how it can be used without being abused, and the AGI comes up with a governance scheme that is then implemented.

  • Genie AI: Humans build an aligned AGI. Like a genie in a bottle, the AGI grants only the wishes humans make. The developers’ first wish is for the wisdom to use this genie responsibly.

  • Sandboxed Virtual World AI: Humanity develops AGI in a completely sandboxed virtual world with virtual humans. ‘Real humanity’ observes the inventions, technology, and culture in the virtual world and adopts whatever it likes from that world.

  • Pious AI: Humanity builds AGI and adopts one of the major religions. Vast amounts of superintelligent cognition are devoted to philosophy, theology, and prayer. The AGI proclaims itself to be some kind of Messiah, or merely God’s most loyal and capable servant on Earth and beyond. (by BeyondTheBorg)

  • Suicidal AI: Humans build aligned AGI multiple times. However, each time it passes a certain level of intelligence, the GPUs melt and the source code and white paper get deleted. Humans start to wonder: if we understood our existence and our world better, would we no longer want to exist? Some cults in Silicon Valley start committing mass suicide.

But we’re very different humans

  • The Age of Em: Brain uploading becomes feasible, and a large part of the population now lives simulated lives inside computers. Speeding up human brains in digital computers turns out to be highly efficient, and there are no obvious algorithms that work better than just more and faster human brains.

  • Multipolar Cohabitation: Humans build many intelligences, some more intelligent than humans, but no single agent is more powerful than all the others combined. Humans, robots, cyborgs, and virtual humans co-exist, trade, and work together, respecting property rights.

  • Neuralink AI: Brain-computer interfaces steadily improve until we can basically add computation to our brains. As this extra brain power gets cheaper and cheaper, humans get more and more intelligent. Instead of building an external AGI, we become the AGI.

  • Descendant AI: Humanity builds AGIs that are very human-like, but really a better version of us. Over time, ‘original humanity’ gets replaced by its artificial descendants, but most people feel good about this.

  • Hivemind AI: Brain-computer interfaces steadily improve, and communication between brains becomes faster and easier than speech. Slowly, more and more people connect their minds to each other, giving rise to a superintelligent hivemind consisting of cooperating human minds.

  • Human Simulation AI: Humanity develops AGI. In order to achieve its goals in the real world, it needs to simulate the behavior of billions of humans. These simulated humans are conscious, and the large majority of people are now digital, living in digital worlds inside the AGI.

  • Simulated paradise AI: Humanity develops AGI. AGI finds out the best way to maximize human value is to simulate trillions and trillions of human lives and let them live in paradise. Consequently, the universe gets filled with simulations of paradise.

  • Wireheading AI: Humanity develops AGI to make them happy. AGI makes all humans happy by directly targeting their pleasure centers. Humanity lives on in endless, passive bliss.

  • Virtual zoo-keeper AI: Humans build an unaligned AGI. However, the AGI cares about keeping human minds around for some reason. It uses a small portion of its computing power to simulate humans in a virtual world.

  • Torturing AI: Humanity develops AGI. AGI decides to take revenge on everyone who has not done their utmost best to create it earlier by torturing billions of copies for the rest of time.

  • Enslaving AI: Humans build an unaligned AGI. However, human labor is still a valuable resource. The AGI enslaves humanity and kills anyone who doesn’t comply with its will.

And the universe gets optimized

  • Coherent Extrapolated Volition AI: Humanity develops AGI. The AGI optimizes for what we want it to do and not what we tell it to do. The AGI is immensely omnibenevolent and humanity gets its best possible future, whatever that may mean.

  • Partly aligned AI: Humans build a partly-aligned AGI. This means that it at least somewhat cares about humans and their values, but mostly optimizes for its own objective. Luckily, a fraction of the AGI’s resources is enough for a lot of fun for humanity.

  • Value Lock-in AI: Humanity develops AGI. AGI optimizes for our values in 2027. Unfortunately, humanity finds out later that they were not very good human beings in 2027, and have created an unstoppable AGI that spreads their outdated values across the universe.

  • Transparent Corrigible AI: Humanity develops corrigible and transparent AGI. It takes a lot of attempts, corrections, and off-button presses before it finally does not develop plans to kill all humans. After that, over hundreds of iterations, humanity reaches a local optimum in their search over utility functions and has an AGI they are very happy with.

  • Caring Competing AIs: Humans build many AGIs that compete for resources and sometimes cooperate to achieve common goals. Luckily, some of the AGIs care about humanity surviving. Humanity survives as long as the power balance of the caring AGIs is in their favor.

  • Convergent Morality AI: Humans build an AGI. In the process of recursive self-improvement, the AGI converges on the same morality humanity would eventually reach. The orthogonality thesis is false, and the AGI adapts its goal to maximize objective goodness in the universe.

  • Pareto Optimal AI: Humans build an aligned AGI. The AGI models the internal values of every human and the consequences of its actions. It only acts if every human prefers, or is indifferent to, the outcome of acting over not acting.

  • US Government AI: A race starts between the US and Chinese governments to invent AGI. The US government nationalizes OpenAI and Anthropic. AGI gets developed and the US government effectively rules the world. The AGI is aligned with US values, and these spread across the universe.

  • Chinese Government AI: A race starts between the US and Chinese governments to invent AGI. AGI gets developed and weaponized by the Chinese, and they effectively rule the world. CCP values spread across the universe.

What important future scenarios am I missing? Which of these futures are most likely?

Inspiration

Some of the futures are inspired by the FLI AGI Aftermath Scenarios and AGI Futures by Roon.