Pivotal Acts Might Not Be What You Think They Are

This article is mainly for people who have not read the pivotal act article on Arbital or who need a refresher. If you have, the most interesting section will probably be “Omniscient ML Researchers: A Pivotal Act without a Monolithic Control Structure”.

Many people seem to match the concept of a “pivotal act” to some dystopian version of “deploy AGI to take over the world”. ‘Pivotal act’ means something much more specific, though, and arguably quite different. I strongly recommend you read the original article, as I think it is a very important concept to have.

I use the term quite often, so it is frustrating when people respond with very strange claims, such as “We can’t just let a powerful AI system loose on the world. That’s dangerous!”, as if that were the defining feature of a pivotal act.

As the original article is quite long, let me briefly summarize what I see as the most important points.

Explaining ‘Pivotal Act’

A pivotal act is an act that puts us outside the existential-risk danger zone (especially the risk from AI) and into a position from which humanity can flourish.

Most importantly, that means a pivotal act needs to prevent a misaligned AGI from being built. Taking over the world is not required per se. If you can prevent the creation of a misaligned AGI by creating a powerful global institution that can effectively regulate AI, then that counts as a pivotal act. If I could prevent a misaligned AGI from ever being deployed by eating 10 bananas in 60 seconds, then that would count as a pivotal act too!

Preventing Misaligned AGI Requires Control

Why, then, is ‘pivotal act’ so often associated with the notion of taking over the world? Preventing a misaligned AGI from being built is a tough problem. Effectively, we need to constrain the state of the world such that no misaligned AGI can arise. To successfully do this, you need a lot of control over the world. There is no way around that.

Taking over the world really means putting oneself into a position of high control. In that sense, it is necessary to take over the world, at least to a certain extent, to prevent a misaligned AGI from ever being built.

Common Confusions

Probably, one point of confusion is that “taking over the world” carries a lot of negative connotations. Power is easy to abuse. Putting an entity[1] into a position of great power can certainly go sideways. But I fail to see the alternative.

What else are we supposed to do instead of controlling the world in such a way that no misaligned AGI can ever be built? The issue is that many people seem to argue that giving an entity a lot of control over the world is a pretty terrible idea, as if there were some better alternative we could fall back on.

And then they might start to talk about how they are more hopeful about AI regulation, as if pulling off AI regulation successfully did not require an entity with a great deal of control over the world.

Or worse, they name some alternative proposal like figuring out mechanistic interpretability, as if figuring out mechanistic interpretability were identical to putting the world into a state where no misaligned AGI can arise.[2]

Pivotal Acts That Don’t Directly Create a Position of Power

There are pivotal acts that don’t require you to have a lot of control over the world. However, every pivotal act I know of will still ultimately need to result in the creation of some powerful controlling structure. Starting a process that ends in the creation of the right controlling structure, one that can prevent misaligned AGI, would already count as a pivotal act.

Human Upload

An example of such a pivotal act is uploading a human. Imagine you knew how to upload yourself into a computer, such that you could run 1,000,000 times faster, make copies of yourself, and have perfect read-and-write access to your own brain. Then you could probably gain sufficient control over the world directly, such that you could mitigate all potential existential risks. Alternatively, you could probably just solve alignment.

In any case, uploading yourself would be a pivotal act, even though that would not directly put the world into a state where no misaligned AGI can arise.

That is because uploading yourself is enough to ensure, with very high probability, that a state of the world where no misaligned AGI can arise will soon be reached. But that state will still feature some entity with a lot of control over the world: either you, in the case where you put yourself into a position of power, or an aligned AGI, in the case where you choose to solve alignment.

Omniscient ML Researchers: A Pivotal Act without a Monolithic Control Structure

There are also pivotal acts that don’t result in the creation of a monolithic entity that is in control. Control may be distributed.

Imagine you could write an extremely memetically fit article that is easy to understand and makes everybody who reads it understand AI alignment so well that it becomes practically impossible for them to build a misaligned AGI by accident. That would count as a pivotal act. As in the “Human Upload” pivotal act, you don’t need a lot of control over the world to pull this off. Once you have the article, you just need an internet connection.

Not only do you not need a lot of control over the world, but there is also no central controlling entity in this scenario. The controlling structure is distributed across the brains of all the people who read the article. Together, these brains now constrain the world such that no misaligned AGI can arise. Or you could think of it as us having constrained the brains such that they will not generate a misaligned AGI. That effectively means no misaligned AGI can arise, assuming the only way it could arise is by being generated by one of these brains.


  1. ↩︎

    This could be an organization, a group of people, a single individual, etc.

  2. ↩︎

    Of course, mechanistic interpretability might be an important piece of putting the world into a state where no misaligned AGI can arise.