“Pivotal Acts” means something specific

The term Pivotal Act was written up on Arbital in 2015. I only started hearing it discussed in 2020, and then it rapidly gained traction when the MIRI 2021 Conversations were released.

I think people mostly learned the word via context clues, and never actually read the article.

I have some complaints about how Eliezer defined the term, but if you're arguing with MIRI-cluster people and haven't read the article in full, you may have a confusing time.

The Arbital page for Pivotal Act begins:

The term ‘pivotal act’ in the context of AI alignment theory is a guarded term to refer to actions that will make a large positive difference a billion years later.

And then, almost immediately afterwards, the article reiterates "this is a guarded term" and explains why. In other words: this is a jargon term that people will be very tempted to stretch, but whose definition it's really important not to stretch.

The article notes:

Reason for guardedness

Guarded definitions are deployed where there is reason to suspect that a concept will otherwise be over-extended. The case for having a guarded definition of ‘pivotal act’ (and another for ‘existential catastrophe’) is that, after it’s been shown that event X is maybe not as important as originally thought, one side of that debate may be strongly tempted to go on arguing that, wait, really it could be “relevant” (by some strained line of possibility).

It includes a bunch of examples (you really should go read the full article), and then notes:

Discussion: Many strained arguments for X being a pivotal act have a step where X is an input into a large pool of goodness that also has many other inputs. A ZF provability oracle would advance mathematics, and mathematics can be useful for alignment research, but there’s nothing obviously game-changing about a ZF oracle that’s specialized for advancing alignment work, and it’s unlikely that the effect on win probabilities would be large relative to the many other inputs into total mathematical progress.

Similarly, handling trucker disemployment would only be one factor among many in world economic growth.

By contrast, a genie that uploaded human researchers putatively would not be producing merely one upload among many; it would be producing the only uploads where the default was otherwise no uploads. In turn, these uploads could do decades or centuries of unrushed serial research on the AI alignment problem, where the alternative was rushed research over much shorter timespans; and this can plausibly make the difference by itself between an AI that achieves ~100% of value versus an AI that achieves ~0% of value. At the end of the extrapolation where we ask what difference everything is supposed to make, we find a series of direct impacts producing events qualitatively different from the default, ending in a huge percentage difference in how much of all possible value gets achieved.

By having narrow and guarded definitions of ‘pivotal acts’ and ‘existential catastrophes’, we can avoid bait-and-switch arguments for the importance of research proposals, where the ‘bait’ is raising the apparent importance of ‘AI safety’ by discussing things with large direct impacts on astronomical stakes (like a paperclip maximizer or Friendly sovereign) and the ‘switch’ is to working on problems of dubious astronomical impact that are inputs into large pools with many other inputs.

I see people stretching "pivotal act" to mean "things that delay AGI for a few years or decades", which is not what the term means. A pivotal act, on the guarded definition, has to make a large positive difference to the long-run outcome by itself, not merely feed into a large pool of inputs.

Full article here.