More on disambiguating “discontinuity”

There have already been numerous posts and discussions related to disambiguating the term “discontinuity”. Here is my attempt.

For the purposes of the following discussion I’m going to distinguish between (a) continuous vs. discontinuous progress in AI research, where discontinuity refers specifically to a sharp jump or change in the AI research progress curve relative to the previous curve; (b) slow vs. fast rate of progress, referring to the steepness (slope) of the progress curve, regardless of whether or not it’s discontinuous; and (c) long vs. short clock time, i.e., whether progress takes a long or short time in absolute terms, not relative to previous trend lines. What exactly counts as discontinuous / fast / short will depend on what purpose we are using these distinctions for, as below.
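
To make these three distinctions a bit more concrete, here is a minimal, purely illustrative sketch in Python (my own toy formalization, not something from the posts referenced above). It treats progress as a capability time series and computes (a) the size of any jump relative to an extrapolation of the recent trend, (b) the current slope, and (c) the absolute time elapsed between two capability thresholds; the toy capability curve, the thresholds, and the window size are all made up for illustration.

```python
import numpy as np

# Illustrative only: toy numbers, not a claim about how AI progress actually behaves.

def discontinuity_size(capability, window=5):
    """(a) Discontinuity: how far the latest point jumps above a linear
    extrapolation of the recent trend."""
    t = np.arange(len(capability) - 1)
    slope, intercept = np.polyfit(t[-window:], capability[:-1][-window:], 1)
    predicted = slope * (len(capability) - 1) + intercept
    return capability[-1] - predicted

def rate_of_progress(capability):
    """(b) Rate: the current steepness (slope) of the curve, regardless of
    whether it is continuous with the earlier trend."""
    return capability[-1] - capability[-2]

def clock_time(times, capability, low, high):
    """(c) Clock time: absolute time between first crossing a 'warning'
    capability level and first crossing a 'dangerous' one."""
    t_low = times[np.argmax(capability >= low)]
    t_high = times[np.argmax(capability >= high)]
    return t_high - t_low

# Toy example: smooth exponential growth vs. the same curve with a sudden jump.
times = np.arange(30)
smooth = 1.1 ** times
jumpy = smooth.copy()
jumpy[-1] += 20.0  # hypothetical out-of-trend capability jump

print(discontinuity_size(smooth))        # ~0.4: roughly on-trend, i.e. continuous
print(discontinuity_size(jumpy))         # ~20.4: a large jump relative to the prior trend
print(rate_of_progress(smooth))          # ~1.4: the current slope, fast or slow in its own right
print(clock_time(times, smooth, 5, 15))  # 12 time steps between the two thresholds
```

The only point of the sketch is that the three quantities can vary independently: a curve can be perfectly continuous (small (a)) while still being steep (large (b)) or leaving little absolute time between milestones (small (c)).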

There seem to be three primary AI-risk-related issues (four, if the sub-questions of the first are counted separately) that depend on whether or not there will be a discontinuity / fast takeoff:

  1. Will we see AGI (or CAIS or TAI or whatever you want to call it) coming far enough ahead of time that we will be able to respond appropriately at that point? This question in turn breaks down into two sub-questions: (a) Will we see AGI coming before it arrives? (I.e., will there be a “fire alarm for AGI,” as Eliezer calls it?) (b) If we do see it coming, will we have enough time to react before it’s too late?

  2. Will the feedback loops during the development of AGI be long enough that we will be able to correct course as we go?

  3. Is it likely that one company / government / other entity could gain enough of a first-mover advantage that it will not be controllable or stoppable by other entities?

Let’s deal with each of these individually:

  • Question 1/a: Will we see AGI coming before it arrives? This seems to depend on all three distinctions:

    • If there’s discontinuous progress relative to the previous curve, then presumably that jump will act as a fire alarm (although it might be too late to do anything about it by then).

    • If there’s a continuous but sufficiently fast rate of progress and/or a sufficiently short clock time in the lead-up to AGI, then that might act as a fire alarm, in the sense that people will see the world start going crazy due to sufficiently advanced AI, and that will be a wake-up call.

    • If progress is continuous AND sufficiently slow AND takes a sufficiently long time, then it seems quite plausible that people will get used to all the changes as they come, and they might not notice the progress that AI is making until it is too late.

  • Question 1/b: If we do see it coming, will we have enough time to react before it’s too late?

    • If the absolute (clock) time between the “fire alarm” and the first potentially dangerous AGI is too short, then we will likely not be able to react in time, whereas if it’s long enough then we probably will.

    • However, if progress is sufficiently continuous and/or slow that there are other very advanced AIs available to help with our research, then we could perhaps use those almost-AGIs to do a ton of research in a short amount of absolute time.

  • Question 2: Will the feedback loops during the development of AGI be long enough that we will be able to correct course as we go?

    • If absolute clock time is very short, then humans will probably not be able to react fast enough.

    • However, once again, if progress is sufficiently continuous and/or slow, then there will likely be powerful almost-AGIs that could plausibly allow us to correct course very quickly.

  • Question 3: Is it likely that one company / government / other entity could gain enough of a first-mover advantage that it will not be controllable or stoppable by other entities?

    • If AI progress is discontinuous in the jump to AGI or immediately preceding it, and/or if it is discontinuous after AGI, in the sense that the first AGI might recursively self-improve and go FOOM, then presumably yes. (However, if the discontinuity comes a bit earlier in the lead-up to AGI, then it’s mostly irrelevant to this question.)

    • If AI progress is continuous both to and from AGI but short in an absolute (clock) sense, then the answer is maybe, since companies or governments can presumably keep things secret for at least a few months and thereby gain a sufficient head start.

    • If AI progress is continuous and long enough, then probably not, because there will be other AIs nearly as powerful that can help stop it.

Thanks especially to Sammy Martin and Issa Rice for discussions of this post and for helping me to clarify my thinking on this.