This is an issue I referenced in the intro, though I did kind of skip past it. What I would say is that continuous/discontinuous is a high-level and vague description of the territory—is what is happening a continuation of already existing trends? Since that's how we define it, it makes much more sense as a way to think about predictions than as a way to understand the past.
One way of knowing if progress was discontinuous is to actually look at the inner workings of the AGI during the takeoff. If this
some systems “fizzle out” when they try to design a better AI, generating a few improvements before running out of steam, while others are able to autonomously generate more and more improvements
is what, in fact, happens as you try to build better and better AI, then we have a discontinuity, and hence discontinuous progress.
In your scenario, the fact that we went from a world like now to a godlike superintelligence swallowing up the whole Earth with tiny self-replicating bots feeding on sunlight or something means the progress was discontinuous, because it implies the quote I gave above was probably a correct description of reality.
If there was some acceleration of progress that then blew up—like, near-human systems that could automate most tasks suddenly started showing up over a year or two and getting scarily smart over a couple of weeks before the end, and then all of a sudden a godlike superintelligence annihilates the Earth and starts flinging von Neumann probes to other stars, then… maybe progress was continuous? It would depend on more detailed facts (not facts about whether the AGI halted to do garbage collection, but facts about the dynamics of its capability gain). Continuous/discontinuous and fast/slow are two (not entirely independent) axes you could use to describe various AI takeoff trajectories—a qualitative description.
There is an additional wrinkle in that what you call continuous might depend on your own reading of historical trends—are we on hyperbolic growth or not? Here’s Scott Alexander:
In other words, the singularity got cancelled because we no longer have a surefire way to convert money into researchers. The old way was more money = more food = more population = more researchers. The new way is just more money = send more people to college, and screw all that.
But AI potentially offers a way to convert money into researchers. Money = build more AIs = more research.
If this were true, then once AI comes around – even if it isn’t much smarter than humans – then as long as the computational power you can invest into researching a given field increases with the amount of money you have, hyperbolic growth is back on. Faster growth rates means more money means more AIs researching new technology means even faster growth rates, and so on to infinity.
Presumably you would eventually hit some other bottleneck, but things could get very strange before that happens.
If he is right about that, then ‘return to hyperbolic growth’ looks like part of an already existing trend, otherwise not so much.
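Scott's feedback loop (money buys AIs, AIs do research, research raises growth, growth makes more money) is, mathematically, the claim that the growth rate of output scales with output itself. Here is a toy simulation of that idea—my own sketch, not anything from Scott's post, with all constants made up for illustration. Plain exponential growth is dE/dt = kE (researcher count fixed); the AI scenario is roughly dE/dt = kE², which blows up in finite time rather than merely compounding:

```python
# Toy model (illustrative only): output E grows at a rate proportional to
# the number of researchers. If researcher count is fixed, dE/dt = k*E
# (exponential). If money can buy AI researchers, researcher count scales
# with E itself, giving dE/dt = k*E**2 (hyperbolic, finite-time blowup).

def time_to_reach(exponent, k=0.1, e0=1.0, dt=0.01, cap=1e9, t_max=500.0):
    """Euler-integrate dE/dt = k * E**exponent from E(0) = e0.
    Return the first time E crosses `cap`, or None if it never does
    before t_max. All parameter values are arbitrary choices."""
    e, t = e0, 0.0
    while t < t_max:
        e += k * e**exponent * dt
        t += dt
        if e >= cap:
            return t
    return None

exp_time = time_to_reach(1.0)  # fixed researcher pool: exponential growth
hyp_time = time_to_reach(2.0)  # researchers scale with output: hyperbolic

print(f"exponential reaches cap at t ~ {exp_time:.1f}")
print(f"hyperbolic reaches cap at t ~ {hyp_time:.1f}")
```

With these (arbitrary) constants the hyperbolic run hits the cap roughly twenty times sooner than the exponential one—and unlike the exponential, it would hit *any* cap in about the same finite time, which is the "and so on to infinity" part of the quote. In reality you'd hit Scott's "some other bottleneck" first, but the qualitative difference between the two curves is the point.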