Maybe we can shift the reference class to make incremental progress less ubiquitous?
How about things like the height of the tallest man-made structure in the world? The highest elevation achieved by a human? The maximum human speed (relative to the nearest point on Earth)? The maximum speed on land? The largest known prime number?
Net annual transatlantic shipping tonnage? Watts of electricity generated? Lumens of artificial light generated? Highest temperature achieved on Earth’s surface? Lowest temperature?
The above are obviously cherry-picked, but the point is what they have in common: at a certain point a fundamentally different approach kicked in. This is what superintelligence predictions claim will happen.
The objection might be raised that the AI approach is already under way, so we shouldn't expect any jumps. I can think of two replies: one is that narrow AI is to AGI as the domestication of the horse is to the internal combustion engine. The other is that current AI is to human intelligence as the Killingworth locomotive, which Wikipedia cites as going 4 mph, was to the horse.
It would probably help to be clearer about what we mean by incremental. The height of the tallest man-made structure is jumpy, but the jumps usually seem to be around 10% of the existing height, except the last one, which is about 60% taller than its predecessor of 7 years earlier. I think of these as pretty much incremental, but I take it you do not?
At least in the AI case, when we talk about discontinuous progress, I think people are imagining something getting more than 100 times better on some relevant metric over a short period, though I could be wrong about this. For instance, going from not valuable at all to at least more useful than a human, and perhaps more useful than a large number of humans.
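To make the arithmetic concrete, here is a minimal sketch of the distinction being drawn: jumps of ~10% or even ~60% versus the ">100x over a short period" reading of discontinuity. The height figures below are illustrative placeholders, not the actual Wikipedia record data, and the function names are just for this sketch.

```python
# Illustrative sketch of "jumpy but incremental" vs. "discontinuous" progress.
# The record values are hypothetical placeholders, not real height data.

def relative_jumps(records):
    """Fractional improvement of each new record over its predecessor."""
    return [(new - old) / old for old, new in zip(records, records[1:])]

def is_discontinuity(old, new, threshold=100.0):
    """Flag a jump as 'discontinuous' if the new record exceeds the old one
    by more than `threshold` times (the >100x reading discussed above)."""
    return new / old > threshold

# Hypothetical progression of record heights (metres):
heights = [300, 320, 380, 420, 450, 510, 828]

for old, new, jump in zip(heights, heights[1:], relative_jumps(heights)):
    print(f"{old} -> {new}: +{jump:.0%}  discontinuous? {is_discontinuity(old, new)}")
```

Under these illustrative numbers, most jumps come out around 7–13% and the last around 60%; none of them comes anywhere near the 100x criterion, which is roughly the gap between what I am calling incremental and what I think people mean by a discontinuity.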
I had a longer progression in mind (http://en.wikipedia.org/wiki/History_of_the_tallest_buildings_in_the_world#1300.E2.80.93present), the idea being that steel and industry were a similar discontinuity.
Though it looks like these examples are really just pointing to the idea of agriculture and industry as the two big discontinuities of known history.