Here’s an argument why (at least somewhat) sudden takeoff is (at least somewhat) plausible.
Supposing:
(1) At some point P, AI will be as good as humans at AI programming (grandprogramming, great-grandprogramming, …) by some reasonable standard, and less than a month later, a superintelligence will exist.
(2) Getting to point P requires AI R&D effort roughly comparable to total past AI R&D effort.
(3) In an economy growing quickly because of AI, AI R&D effort increases by at least the same factor as general economic growth.
Then:
(4) Based on (3), if there’s a four-year period during which economic growth is ten times normal because of AI (roughly corresponding to a four-year doubling period), then AI R&D effort during that period is also at least ten times normal.
(5) Because 4*10=40 (four years at ten times the normal rate amounts to forty years of effort at the normal rate; see the sketch below), and because of additional R&D effort between now and the start of the four-year period, total AI R&D effort between now and the end of such a period would be at least roughly comparable to total AI R&D effort until now.
(6) Therefore, based on (2) and (1), at most a month after the end of the first four-year doubling period, a superintelligence will exist.
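To make the arithmetic in (4) and (5) explicit, here is a minimal sketch in Python. The forty-year normal doubling time is my assumption for illustration; the argument itself only says the fast period “roughly” corresponds to ten times normal growth.

```python
import math

# Illustrative numbers (assumptions for this sketch, not part of the argument):
NORMAL_DOUBLING_YEARS = 40   # assumed long-run economic doubling time
FAST_DOUBLING_YEARS = 4      # the hypothesized fast doubling period in (4)

normal_rate = math.log(2) / NORMAL_DOUBLING_YEARS
fast_rate = math.log(2) / FAST_DOUBLING_YEARS

# (4): how many times normal is growth during the fast period?
ratio = fast_rate / normal_rate
print(f"growth is {ratio:.0f}x normal")                  # 10x

# (5): four years of R&D at ten times the normal rate is worth
# 4 * 10 = 40 years of effort at the normal rate.
equivalent_years = FAST_DOUBLING_YEARS * ratio
print(f"{equivalent_years:.0f} normal-rate R&D years")   # 40
```

Forty normal-rate years is on the order of the several decades of AI research since the 1950s, which is what makes the “at least roughly comparable” in (5) plausible.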
I think (1) is probable and (2) is plausible (but plausibly false). I’m more confused about (3), but it doesn’t seem wrong.
There’s a lot of room to doubt as well as sharpen this argument, but I hope the intuition is clear. Something like this comes out if I introspect on why it feels easier to coherently imagine a sudden takeoff than a gradual one.
If there’s a hard takeoff claim I’m 90% sure of, though, the claim is more like (1) than (6); more like “superintelligence comes soon after an AI is a human-level programmer/researcher” than like “superintelligence comes soon after AI (or some other technology) causes drastic change”. So as has been said, the difference of opinion isn’t as big as it might at first seem.
If you doubled total AGI investment, I think it’s quite unlikely that you’d have a superintelligence. So I don’t believe (2), though the argument can still be partly salvaged.
(2) was only meant as a claim about the AGI effort needed to reach seed AI (perhaps meaning “something good enough to count as an upper bound on what it would take to originate a stage of the intelligence explosion that we agree will be very fast because of recursive self-improvement and copying”). Then, between seed AI and superintelligence, a lot of additional R&D (mostly by AI) could happen in little calendar time without contradicting (2). We can analyze the plausibility of (2) separately from the question of what its consequences would be. (My guess is you’re already taking all this into account and still think (2) is unlikely.)
Maybe I should have phrased the intuition as: “If you predict sufficiently many years of sufficiently fast AI acceleration, the total amount of pressure on the AGI problem starts being greater than what I might naively expect to be needed to solve it completely.”
(For an extreme example, consider a prediction that the world will have a trillion ems living in it, but no strongly superhuman AI until years later. I don’t think there’s any plausible indirect historical evidence or reasoning based on functional forms of growth that could convince me of that prediction, simply because it’s hard to see how you could have millions of von Neumanns in a box without them solving the relevant problems in less than a year.)
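One way to make the restated intuition concrete (a sketch with assumed acceleration rates, not numbers from the discussion): if effort on the problem grows exponentially at rate g, then cumulative effort after y years, measured in today’s effort-years, is (e^(g*y) - 1)/g.

```python
import math

def cumulative_effort_years(g: float, years: float) -> float:
    """Integral of e^(g*t) from t=0 to t=years: total effort,
    in units of today's annual effort, if effort grows at rate g."""
    return (math.exp(g * years) - 1) / g

# Assumed acceleration rates and horizons, purely for illustration.
for g in (0.2, 0.5):
    for years in (5, 10, 15):
        print(f"g={g:.0%}, {years:2d}y -> "
              f"{cumulative_effort_years(g, years):7.1f} effort-years")
```

At 50% annual growth, fifteen years of acceleration accumulates a few thousand of today’s effort-years; that is the sense in which “sufficiently many years of sufficiently fast acceleration” eventually swamps any naive estimate of how much effort the problem needs.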