If you doubled total AGI investment, I think it’s quite unlikely that you’d have a superintelligence. So I don’t believe (2), though the argument can still be partly salvaged.
(2) was only meant as a claim about the AGI effort needed to reach seed AI (perhaps meaning "something good enough to serve as an upper bound on what it would take to initiate a stage of the intelligence explosion that we agree will be very fast, because of recursive self-improvement and copying"). Then, between seed AI and superintelligence, a great deal of additional R&D (mostly done by AI) could happen in very little calendar time without contradicting (2). We can assess the plausibility of (2) separately from the question of what its consequences would be. (My guess is that you're already taking all this into account and still think (2) is unlikely.)
Maybe I should have phrased the intuition as: "If you predict sufficiently many years of sufficiently fast AI acceleration, the total amount of pressure on the AGI problem starts to exceed what I would naively expect to be needed to solve it completely."
(For an extreme example, consider a prediction that the world will contain a trillion ems, but no strongly superhuman AI until years later. I don't think any plausible indirect historical evidence, or any reasoning based on functional forms of growth, could convince me of that prediction, simply because it's hard to see how you could have millions of von Neumanns in a box without them solving the relevant problems in less than a year.)