In general it would be helpful to have a range of estimates.
I think the range is as follows:
Estimates based on looking at how fast humans can do things (e.g. the WW2 industrial scaleup), then adjusted somewhat upward (e.g. 5x) in an attempt to account for superintelligence, should be the lower bound, at least for the scenario where superintelligence is involved at every level of the process.
The upper bound is the Yudkowsky bathtub nanotech scenario, or something similarly fast that we haven’t thought of yet, where the comparison point for the estimate is the laws of physics and/or biology rather than how fast humans have done things.
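As a purely illustrative back-of-the-envelope version of the slow end of that range (the ~4-year figure for a WW2-scale industrial mobilization is my own placeholder assumption, not a number from the discussion): take a human-speed baseline and apply the superintelligence speedup, so the time required shrinks by that factor:

$$ t_{\text{slow end}} \approx \frac{t_{\text{human baseline}}}{\text{speedup}} \approx \frac{4\ \text{years}}{5} \approx 10\ \text{months}. $$

Anything faster than that, up to whatever physics and biology permit (the bathtub-nanotech end), falls inside the range.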
Oh yes—to the extent we have significantly greater-than-human intelligence involved, adapting existing capacities becomes less of an issue. It only really remains an issue if there’s a fairly or very slow takeoff.
This is increasingly what I expect. I think the current path toward AGI is fortunate in one more way: LLMs probably have naturally diminishing returns, because they are mostly imitating human intelligence. Even if that turns out to be true, scaffolding and chain of thought will continue to provide routes forward. The evidence loosely suggests it is true; see Thane Ruthenis’s recent argument and my response.
The other reason to find slow takeoff plausible is that AGI might not proliferate, and its controllers (probably the US and Chinese governments, hopefully not too many more) might deliberately limit the rate of change, as they probably would be wise to do, provided they can simultaneously prevent others from developing new AGI and putting the pedal to the metal.