This is highly useful, thank you! It will be my reference article for this pretty critical point for world modeling the near future.
If you want to tinker with estimates at all:
You shouldn’t have all auto factories converting; there will still be demand for cars, and all the more demand if production drops.
In general it would be helpful to have a range of estimates.
Kilogram estimates of car-to-robot conversion are fine, but it seems like there should be a large adjustment for a robot having more distinct motors and joints than a whole car.
I think the range is as follows:
Estimates based on looking at how fast humans can do things (e.g. WW2 industrial scaleup) and then modifying somewhat upwards (e.g. 5x) in an attempt to account for superintelligence… should be the lower bound, at least for the scenario where superintelligence is involved at every level of the process.
The upper bound is the Yudkowsky bathtub-nanotech scenario, or something similarly fast that we haven’t thought of yet, where the comparison point for the estimate is the laws of physics and/or biology rather than human precedent.
Oh yes—to the extent we have significantly greater-than-human intelligence involved, adapting existing capacities becomes less of an issue. It only really remains an issue if there’s a fairly or very slow takeoff.
This is increasingly what I expect; I think the current path toward AGI is fortunate in one more way: LLMs probably have naturally diminishing returns because they are mostly imitating human intelligence. Scaffolding and chain of thought will continue to provide routes forward even if that turns out to be true. The evidence loosely suggests it is; see Thane Ruthenis’s recent argument and my response.
The other reason to find slow takeoff plausible is if AGI doesn’t proliferate, and its controllers (probably the US and Chinese governments, hopefully not too many more) are deliberately limiting the rate of change, as they probably would be wise to do—if they can simultaneously prevent others from developing new AGI and putting the pedal to the metal.
Thanks, and fair points!
Note that if you convert only half the car factories, you can still produce 0.5 billion robots per year, so it doesn’t change the basic picture that much. (It’s all order-of-magnitude stuff.)
I talk a little about some other estimates—a standard human trajectory would be 20-30 years on the long end, while an ASI-enabled one could be even faster than 5 years. I agree it would be nice to flesh these out more.
Also agree it would be good to pin down the conversion efficiency better. One factor on the other side is that robots involve lighter parts, which apparently makes production easier. Ideally we’d also check for other input factors that could bottleneck production, e.g. lithium for batteries at over 100 million units per year.
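The factory-conversion arithmetic above can be sketched as a quick back-of-envelope calculation. All inputs here (global car output, car and robot masses, the discount for robots needing more motors and joints per kilogram) are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope robot production estimate. Every number below is an
# assumption for illustration; tinker with them to get your own range.

cars_per_year = 100e6     # rough global auto production, ~1e8 vehicles/yr
car_mass_kg = 1500        # typical car (assumed)
robot_mass_kg = 60        # typical humanoid robot (assumed)
mass_efficiency = 0.4     # discount: robots have more motors/joints per kg (assumed)

# How many robots one car's worth of factory output converts into, by mass
robots_per_car = (car_mass_kg / robot_mass_kg) * mass_efficiency  # = 10.0

for fraction_converted in (1.0, 0.5, 0.25):
    robots_per_year = cars_per_year * fraction_converted * robots_per_car
    print(f"{fraction_converted:.0%} of factories -> {robots_per_year:.1e} robots/yr")
```

With these particular assumptions, converting half the factories gives 5e8 robots per year, i.e. the 0.5 billion figure above, and the answer stays in the same order of magnitude even if the individual inputs shift by a factor of two.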