This post seems systematically too slow to me, and to underrate the capabilities of superintelligence. One particular point of disagreement:
It seems reasonable to use days or weeks as an upper bound on how fast robot doublings could become, based on biological analogies. This is very fast indeed.
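To make the quoted claim concrete, here is a minimal sketch of how a fixed doubling time compounds over a year. The doubling times used are illustrative assumptions, not figures from the post:

```python
# Illustrative sketch: how a robot fleet compounds under a constant doubling time.

def growth_factor(elapsed_days: float, doubling_time_days: float) -> float:
    """Multiplicative growth after elapsed_days, given a constant doubling time."""
    return 2 ** (elapsed_days / doubling_time_days)

# Weekly doublings over one year: 2**(365/7), roughly 5e15x growth.
print(f"{growth_factor(365, 7):.2e}")
# Monthly doublings over one year: 2**(365/30), roughly 4.6e3x growth.
print(f"{growth_factor(365, 30):.2e}")
```

Even the slower, monthly-doubling scenario multiplies the fleet thousands of times over in a year, which is why days-or-weeks doublings read as "very fast indeed."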
When I read this, I thought this would say “lower bound”. Why would you expect evolution to find globally optimal doubling times? This reads to me a bit like saying that the speed of a cheetah or the size of a blue whale will be an upper bound on the speed/size of a robot. Why???
The case for a lower bound seems clear: biology did it, and a superintelligence could probably design a more functional robot than biology did.
It’s not clear it’s a lower bound because it’s unclear whether fruit flies have the physical and (especially) cognitive capabilities to reconstruct the whole economy. It’s not enough to double quickly. You need to be able to make the robots that make the robots… that make anything.
But I agree that we might do way better than evolution. We might design things that double faster than fruit flies and can reconstruct the whole economy. So I agree I was wrong to describe this as an upper bound.
Seems to me more like an estimate of the upper bound that could be biased in either direction. The upper bound might be faster because we outperform evolution. Or it might be slower if fruit flies lack the capabilities to reconstruct the whole economy.
Keen to hear about other areas where you think we’re being too conservative. It’s definitely possible to point to particular assumptions that seem too conservative. But there are often counter-considerations. To give one quick example, only 20% of output today is reinvested; 80% is consumed. If this keeps happening during robot doublings, they’ll happen 5x slower than in our analysis, which implicitly assumes 100% reinvestment.
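The 5x figure follows from a simple growth-accounting identity: if the fleet’s growth rate is (reinvested share of output) × (output per unit of capital), then cutting reinvestment from 100% to 20% multiplies the doubling time by 5. A minimal sketch, where the output/capital ratio is a made-up placeholder that cancels out of the comparison:

```python
import math

def doubling_time(reinvestment_fraction: float, output_per_capital: float = 1.0) -> float:
    """Doubling time under exponential growth whose rate is
    (share of output reinvested) x (output per unit of capital).
    output_per_capital = 1.0 is a placeholder; it cancels in the ratio below."""
    growth_rate = reinvestment_fraction * output_per_capital
    return math.log(2) / growth_rate

full = doubling_time(1.0)     # 100% reinvestment (the analysis's implicit assumption)
partial = doubling_time(0.2)  # 20% reinvested, 80% consumed
print(partial / full)         # doublings take 5x longer
```

Since the doubling time scales as 1/(reinvested share), any constant output/capital ratio gives the same 5x slowdown.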
I’m very confused by this response—if we’re talking about strong quality superintelligence, as opposed to collective and/or speed superintelligence, then the entire idea of needing an industrial explosion is wrong, since (by assumption) the superintelligent AI system is able to do things that seem entirely magical to us.
How strong a superintelligence are you assuming, and what path did it follow? If it’s already taken over mass production of chips to the extent that it can massively build out its own capabilities, we’re past the point of industrial explosion. And if not, where did these capabilities (evidently far stronger than even the collective abilities of humanity) emerge from?
I don’t think I follow this. My last comment was about the ultimate limits to (nano)robot doubling times, after lots of time to experiment and iterate, not about an AI designing this stuff a priori.
The post assumes abundant AI cognitive labour on the level of top humans but nothing stronger.
Yeah, I think Thomas was arguing in the opposite direction: that you “underrate the capabilities of superintelligence.” I was responding to why that wasn’t addressing the same scenario as your original post.