So, in this world, you have a post-FOOM superintelligent AI.
What would it take for such an AI to bootstrap nanotech? If, as I suspect, the answer is one lab and a few days, then the rest of this analysis is mostly irrelevant.
The doubling time of nanotech is so fast that the AI only wants macroscopic robots to the extent that they speed up the nanotech, or fulfill the AI’s terminal values.
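To see why a fast doubling time swamps everything else, here's a back-of-envelope sketch. The seed mass, target mass, and one-hour doubling time are all hypothetical numbers chosen for illustration, not claims from the post:

```python
import math

# Hypothetical parameters, purely for illustration.
seed_mass_kg = 1e-15       # a ~1-picogram seed replicator
target_mass_kg = 1.0       # a kilogram of assemblers
doubling_time_hours = 1.0  # assumed doubling time

# Number of doublings needed to grow from seed to target mass.
doublings = math.log2(target_mass_kg / seed_mass_kg)
print(f"doublings needed: {doublings:.1f}")                  # ~49.8
print(f"time: {doublings * doubling_time_hours:.0f} hours")  # ~50 hours
```

Under these assumptions, going from a single microscopic replicator to a kilogram of assemblers takes about two days, which is why macroscopic robot factories barely register by comparison.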
Thus the AI’s strategy, if it somehow can’t make nanotech quickly, will depend on what the bottleneck is. Time? Compute? Lab equipment?
Compute could be a bottleneck, not just for the AI itself but also for simulations of physical systems that are good enough to avoid too many real experiments, and thus to dramatically speed up the design of things that will actually do what they need to do.
Without scaling industry first, you can't get much more compute. And if you can't immediately design far-future tech without much more compute, then in the meantime you'd have to get by with hired human labor and clunky robots, building more compute and thereby speeding up the next phase of the process.
Imagine you have clunky nanotech. Sure, it has its downsides: it needs to run at liquid-nitrogen temperatures and/or in high vacuum, it needs high-purity lab supplies, it's energy-inefficient, and it's full of rare elements. But, being nanotech, it can make a wide range of molecularly precise designs in a day or less, and, having self-replicated to fill the beaker, it can try ~10^9 different experiments at once. With experimental power like that, you don't really need much compute.
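The ~10^9 figure is easy to sanity-check with a volume estimate. The cell size and beaker volume below are hypothetical assumptions, not numbers from the post:

```python
# Hypothetical geometry: independent micron-scale experiment cells
# packed into a 1-litre beaker.
beaker_volume_m3 = 1e-3            # 1 litre = 1e-3 m^3
cell_side_m = 10e-6                # each experiment gets a (10 µm)^3 cell
cell_volume_m3 = cell_side_m ** 3  # 1e-15 m^3 per cell

n_experiments = beaker_volume_m3 / cell_volume_m3
print(f"parallel experiments: {n_experiments:.0e}")  # ~1e+12
```

Even with generously large 10-micron cells, a single beaker holds ~10^12 experiment sites, so ~10^9 simultaneous experiments is, if anything, conservative under these assumptions.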
So I suspect any compute bottleneck would have to bite before even clunky nanotech exists. And that would require even clunky nanotech to be really hard to design.