Compute could be a bottleneck, not just for AI but also for simulations of physical-world systems good enough to replace most real experiments and thus dramatically speed up progress in designing things that will actually do what they need to do.
Without scaling up industry first, you can't get much more compute. And if designing far-future tech requires much more compute than you have, then in the meantime you'd have to get by with hired human labor and clunky robots, building more compute and thereby speeding up the next phase of the process.
Imagine you have clunky nanotech. Sure, it has its downsides. It needs to run at liquid nitrogen temperatures and/or in high vacuum. It needs high-purity lab supplies. It's energy inefficient. It's full of rare elements. But, being nanotech, it can make a wide range of molecularly precise designs in a day or less, and, having self-replicated to fill the beaker, it can try ~10^9 different experiments at once. With experimental power like that, you don't really need compute.
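For a sense of where that ~10^9 comes from, here's a minimal back-of-envelope sketch. The beaker volume and the per-experiment cell size are my own illustrative assumptions, not anything the argument depends on precisely:

```python
# Back-of-envelope: how many independent micro-experiments fit in one beaker,
# assuming each experiment occupies its own small cubic cell.

beaker_volume_L = 1.0    # assumed: a 1-liter beaker
cell_edge_um = 100.0     # assumed: each experiment fits in a (100 micron)^3 cell

cell_volume_L = (cell_edge_um * 1e-6) ** 3 * 1e3   # cube the edge (m^3), convert to liters
parallel_experiments = beaker_volume_L / cell_volume_L

print(f"~{parallel_experiments:.0e} parallel experiments")  # ~1e9
```

With those (fairly generous, millimeter-scale-free) cell sizes you already land at a billion simultaneous experiments; smaller cells only push the number higher.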
So I suspect any compute bottleneck would have to bite before even clunky nanotech exists. And that would require even clunky nanotech to be really hard to design.