Compute could be a bottleneck, not just for AI but also for simulations of physical systems accurate enough to replace most real experiments, which would dramatically speed up progress in designing things that actually do what they need to do.
Imagine you have clunky nanotech. Sure, it has its downsides. It needs to run at liquid nitrogen temperatures and/or in high vacuum. It needs high-purity lab supplies. It's energy inefficient. It's full of rare elements. But because it is nanotech, it can make a wide range of molecularly precise designs in a day or less, and, having self-replicated to fill a beaker, it can try ~10^9 different experiments at once. With experimental power like that, you don't really need compute.
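The ~10^9 figure is plausible from simple geometry. A rough sketch, with illustrative assumptions not taken from anywhere in particular: a 100 mL beaker partitioned into independent experiment cells ~50 µm on a side.

```python
# Back-of-envelope: how many independent experiment cells fit in a beaker?
# Assumed numbers (illustrative): 100 mL beaker, 50 µm cubic cells.

beaker_volume_m3 = 100e-6          # 100 mL expressed in cubic metres
cell_side_m = 50e-6                # 50 µm per side of each experiment cell
cell_volume_m3 = cell_side_m ** 3  # 1.25e-13 m^3 per cell

n_experiments = beaker_volume_m3 / cell_volume_m3
print(f"{n_experiments:.1e}")      # ~8e8, i.e. on the order of 10^9
```

Each 50 µm cell still holds room for billions of atoms of apparatus, so "one experiment per cell" is not an aggressive assumption.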
So I suspect any compute bottleneck would have to bind before even clunky nanotech exists. And that would require even clunky nanotech to be really hard to design.