CES is almost as much of an oversimplification as Cobb-Douglas, and any value of σ below 1 means either labor or capital can bottleneck output at a (fairly small) finite multiple of its current level even if the other input goes to infinity. E.g. if σ=0.8 and labor and capital have equal shares, then output will only 16x if labor goes to infinity while capital is unchanged.
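The 16x figure follows directly from the CES form. A small sketch, assuming the standard two-factor CES production function Y = (a·K^ρ + (1−a)·L^ρ)^(1/ρ) with ρ = (σ−1)/σ and equal shares a = 1/2 (the function name is mine, for illustration):

```python
def ces_limit_multiplier(sigma: float, a: float = 0.5) -> float:
    """Factor by which output multiplies, from a K = L baseline,
    when L -> infinity and K is held fixed, assuming sigma < 1."""
    rho = (sigma - 1) / sigma  # rho < 0 when sigma < 1
    # As L -> infinity with rho < 0, the (1-a)*L**rho term vanishes, so
    # Y -> (a * K**rho)**(1/rho) = a**(1/rho) * K, a finite ceiling.
    # Baseline output at K = L = 1 is (a + (1 - a))**(1/rho) = 1.
    return a ** (1 / rho)

print(ces_limit_multiplier(0.8))  # approximately 16: output only ~16x's
```

With σ=0.8, ρ = −0.25, and the ceiling is (1/2)^(−4) = 16 times baseline output, matching the figure above.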
For physical capital in the form of computers, it seems reasonable to me that AIs much better at coding than current AIs will get basically unlimited value from existing computers, just with diminishing marginal returns. For other physical capital, we probably need an increase in quality, though maybe not in quantity. E.g. a new type of AFM (atomic force microscope) capable of serving as a first-stage nanofactory could be designed; it would be 10,000x more valuable for nanoscale manufacturing research than current models, and therefore represent 10,000x the capital, but it would be the same size and so would not visibly show up as an industrial explosion.