Ah, by the “software feedback loop” I mean: “At the point at which AI has automated AI R&D, does a doubling of cognitive effort result in more than a doubling of output? If yes, there’s a software feedback loop—you get (for a time, at least) accelerating rates of algorithmic efficiency progress, rather than just a one-off gain from automation.”
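To make the condition concrete, here is a toy simulation (my own illustrative sketch, not a model from any particular paper): algorithmic efficiency E grows at a rate set by cognitive effort, and effort itself scales with E once AI R&D is automated, so dE/dt ∝ E^r. The exponent r captures returns to effort: r > 1 means a doubling of effort more than doubles output, and progress accelerates; r < 1 means growth decays toward a one-off gain.

```python
# Toy model of the software feedback loop (illustrative sketch; the
# parameter values and the dE/dt = E**r form are my assumptions, chosen
# only to show the qualitative difference between r > 1 and r < 1).

def simulate(r, steps=5, dt=0.001, substeps=500):
    """Euler-integrate dE/dt = E**r and record E at regular checkpoints."""
    E = 1.0
    checkpoints = []
    for _ in range(steps):
        for _ in range(substeps):
            E += dt * E**r
        checkpoints.append(E)
    return checkpoints

def growth_ratios(traj):
    """Ratio of efficiency between successive checkpoints."""
    return [b / a for a, b in zip(traj, traj[1:])]

accelerating = simulate(r=1.2)  # returns to effort > 1: feedback loop
decelerating = simulate(r=0.8)  # returns to effort < 1: no loop

# With r = 1.2 the growth ratios climb over time (superexponential,
# accelerating progress); with r = 0.8 they shrink (subexponential).
print(growth_ratios(accelerating))
print(growth_ratios(decelerating))
```

The point of the sketch is just that the sign of (r − 1) flips the qualitative regime: above 1, each gain in efficiency buys a disproportionately larger research force, so the loop compounds; below 1, automation still helps, but the gain washes out into ordinary growth.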
I see now why you could understand “RSI” to mean “AI improves itself at all over time”. But even so, the claim would still hold—even if (implausibly) AI gets no smarter than human-level, you’d still get accelerated tech development, because the quantity of AI research effort would increase at a growth rate much faster than the quantity of human research effort.