The following seems a bit unclear to me and might warrant an update, if I am not alone in this assessment:
Section 3 finds that even without a software feedback loop (i.e. “recursive self-improvement”), [...], then we should still expect very rapid technological development [...] once AI meaningfully substitutes for human researchers.
I might just be taking issue with the word "without", reading it very literally, but to me "AI meaningfully substituting for human researchers" implies at least a weak form of recursive self-improvement. That is, I would be quite surprised if the world allowed AI to become as smart as human researchers but no smarter afterwards.
I interpreted this as: "even without a software feedback loop, there will be very rapid technological development; this gives a lower bound on the actual pace of technological development, since there will almost certainly be some feedback loop."
Yes. And by the "software feedback loop" I mean: "At the point in time at which AI has automated AI R&D, does a doubling of cognitive effort result in more than a doubling of output? If yes, there's a software feedback loop: you get (for a time, at least) accelerating rates of algorithmic efficiency progress, rather than just a one-off gain from automation."
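A minimal numerical sketch of this condition may help. The law of motion `dA/dt = A**r` and the exponent values below are illustrative assumptions of mine, not from the discussion: `A` stands for algorithmic efficiency, research effort is taken as proportional to `A` (since AI R&D is automated), and `r > 1` encodes "doubling cognitive effort more than doubles output".

```python
def growth_rates(r, steps=200, dt=0.01, a0=1.0):
    """Euler-simulate dA/dt = A**r, where A is algorithmic efficiency
    and research effort is proportional to A (AI R&D is automated).
    Returns the proportional growth rate (dA/dt)/A at the start and
    at the end of the run."""
    a = a0
    first = last = None
    for _ in range(steps):
        rate = a ** (r - 1)              # (dA/dt)/A = A**(r - 1)
        first = rate if first is None else first
        last = rate
        a += (a ** r) * dt               # one Euler step
    return first, last

start, end = growth_rates(1.2)           # r > 1: feedback loop
flat_start, flat_end = growth_rates(1.0) # r = 1: no acceleration
```

With `r > 1` the proportional rate climbs as `A` compounds (the loop feeds itself); with `r = 1` progress is exponential but non-accelerating, matching the "one-off gain from automation" case.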
I see now why you could understand “RSI” to mean “AI improves itself at all over time”. But even so, the claim would still hold—even if (implausibly) AI gets no smarter than human-level, you’d still get accelerated tech development, because the quantity of AI research effort would increase at a growth rate much faster than the quantity of human research effort.
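The quantity argument can also be made concrete with a toy projection (all numbers here are hypothetical assumptions): even if each AI researcher is capped at human level, an AI population growing at compute-like rates soon dwarfs the human research workforce, so the growth rate of total research effort approaches the AI rate.

```python
# Hypothetical growth rates: human researcher headcount grows slowly,
# while the effective number of human-level AI researchers grows at a
# compute-driven rate. Both figures are illustrative assumptions.
HUMAN_GROWTH = 0.02   # ~2% per year
AI_GROWTH = 0.50      # 50% per year

humans, ais = 1_000_000.0, 1_000.0  # hypothetical starting headcounts
for year in range(30):
    humans *= 1 + HUMAN_GROWTH
    ais *= 1 + AI_GROWTH

# After 30 years the AI share dominates, so total research effort
# grows at nearly AI_GROWTH rather than HUMAN_GROWTH, even though no
# individual AI ever exceeds human-level capability.
total = humans + ais
ai_share = ais / total
```

Under these assumptions the AI workforce overtakes the (initially 1,000x larger) human one well within the 30-year window, which is the sense in which accelerated technological development does not require individual AIs to surpass human researchers.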