Another possibility, in principle, for why automating AI R&D doesn’t lead to an intelligence explosion is that a very large share of progress (at that point in the development trajectory) comes from scaling rather than from algorithmic progress.
This is actually happening today, so the real question is why the returns to algorithmic progress would increase once we attempt to fully automate AI R&D, rather than why we won’t get an intelligence explosion.
More specifically, the algorithmic progress that has happened is basically all downstream of more compute going into AI, and reaping the gains of better algorithms depends on the compute scale getting larger and larger.
FWIW the coauthor of the paper you linked provides more nuance here.
The nuance was that their framework can’t determine whether data or compute scaling drove the majority of the improvements, nor can it separate data improvements from compute improvements. But the core finding, that algorithmic efficiency gains are almost entirely dependent on compute scaling, still holds, so if we froze the compute stock at today’s level, we would see essentially zero further improvement in AI.
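To make that reading concrete, here is a minimal toy sketch in Python; the functional form and every number in it are illustrative assumptions of mine, not results from the paper. The point is just that if the algorithmic-efficiency multiplier is modeled as a function of the compute stock itself, then freezing compute also freezes effective compute.

```python
import math

# Toy sketch (assumed functional form, not from the linked paper): treat "effective
# compute" as physical compute times an algorithmic-efficiency multiplier, and let
# that multiplier grow only when the compute stock itself grows.

def algorithmic_multiplier(compute, base_compute=1.0, gain_per_doubling=1.4):
    """Efficiency multiplier unlocked per doubling of the compute stock (assumed)."""
    doublings = max(0.0, math.log2(compute / base_compute))
    return gain_per_doubling ** doublings

def effective_compute(compute):
    return compute * algorithmic_multiplier(compute)

# While compute is growing, scaling and algorithmic gains compound together.
for step, compute in enumerate([1, 4, 16, 64]):
    print(f"step {step}: physical {compute:>3}x -> effective {effective_compute(compute):.1f}x")

# With a fixed compute stock, the multiplier is frozen too, so effective compute
# stops moving -- the "essentially zero further improvement" reading of the claim.
frozen = 64
print(f"frozen at {frozen}x physical compute: effective stays {effective_compute(frozen):.1f}x")
```

Under these assumptions, growing compute compounds both terms, while a fixed stock leaves effective compute flat indefinitely, which is the intended sense of "essentially zero improvements" above.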