I am not concerned about this scenario. It does not matter if this is feasible or not (it might be theoretically feasible, but other things will almost certainly happen first).
The labs are laser-focused on algorithmic improvements, and the rate of those improvements is very fast (at the moment they contribute more than hardware improvements do).
The AIs are being optimized to do productive software engineering and to productively assist in AI research, and soon to perform productive AI research almost autonomously.
So the scenario I tend to ponder is, in some sense, dual to the one described in this post: a software-only intelligence explosion driven by non-saturating recursive self-improvement within a fixed hardware configuration. Of course, the labs are all trying to scale hardware as well, because they are in a race, and every bit of advantage matters if one wants to reach ASI level before the other labs do. That race dynamic is also quite unfortunate from the existential-safety angle.
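To make "non-saturating" concrete, here is a toy sketch (all parameters and the functional form are hypothetical, chosen only for illustration): model capability C as growing at a rate that depends on current capability, dC/dt = k·C^p. When p < 1 the self-improvement saturates into sub-exponential growth; p = 1 gives ordinary exponential growth; p > 1 gives super-exponential growth that diverges in finite time, a crude stand-in for the non-saturating regime.

```python
# Toy model of recursive self-improvement under fixed hardware (illustrative only).
# Capability C feeds back into its own growth rate: dC/dt = k * C**p.
#   p < 1  -> saturating (sub-exponential) improvement
#   p = 1  -> plain exponential growth
#   p > 1  -> finite-time divergence ("non-saturating" explosion)

def simulate(p, k=0.05, c0=1.0, dt=0.01, t_max=100.0, cap=1e12):
    """Euler-integrate dC/dt = k * C**p; stop at t_max or when C exceeds cap."""
    c, t = c0, 0.0
    while t < t_max and c < cap:
        c += k * (c ** p) * dt
        t += dt
    return t, c

if __name__ == "__main__":
    for p in (0.5, 1.0, 1.5):
        t, c = simulate(p)
        status = "diverged (hit cap)" if c >= 1e12 else "still finite"
        print(f"p={p}: stopped at t={t:.1f}, C={c:.3g} ({status})")
```

With these (made-up) parameters, p = 1.5 blows past the cap around t ≈ 40 (the analytic solution diverges at t = 2/(k·√C₀) = 40), while p ≤ 1 stays finite over the whole window. The interesting empirical question is which regime the real feedback loop of AI-assisted AI research sits in.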
Today we finally got the LMArena results for the new R1; they are quite impressive overall and in coding, less so in math.
https://x.com/lmarena_ai/status/1934650635657367671