Maybe distracting technicality:
This seems to make the simplifying assumption that the R&D automation is applied to a large fraction of all the compute that was previously driving algorithmic progress, right?
If we imagine that a company only owns 10% of the compute being used to drive algorithmic progress pre-automation (and is only responsible for, say, 30% of its own algorithmic progress, with the rest coming from other labs/academia/open-source), and this company is the only one automating its AI R&D, then the effect on overall progress might be reduced (the 15X multiplier only applies to 30% of the relevant algorithmic progress).
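To make that arithmetic concrete, here's a rough sketch (assuming, as simplifications I'm adding here, that progress from other labs/academia/open-source continues at its old rate and that the contributions combine additively): if $f$ is the fraction of the actor's algorithmic progress it drives itself and $M$ is the automation multiplier on that part, the actor's overall speedup is roughly

$$f \cdot M + (1 - f) \approx 0.3 \times 15 + 0.7 \approx 5.2\times,$$

i.e. a 15X internal multiplier shrinks to something like a 5X overall speedup in this scenario.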
In practice I would guess that either the leading actor has enough of a lead that they are already responsible for most of their own algorithmic progress, or other groups are close behind and will thus automate their own AI R&D around the same time anyway. But I could imagine this slowing down the impact of initial AI R&D automation a little bit (and it might make a big difference for questions like "how much would it accelerate a non-frontier lab that stole the model weights and tried to do recursive self-improvement?").
I do threat modeling and ‘risk assessment’ at METR, and often get asked what threat models I’m most focused on. I recently wrote a quick tweet thread with some rough thoughts which may be of interest to people here:
https://x.com/HjalmarWijk/status/1988070278149353894
Note that, as mentioned in the thread, I expect that many of my colleagues disagree with my perspectives here.