The first thing that comes to mind is to ask what proportion of human-generated papers are actually worth publishing (since a lot of them are slop), but let’s not forget that publication matters little for catastrophic risk; what would matter is actually getting results.
So I recommend not updating at all on AI risk based on Sakana’s results (or updating downward if you expected R&D automation to come sooner, or thought this might slow down human augmentation).