“One year is already a long time in AI, but during the intelligence explosion it is so long that it means irrelevance.”
This may seem obvious to you, but it is not at all obvious to many AI researchers, myself included. Can you share references to any surveys of AI researchers, or research papers, that formally argue for this claim? I have heard some researchers make it, but despite having read pretty widely in AI research, I have not seen anything like a serious empirical or theoretical attempt to justify or defend it.
As a follow-on, I would ask: what is the mechanism for this “irrelevance,” and why does it not appear in your scenario? We are meant to be terrified of early-2028 frontier models going rogue, yet by early 2029 (given a one-year lag) models with those same capabilities would be in the hands of the general public and widely deployed, presumably many with no guardrails at all, or even with overtly dangerous goals. And yet your scenario contains no military first strike by OpenBrain on the owners of these open models, nor, again, any mention of these open models at all.