One year is already a long time in AI, but during the intelligence explosion it is so long that it means irrelevance.
This may seem obvious to you, but it is not at all obvious to many AI researchers, myself included. Can you share references to any surveys of AI researchers or research papers formally arguing this claim? I have heard some researchers make this claim, but despite having read pretty widely in AI research I have not seen anything like a serious empirical or theoretical attempt to justify or defend it.
As a follow-on I would ask you: what is the mechanism for this “irrelevance” and why does it not appear in your scenario? In your scenario we are meant to be terrified of early 2028-frontier models going rogue, but by early 2029 (based on a one-year lag) models with those same capabilities would be in the hands of the general public and widely deployed (presumably many with no guardrails at all, or even overtly dangerous goals). And yet in your scenario there is no military first strike on the owners of these open models by OpenBrain, nor indeed any mention of these open models at all.
We had very limited space. I think realistically in the Race ending they would be doing first strikes on various rivals, both terrorist groups and other companies, insofar as those groups seemed to be a real threat, which most of them, and probably all of them, wouldn’t be. It didn’t seem worth talking about.
I think it depends on takeoff speeds? It seems a fairly natural consequence of the takeoff speed we describe in AI 2027, so I guess my citation would be the Research page of AI-2027.com. I don’t have a survey of opinions on takeoff speeds, sorry, but I wouldn’t trust such a survey anyway since hardly anyone has thought seriously about the topic.