Is it?
It certainly signals that the authors have a competent grasp of the AI industry and its mainstream models of what’s happening. But is it actually competent AI-policy work, even under the e/acc agenda?
My impression is that no, it’s not. It seems to live in an e/acc fanfic about a competent US racing to AGI, not in reality. It vaguely recommends doing a thousand things that would be nontrivial to execute even if the Eye of Sauron were looking directly at them, and the Eye is very much not doing that. On the contrary, the wider Trump administration is doing things that directly contradict the plan’s most important recommendations (energy, chips, science, talent, “American allies”), and this document seems to pretend this isn’t happening. A politically effective version of this document would have been written in a very different way; this one seems to be written mainly for entertainment/fantasizing purposes.
Like, it demonstrates that the people tasked with thinking about AI in the Trump administration have a solid enough understanding of the AI industry to recognize which policies would accelerate capability research. But that understanding hasn’t translated into capability-positive policy decisions so far. Is there reason to think this plan’s publication is going to turn that around...?
Is my take here wrong? I don’t have much experience here; this is a strong opinion weakly held. (Addressing that question to @Zvi as well.)
I believe that this is a competently executed plan from the perspective of those executing it, which is different from the overall policy of the White House being competent; those in charge of the plan lacked the power to do anything about the rest (e.g. immigration, attacks on solar power, trade and alliances in general...).
Sorry, by “executed” I meant “competently written”. It does seem to me that this piece of policy is more grounded in the important bottlenecks and detailed knowledge of AI progress than previous efforts of this kind.
I find it plausible that it might fail on implementation because it models the Trump administration and America as too much of a coherent entity that could actually execute this plan. I agree with you that it is not unlikely it will fall flat on those grounds, and that does give me some solace.