The rhetoric is terrible, although not as terrible as my median expectation was for it, because of what is not said.
Hmm, the rhetoric seems really quite bad. Like, “this is the worst policy document I have read so far in terms of rhetoric relating to very powerful AI systems” bad. I think I would have even preferred it explicitly arguing or claiming that there is no long-term risk from AI, or explicitly dismissing x-risk, because that would have given it a factual grounding that implied some future opportunity to change course, but instead it seems to fully occupy a fantasy land in which there are no catastrophic risks from AI that are not caused by foreign adversaries.
And also, this is in the reference class of an Executive Order, not a bill. The rhetoric is the central substance. Most of the provisions and suggestions in this plan are vague and up for interpretation, and the rhetoric is the thing giving the central direction in which that vagueness will be interpreted. The main effect of this plan is the rhetoric. It has approximately no force beyond that. If something in this plan later turns out to be in conflict with the central vibe of the document, it will have little problem being changed or dropped, and if someone comes up with something new that is very aligned with the vibe, it will have little problem being added. That’s my model of the nature of this kind of plan.
And I don’t think this is anything like the level of ‘full economic mobilization.’
I think it’s clearly calling for full economic mobilization. I don’t expect that to be the result, but the opening paragraph really sounds to me like it’s saying that that would be the ideal response. “Winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people” is some really strong “this is basically the most important thing” rhetoric.
Setting aside competence level, it is hard to think of how this report could have been better given who was in charge of directing and approving the report.
I have no issue imagining how it could have been better? Like, I am not fully sure what you mean by “given who was in charge of directing and approving the report”, but the administration has published tons of reports with really quite random policy positions, so I don’t think the Overton window is particularly narrow. Below I do a more thorough analysis of each heading in the plan, and it really doesn’t seem hard for me to imagine how it could have been better.
But setting that aside, I mean, if your headline summary of this policy would have been “this is absolutely terrible AI policy. Like, among the worst AI policies that I think one could choose. Worse than approximately anything previously said on this topic, but still somehow better than anything else I could have imagined coming out of this administration”, then sure, IDK, that seems like a fine take.
But that’s not what you said! You said this was a good plan, without basically any caveats. Of course it would have been better for there to be no plan. Of course any previous administration would have published a less bad plan. Saying something is a “good plan” is not normally meant to be interpreted as “a terrible plan that nonetheless is less destructive than most other plans that the people writing it could have come up with”.
I feel like you are trying to pull an Everybody Knows move here. No, not everybody knows that this administration is so fucked that this level of destructiveness is the best we can hope for, and also, I get the vibe that you yourself are not internally clear about this. This seems vastly worse than e.g. the Biden executive order. Indeed, it seems to get just enough right to be really bad (understanding that AI is a huge deal and has large military implications), but not enough right to dig itself out of that hole.
If you think things are so bad that the primary thing you want on realistic margins from America’s AI policy is incompetent execution
I don’t primarily want incompetent execution! I want competent execution on things that help, which marginally means:
Reducing, not increasing, investment in AI chip manufacturing
Not destroying the single global bottleneck for semiconductor manufacturing which provides the most obvious point of leverage for future global coordination
Framing AI takeoff as a coordination challenge whose central difficulty is de-escalation, and especially pause-capability around critical capability thresholds, which are not too far in the future
Recognition and understanding that ASI needs to be modeled more like an autonomous adversary with its own goals and aims
Recognition that we have little traction on aligning or controlling superintelligent systems, and further research is unlikely to provide any reasonable safety guarantees in time
All of these are within the Overton window right now. I think the first and second the least, but I think it would have still been totally possible for a report like this to just straightforwardly say “we think it’s super reckless to race towards AI, especially under adversarial conditions, and our top priority is diplomacy on preventing a race”. Even this administration has in other contexts played up its diplomacy and peace-negotiation aspects extensively. Yes, of course many things have gone wrong on the way to this, but it’s clearly within the realm of possibility if you imagine a few things happening differently just a few months back.
And look, I am not saying I want a plan that says all of these things, or even any of these things. I would have been fine with a plan that doesn’t really talk about these things at all and mostly ignored the topic. Or a plan that focuses on some random open-source AI dimension which is a bit bad, but isn’t committing you to a race. Or a plan that really had any other framing besides “an adversarial race towards AGI with the explicit stance that unfathomable riches and complete destruction of their enemies will await anyone who gets there first”. That really is just so close to the worst possible stance to take on these things that I can imagine.
On international treaties, I fail to see how anything here makes that situation any worse than baseline, including the rhetoric. Given what has already been said and done, you shouldn’t be crying about that more than you were last week.
No, this action plan is a big deal. Every month is an opportunity to do better, and there has been no previous document with as much buy-in that framed things as explicitly as a race towards extremely powerful AI systems. This made a very meaningful difference.
Again, the framing of “AGI is a super big deal, by which we mean if you get there first you win, you just get extremely rich, you can disempower all of your enemies, and everything is amazing and perfect” is not an overdetermined position! It is a position that appears almost chosen to be pessimal from the perspective of minimizing negative outcomes of AI race dynamics. I am not aware of really anyone in this administration, or any other administration across the world, taking a stance like this on this topic.
On the substance, this was much better than expectations, except that we agree it had unexpectedly competent execution.
I am assuming that by the substance being better you mean the actual things it advocates doing are good in terms of outcomes, not just good in terms of execution. I don’t buy this. Let’s go through the Table of Contents and talk about what is actually being called for here:
Pillar I: Accelerate AI Innovation
Remove Red Tape and Onerous Regulation
This is bad.
Ensure that Frontier AI Protects Free Speech and American Values
This is irrelevant.
Encourage Open-Source and Open-Weight AI
This is bad.
Enable AI Adoption
This is bad.
Empower American Workers in the Age of AI
This is bad.
Support Next-Generation Manufacturing
This is very bad.
Invest in AI-Enabled Science
This is OK, maybe good, depending on how much it will be about accelerating AI supply chain things.
Build World-Class Scientific Datasets
This could be pretty good, especially the genomic sequencing stuff.
Advance the Science of AI
This is bad (this is just calling for more capability research).
Invest in AI Interpretability, Control, and Robustness Breakthroughs
This is kind of good! Or like, IDK, I don’t think interpretability or robustness help much. But control is kind of nice. I am glad to see this.
Build an AI Evaluations Ecosystem
This (as framed) is bad! These are not risk evaluations. These are just “we need to get better metrics to Goodhart on to make progress go faster”. Not all evals are good, especially if you build them with the purpose of iterating on them to make things go faster.
Accelerate AI Adoption in Government
This could be good. Possibly the best thing in the whole plan. I think the government somehow experiencing AI uplift and making better decisions is one of the few ways things could go well. It’s plausible this pulls everything into the green, though I am currently far from believing that.
Drive Adoption of AI within the Department of Defense
This is bad! The Department of Defense should not experience uplift, because we do not want a military arms race.
Protect Commercial and Government AI Innovations
This could be good, could be bad. Not a clear take. Security can be good when you want a more responsible actor, one that might want to enforce a pause, to have a lead, but is bad if you are trying to reduce economic incentives to race.
Combat Synthetic Media in the Legal System
This is irrelevant.
Pillar II: Build American AI Infrastructure
Create Streamlined Permitting for Data Centers, Semiconductor Manufacturing Facilities, and Energy Infrastructure while Guaranteeing Security
This is very bad.
Develop a Grid to Match the Pace of AI Innovation
This is very bad.
Restore American Semiconductor Manufacturing
This is very, very bad, basically the worst. Again, the semiconductor supply chain bottleneck is approximately our biggest hope for future coordination.
Build High-Security Data Centers for Military and Intelligence Community Usage
Eh, this could be fine, same as the security point above.
Train a Skilled Workforce for AI Infrastructure
This is bad.
Bolster Critical Infrastructure Cybersecurity
This is mostly irrelevant.
Promote Secure-By-Design AI Technologies and Applications
This is irrelevant.
Promote Mature Federal Capacity for AI Incident Response
This could be quite good! It’s framed around accidents, and while it really does not mention any autonomous AI systems causing issues, I do quite like having “AI incident response” as a priority.
Pillar III: Lead in International AI Diplomacy and Security
Export American AI to Allies and Partners
This could be good, could be bad, unsure. My guess is bad.
Counter Chinese Influence in International Governance Bodies
This is bad.
Strengthen AI Compute Export Control Enforcement
I don’t have a strong take on export controls. It seems like they have very badly damaged diplomatic efforts and indeed set us off on this whole path, but they could be quite good. I think overall probably good given where we are at.
Plug Loopholes in Existing Semiconductor Manufacturing Export Controls
Same as above, probably good.
Align Protection Measures Globally
This is bad? It explicitly says something like “let’s really de-emphasize coordinating things globally, and instead try to focus conversations on the U.S. and its allies”, i.e. it is intentionally calling for de-emphasizing US-China negotiations.
Ensure that the U.S. Government is at the Forefront of Evaluating National Security Risks in Frontier Models
Eh, maybe good. It’s all about understanding the military offensive nature of AI systems, which I do think is good to think about, but also seems like it will exacerbate arms races (but maybe not!)
Invest in Biosecurity
Mostly irrelevant by my lights, but could be good. It is mostly concerned about bioterrorism.
This is a lot of “bads” and “very bads”! And again, given the overall framing of the whole document, I expect many more of the “goods” to be dropped, and many more “bads” to be added as a result of this plan. I agree with you there are some good things here, but all the things that are bad are the ones that matter the most, and basically all the things that are good are “nice to haves” that don’t really matter, IMO.
This was very helpful to me and we had a good talk about things.
I do think it is a correct criticism of my post to say that I should have emphasized more that I think the rhetoric used here, and the administration’s overall policy path, is terrible. After seeing everyone else’s responses be so positive, and after seeing Oliver put so much emphasis on the rhetoric versus the proposals, I’m sad about that, and plan to address it going forward, likely in the weekly (given reading patterns it would not do much to try and edit the post now).
Zvi, I disagree. I think there is no positive value to you taking a position on Trump admin rhetoric being good or bad. There are a zillion people commenting on that.
Your comparative advantage is focusing on specifics: what does the AI Action Plan instruct federal agencies to do? How can the most important priorities be implemented most effectively? Who is thinking clearly about this and has a plan for e.g. making sure the US has an excellent federal response plan when warning shots happen? Signal-boosting smart, effective policies and policy thinkers is how you can best use your platform. Don’t contaminate your message with partisanship.
Wading into “Trump is good/bad” discourse isn’t going to change who wins the election. People you know already spent ridiculous amounts of money on this, and were crushed. Time to ignore politics and focus on policy: specifically, complicated tech policy that most people don’t understand, that you have the ability to shape.
Appreciate your take here, Habryka.

I agree it’s clear the AI Action Plan doesn’t reflect most of your priorities. A naive observer might say that’s because the accelerationists spent hundreds of millions of dollars building relationships with the group of people in charge of Congress and the White House, while the safety crowd spent their money on Biden and failed computer science research (i.e. aligning AI to “human values”).
From this position of approximately no political capital, one interpretation of the AI Action Plan is that it’s fantastic that safety concerns got any concessions at all.
Here’s what a cynical observer might say: the government is basically incompetent at doing most things. So the majority of the accelerationist priorities in the plan will not be competently implemented. It’s not like the people who understand what’s going on with AI would choose slogging through the interagency process over making tons of money in industry. And even if they did, who’s going to be more effective at accelerating capabilities? Researchers making $100 million at Meta (who would be doing this work regardless of the Action Plan), or government employees (well known for being able to deliver straightforward priorities like broadband access to rural America)?
The cynical observer might go on to say: and that’s why you should also be pessimistic about the more interesting priorities being implemented effectively.
But here is where optimistic do-gooders can step in: if people who understand AI spend their precious free time thinking as hard as possible about how to do important things—like making sure the US has an excellent system to forecast risks from AI—then there’s a possibility that good ideas generated by think tanks/civil society will be implemented by the US Government. (Hey, maybe these smart altruistic AI people could even work for the government!) I really think this is a place people might be able to make a difference in US policy, quite quickly.