Here’s more on why I think classification of information is likely. Copy pasted from a different document.
US intelligence circles will significantly underestimate the national security implications of AI, and lots of information about AI companies will not become classified - Disagree
I think AI will be the number one national security issue of the US by 2026 or 2027 and lots of important information will get classified soon after.
I’m putting more evidence here because this argument got upvoted.
Attention of govt and intelligence circles
Paul Nakasone
ex-Director of NSA, now on OpenAI board
Recent talk by Paul Nakasone
Says: Cybersecurity, AI, and protecting US intellectual property (including AI model weights) are the primary focus for the NSA.
Likely a significant reason why he was hired by Sam Altman.
Timothy Haugh
ex-Director of NSA, fired by Trump in 2025
Recent talk by Timothy Haugh (12:00 onwards)
Says: Cybersecurity and AI are top challenges for the US govt.
Mentions AI-enabled cyberwarfare such as automated penetration testing.
Says: Over 7,000 NSA analysts are now using LLMs in their toolkit.
William Burns
ex-Director of CIA
Recent talk by William Burns (22:00 onwards)
Says: The CIA’s ability to adapt to emerging technologies, including large language models, is the number one criterion for the CIA’s success.
Says: Analysts use LLMs to process large volumes of data, including biometric data and city-level surveillance data.
Says: Aware of ASI risk as a theoretical possibility.
Says: CIA uses social media to identify and recruit potential Russian agents.
Avril Haines
ex-Director of National Intelligence, ex-Deputy Director of the CIA, ex-Deputy National Security Advisor
Recent talk by Avril Haines
Says: A major priority for her is Russian and Chinese interference in US elections using generative AI on social media.
US policy efforts to restrict GPU exports to China
Significant US policy efforts are already in place to restrict GPU exports to China and other countries.
China is currently bypassing export controls, which will lead US intelligence circles to devise measures to tighten them.
Export controls are a standard lever that US policymaking and intelligence circles pull on many technologies, not just AI. This ensures the US remains at the frontier of R&D in most science, technology and engineering.
Attention of Big Tech companies
Leaders of Big Tech companies, including Jensen Huang, Satya Nadella, Larry Ellison, Reid Hoffman, Mark Zuckerberg, Elon Musk and Bill Gates, have made public statements that their major focus is AI competitiveness.
Elon Musk
Elon Musk is explicitly interested in influencing US govt policy on tech.
As of 2025-05, Elon Musk likely owns the world’s largest GPU datacenter.
Has publicly spoken about AI risk on multiple podcasts
Mark Zuckerberg
As of 2025-05, open-sources Meta’s latest AI models
Has previously interacted with multiple members of Congress
Has publicly spoken about AI risk on multiple podcasts
People who understand nothing about AI will follow lagging indicators like capital and attention. This includes people within the US govt.
Capital inflow to AI industry and AI risk
OpenAI, DeepMind and Anthropic have posted total annual revenue of roughly $10B in 2024. At revenue multiples typical of high-growth tech companies (roughly 10x to 100x), this implies a combined market cap between $100B and $1T as of 2024. For reference, the combined market cap of Apple, Google, Microsoft and Facebook is about $10T as of 2025-05.
All Big Tech companies have experience handling US classified information. Amazon and Microsoft manage a significant fraction of US government datacenters.
If you believe AI in 2027 will be significantly better than AI in 2024, you can make corresponding estimates of revenue and market cap (a rough back-of-envelope sketch is below).
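A minimal sketch of that back-of-envelope estimate. The $10B revenue figure is the claim above; the growth rates and revenue multiples are hypothetical assumptions, not reported data:

```python
# Back-of-envelope sketch: project frontier-lab revenue and implied valuation.
# All figures below are illustrative assumptions, not reported data.

revenue_2024 = 10e9  # combined 2024 revenue claimed above (~$10B)

# Hypothetical annual revenue growth factors between 2024 and 2027.
growth_scenarios = {"conservative": 1.5, "aggressive": 3.0}

# Hypothetical revenue multiples for high-growth tech companies.
low_multiple, high_multiple = 10, 100

for name, growth in growth_scenarios.items():
    revenue_2027 = revenue_2024 * growth ** 3  # three years of compounding
    low_cap = revenue_2027 * low_multiple
    high_cap = revenue_2027 * high_multiple
    print(f"{name}: 2027 revenue ~${revenue_2027 / 1e9:.0f}B, "
          f"implied valuation ~${low_cap / 1e12:.1f}T to ~${high_cap / 1e12:.1f}T")
```

Swap in your own growth and multiple assumptions; the point is only that plausible numbers put frontier labs in the same valuation league as Big Tech within a few years.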
Attention inflow to AI industry
OpenAI claims 400 million weekly active users. That is roughly 5% of the world population (400M out of ~8.1B people). For reference, an estimated 67% of the world population has ever used the internet.
As of 2025-05, Geoffrey Hinton speaking about AI risk has been covered by mainstream news channels across the world, which has significantly increased the fraction of humanity that is aware of AI risk. (You can test this hypothesis by speaking to strangers outside of your friends-of-friends bubble.)
It’s not that I don’t think people will want to classify information, probably at least as much about how AI is being applied as about the basic technology.
It’s that:
Those people have a keen appreciation of the agility costs, which are very large. If they’re smart, they’ll want to do it smoothly and selectively.
There are countervailing political forces, and it will take time to overcome those even if a win is assured in the end.
It takes a long time to actually put that stuff in place, especially across multiple large, preexisting private organizations in peacetime. And even longer for it to become actually effective.
To move any large fraction of what an organization is doing under a classification umbrella (or under a comparable private regime), you have to:
Convince (or coerce) the right people to do it. You’ll hit resistance, because it’s an expensive pain in the butt. If anybody has to be strongarmed, then that will involve convincing those who are in a position to do the strongarming, which also takes time.
Set up a program and find people to run it. Or convince somebody else’s already overworked program to do it for you.
Define the scope and do the planning.
Possibly significantly restructure your organization and operations.
Probably cut loose most of any international collaborations.
Set up compliance, auditing, and reporting procedures, which are not simple or lightweight.
Clear your people. This takes time. You’ll have to work through an outside office. Individual investigations take months, and there’s a waiting list. Some of your people won’t pass. You will have to replace those people, and perhaps find something for them to do in some unclassified “rump” of your organization. Expect this to cause pushback, and not just from the affected individuals.
Train your people.
Set up technical security measures and facilities (usually meaning several large semi-independent projects with their own delays). The buildings have to change.
Wait a while for all the bugs to shake out. You will be relatively leaky while that’s going on.
Obviously it’s faster to do it for a relatively small fraction of your information, but it’s not what you’d want to call fast no matter what. And if the classified part is very small, it tends to have limited influence over what the rest of the organization is doing, as well as finding it relatively hard to maintain its own internal security.
I’ve never actually done any of this, but I’ve been close enough to it to see, at least through gossip, the outlines of how big a deal it is.
I think the biggest meta input I’ve gotten from your feedback is that I need to publish a red-teaming document for the theory of change of this work.
Also copied from another document. Sorry, I may need to publish all my work more clearly first, before soliciting expert feedback. Keen on your thoughts.
AI capability increases will outpace the ability of US intelligence circles to adapt, so lots of information won’t become classified. - Weakly disagree
I put a low (but not zero) probability on getting ASI by 2027. If we instead get ASI by 2030, I think there’s enough time for them to adapt.
Classifying information is possible without significant changes in org structure or operational practices of AI labs. This means it can be done very quickly.
Classification is a legal tool.
The actual operational practices to defend information can take multiple years to implement, but this can come after information is already marked classified in terms of legality.
US govt can retroactively classify information after it has already been leaked.
This allows the US govt to pursue a legal case against the whistleblower under the Espionage Act and prevent them from presenting evidence in court, because it is now classified information.
The specific detail of whether information was classified at the time of leaking is less important than whether it poses a national security threat as deemed by US intelligence circles. (Law does not matter when it collides with incentives, basically.)
Case studies
Mark Klein’s 2006 leak of AT&T wiretapping - retroactively classified
Hillary Clinton 2016 email leak - retroactively classified
Abu Ghraib abuse photographs, 2004 - retroactively classified
Sgt. Bowe Bergdahl 15-6 investigation file, 2016 - retroactively classified
The actual operational practices to defend information can take multiple years to implement, but this can come after information is already marked classified in terms of legality.
Well, yeah, but normally if you declare something classified, you’re supposed to shut it all down until those practices are in place. That’s a huge cost in this context, one that the people making the decisions may not be willing to accept.
… and if you do declare it classified but don’t actually protect it, that means that the whistleblower is dealing with a different landscape. If you’re planning to disclose anonymously, the actual protections, not the legal status, are what matter. Of course, if you’re not anonymous, the converse applies.
US govt can retroactively classify information after it has already been leaked.
… and it’s getting nothing but more lawless in doing that.
If you’re dealing with a lawless enough government, it doesn’t even matter if what you did is actually illegal. Your life can be totally destroyed even if you win some kind of court victory 30 years later.
I’m saying this has happened multiple times in the past already, and it has a high probability of happening again.
We seem to be mostly on the same page here honestly.