It’s not that I don’t think people will want to classify information; they probably will, at least as much about how AI is being applied as about the basic technology.
It’s that:
Those people have a keen appreciation of the agility costs, which are very large. If they’re smart, they’ll want to do it smoothly and selectively.
There are countervailing political forces, and it will take time to overcome those even if a win is assured in the end.
It takes a long time to actually put that stuff in place, especially across multiple large, preexisting private organizations in peacetime, and even longer for it to become actually effective.
To move any large fraction of what an organization is doing under a classification umbrella (or under a comparable private regime), you have to:
Convince (or coerce) the right people to do it. You’ll hit resistance, because it’s an expensive pain in the butt. If anybody has to be strongarmed, that will involve convincing those who are in a position to do the strongarming, which also takes time.
Set up a program and find people to run it. Or convince somebody else’s already overworked program to do it for you.
Define the scope and do the planning.
Possibly significantly restructure your organization and operations.
Probably cut loose most of any international collaborations.
Set up compliance, auditing, and reporting procedures, which are not simple or lightweight.
Clear your people. This takes time. You’ll have to work through an outside office. Individual investigations take months, and there’s a waiting list. Some of your people won’t pass. You’ll have to replace those people, and perhaps find something for them to do in some unclassified “rump” of your organization. Expect this to cause pushback, and not just from the affected individuals.
Train your people.
Set up technical security measures and facilities (usually meaning several large semi-independent projects with their own delays). The buildings have to change.
Wait a while for all the bugs to shake out. You will be relatively leaky while that’s going on.
Obviously it’s faster to do this for a relatively small fraction of your information, but it’s not what you’d want to call fast no matter what. And if the classified part is very small, it tends to have limited influence over what the rest of the organization is doing, and it finds it relatively hard to maintain its own internal security.
I’ve never actually done any of this, but I’ve been close enough to it to see, at least through gossip, the outlines of how big a deal it is.
I think the biggest meta input I’ve gotten from your feedback is that I need to publish a red-teaming document for the theory of change of this work.
This is also copied from another document. Sorry, I may need to publish all my work more clearly before soliciting expert feedback. Keen on your thoughts.
AI capability increases will outpace the ability of US intelligence circles to adapt, and lots of information won’t become classified. - Weakly disagree
I put low (but not zero) probability on ASI by 2027. If we get ASI by 2030, I think there’s enough time for them to adapt.
Classifying information is possible without significant changes to the org structure or operational practices of AI labs, which means it can be done very quickly.
Classification is a legal tool.
The actual operational practices to defend information can take multiple years to implement, but this can come after the information has already been legally marked classified.
The US govt can retroactively classify information after it has already been leaked.
This allows the US govt to pursue a legal case against the whistleblower under the Espionage Act and to prevent them from presenting evidence in court, because it is now classified information.
Whether the information was classified at the time of the leak matters less than whether US intelligence circles deem it a national security threat. (Law does not matter when it collides with incentives, basically.)
Case studies
Mark Klein’s 2006 leak of AT&T wiretapping - retroactively classified
Hillary Clinton’s 2016 email leak - retroactively classified
Abu Ghraib abuse photographs, 2004 - retroactively classified
Sgt. Bowe Bergdahl 15-6 investigation file, 2016 - retroactively classified
The actual operational practices to defend information can take multiple years to implement, but this can come after the information has already been legally marked classified.
Well, yeah, but normally if you declare something classified, you’re supposed to shut it all down until those practices are in place. That’s a huge cost in this context, one that the people making the decisions may not be willing to accept.
… and if you do declare it classified but don’t actually protect it, the whistleblower is dealing with a different landscape. If you’re planning to disclose anonymously, the actual protections, not the legal status, are what matter. Of course, if you’re not anonymous, the converse applies.
The US govt can retroactively classify information after it has already been leaked.
… and it’s getting nothing but more lawless in doing that.
If you’re dealing with a lawless enough government, it doesn’t even matter whether what you did was actually illegal. Your life can be totally destroyed even if you win some kind of court victory 30 years later.
I’m saying this has already happened multiple times in the past, and it has a high probability of happening again.
We seem to be mostly on the same page here, honestly.