robustness to state-backed hacking programs was unachievable
How do you reconcile that with the fact that Claude has recently been used by the US Government to process classified information? Presumably they have a special version on special servers for that, but still, this looks like some degree of robustness that might be achieved with a model not served to a wide audience.
I think this is referring to protecting Claude’s weights from being stolen by a state-backed hacker, not about making Claude usable by governments.
I believe these things are connected with each other: if the server and the software system in general are safe enough to work with lots of classified information on a regular basis, they're safe enough to store the weights as well.
Didn’t top secret US government networks have breaches by the Chinese before, when the stakes were probably lower? Are you thinking that those networks are much more secure now than they were a decade ago?
The deployment for top secret government networks probably is reasonably secure. The problem is that they also store those weights in a bunch of other data centers that are necessarily connected to the internet, and the only way to not do that would be to shut down their consumer product and lose 99% of their income.
I agree that the models served to civilian customers over API can’t realistically be secured from state adversaries, but if we are speaking about advanced AI R&D in the future, like in AI 2027, then it looks feasible to conduct it on protected servers. Maybe I misunderstood the author’s opinion.
No, it’s not feasible at the moment: keeping weights limited to servers with nation-state-level robustness would require building infrastructure that would halt frontier training progress at the relevant company for months, if not years.
Nation states steal classified info on a fairly regular basis, right? I’m not familiar enough with the field to have a definite opinion, but it’s not obvious to me that ASL-4 security is achievable with the controls we normally use on Secret or even Top Secret info.