By “top 3 priority”, I mean “among the top 3 most prioritized cyber attacks of that year”. More precisely, I’m discussing robustness against OC5 as defined in the RAND report linked above:
OC5: Top-priority operations by the top cyber-capable institutions
Operations roughly less capable than or comparable to 1,000 individuals who have experience and expertise years ahead of the (public) state of the art in a variety of relevant professions (cybersecurity, human intelligence gathering, physical operations, etc.) spending years with a total budget of up to $1 billion on the specific operation, with state-level infrastructure and access developed over decades and access to state resources such as legal cover, interception of communication infrastructure, and more.
This includes the handful of operations most prioritized by the world’s most capable nation-states.
Emphasis mine.
OK, sorry. That’s slightly below “top 3 priorities for the spies”, I think, but I still don’t think it’s reasonable to expect to protect a file that’s in use against it for 2 years.
@jbash What do you think would be a better strategy/more reasonable? Should there be more focus on mitigating risks after potential model theft? Or a much stronger effort to convince key actors to implement unprecedentedly strict security for AI?
Sorry; I’m not in the habit of reading the notifications, so I didn’t see the “@” tag.
I don’t have a good answer (which doesn’t change the underlying bad prospects for securing the data). I think I’d tend to prefer “mitigating risks after potential model theft”, because I believe “convince key actors” is fundamentally futile. The kind of security you’d need, if it’s even possible, would basically shut them down, which is equivalent to abandoning the “key actor” role to whoever does not implement that kind of security.
Unfortunately, “key actors” would also have to be convinced to “mitigate risks”, which they’re unlikely to do because that would require them to accept that their preventative measures are probably going to fail. So even the relatively mild “go ahead and do it, but don’t expect it to work” is probably not going to happen.
Account settings let you set mentions to notify you by email :)