I was in fact associating sophisticated insiders with actually having authorized access to model weights, and I’m not sure (even after asking around) why this is worded the way it is.
I don’t really understand your comment here: “I don’t understand the relevance of this. Of course almost no one at the partners has “authorized” access to model weights. This is in the cybersecurity section of the RSP.” The number of people with authorized access to a given piece of sensitive info can vary enormously (keeping this number no bigger than necessary is among the challenges of cybersecurity), and people can have authorized access to things that they are nevertheless not able to exfiltrate for use elsewhere. It is possible to have very good protection against people with authorized access to model weights, and it is possible to have very little protection against them.
My guess is that it is quite difficult for the people you’re gesturing at (e.g., people who can log in on the same machines but don’t have authorized access to model weights) to exfiltrate model weights, though I’m not personally confident of that.
> I was in fact associating sophisticated insiders with actually having authorized access to model weights, and I’m not sure (even after asking around) why this is worded the way it is.
Ok, cool. This then of course makes my top level post very relevant again, since I think a large number of datacenter executives and other people high up at Google seem likely to be able to exfiltrate model weights without too much of an issue (I am not that confident of this, but it’s my best guess from having thought about the topic for a few dozen hours now, and from having decent familiarity with cybersecurity considerations).
I think this puts Anthropic in violation of its RSP, even given your clarifications, since you have now clarified that those people would not be considered “sophisticated insiders” and so are not exempt (with some uncertainties I go into at the end of this comment).
> The number of people with authorized access to a given piece of sensitive info can vary enormously (keeping this number no bigger than necessary is among the challenges of cybersecurity), and people can have authorized access to things that they are nevertheless not able to exfiltrate for use elsewhere.
Sorry, I should have phrased things more clearly here. Let me try again:
I am describing a cybersecurity attack surface. Of course for the purpose of those attacks, we can assume that the attacker is willing to do things they are not authorized to do. People being willing to commit cybercrimes is one of the basic premises of the cybersecurity section of the RSP.
I am here describing an attack vector where anyone who has physical access to those systems is likely capable of exfiltrating model weights, at least as long as they get some amount of executive buy-in to circumvent supervision. It is extremely unlikely for most of the people who fit this description to have authorized access to model weights. As such, it is unclear what the relevance of people having “authorized access” is.
I also am honestly surprised that anyone at Google or Amazon or Microsoft is considered to have authorized access to weights. That itself is an update to me! I would have assumed nobody at those companies was allowed to look at the weights, or make inferences about e.g. the underlying architecture, etc.
I am now thinking you believe something like this: Yes, there are many people with physical access, but even if they succeed at exfiltrating the weights, realistically in order for them to do anything with the weights, word would reach the highest executive levels at these tech companies, and the highest executive levels at these tech companies all have authorized access to model weights. I.e. currently Satya or Demis have authorized access to model weights (not because you want to give them authorized access, but just because giving them authorized access is a necessity for using their compute infrastructure), and as such are considered sophisticated insiders.
Honestly, I find the idea of considering Satya or Demis “sophisticated insiders with authorized access to model weights” very confusing. Like, at least taken on its own, that has pretty big implications for race dynamics and technical diffusion between frontier labs, since apparently Anthropic wouldn’t consider it a security incident if Satya or Demis were to download Claude weights to a personal computer of theirs and reverse-engineer architectural details from them (as I am inferring they both have “authorized access”).
My guess is reality is complicated in a bunch of messy ways that is hard to capture with the RSP here. I do appreciate you taking the time to clarify things.
There are a number of things you say here that don’t seem right to me and/or aren’t capturing the intent of what I said. I prefer not to get into all of it, but just a couple of notes:
- My current impression is that we are “highly protected against most attackers’ attempts at stealing model weights,” specifically highly protected against the groups listed as “in scope” (which I think of as including employees at partner orgs who have physical access to machines but not authorized access to weights), and broadly in line with the letter and spirit of the ASL-3 Security Standard. This isn’t my call and I am not up on all of the details of how we’ve vetted the security controls for partners, but it’s my impression.
- An attacker being out of scope for the ASL-3 Security Standard does not mean “Anthropic wouldn’t consider it a security incident” if they stole (i.e., exfiltrated/improperly used) important assets.
In particular the first bullet point seems important and clear. I currently think this is unlikely to be true (assuming that e.g. most people in datacenter management and executives at these companies do not have authorized access to weights), but I don’t really know how to progress from here. I might write more if I happen to talk to more people in the field about it.
> An attacker being out of scope for the ASL-3 Security Standard does not mean “Anthropic wouldn’t consider it a security incident” if they stole (i.e., exfiltrated/improperly used) important assets.
That makes sense, though to be clear I was not trying to equate those two. I was saying “Anthropic wouldn’t consider it a security incident if someone with authorized model access were to use those weights however they see fit”. I.e., I was equating “authorized access” with “Anthropic wouldn’t consider it a security incident if they did stuff with the weights”.
But thinking more about it, it does seem like there is a natural difference between “authorized access to model weights” and “authorized to transfer model weights to new machines” or “authorized to perform operations on model weights without extensive logging”, and it makes sense to treat the latter as a security breach even if someone is authorized to access model weights in some sense.
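To make the distinction I’m drawing concrete, here is a toy sketch of treating “access” as several distinct, separately-audited capabilities rather than one bit. The class name, capability names, and policy are hypothetical, purely illustrative — nothing here reflects how Anthropic or its partners actually implement access control:

```python
from dataclasses import dataclass, field

@dataclass
class WeightsACL:
    """Toy model: 'access to weights' is not one bit but several capabilities."""
    can_read: set = field(default_factory=set)      # may run computations against weights
    can_transfer: set = field(default_factory=set)  # may copy weights to new machines
    audit_log: list = field(default_factory=list)   # every operation is recorded

    def read(self, user: str) -> bool:
        self.audit_log.append(("read", user))
        return user in self.can_read

    def transfer(self, user: str, dest: str) -> bool:
        self.audit_log.append(("transfer", user, dest))
        # Transferring requires a separate grant on top of read access.
        return user in self.can_read and user in self.can_transfer

# A user authorized to *access* weights, but not to move them anywhere:
acl = WeightsACL(can_read={"exec"}, can_transfer=set())
assert acl.read("exec")                    # authorized access...
assert not acl.transfer("exec", "laptop")  # ...but exfiltration is still a breach
assert len(acl.audit_log) == 2             # and both attempts are logged
```

Under a model like this, downloading weights to a personal machine would count as a security incident even for someone with “authorized access” in the read sense.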
This still leaves me in a kind of confused spot with regards to the security model here. From my perspective this still leaves hundreds of people[1] in the world who have both opportunity and motive to gain access to Anthropic model weights, with a bunch of people clearly outside of Anthropic, whose interests are misaligned with Anthropic’s, being labeled “sophisticated insiders” and therefore excluded from the threat model in a way that really isn’t obvious from reading the RSP.
And it’s not like I have no sympathy for the difficulty of getting this all right, but the attack surface here feels very different than the one I was expecting to be covered when reading the RSP.
Overall, again, thanks for taking the time to clarify things here. Given the first point it does seem like we have a disagreement about whether Anthropic is currently meeting its commitments, but it’s not super clear whether it’s worth either of our time to dig into it more.
[1] Maybe only tens, since I don’t actually know who you currently consider to have authorized access to model weights at these other companies, which would be less concerning, though it doesn’t change things that much if it e.g. includes all the top-level executives at these other companies, who have the biggest motive.
These systems are designed to resist individual operators subverting controls—competently built cloud infrastructure doesn’t allow subversion of access controls to production systems even with physical access to data halls. I’ll speak to AWS’s controls in particular as an example, but I want to emphasize that this is a metonym for any competently run CSP.
AWS’s Nitro System is specifically architected with “zero operator access”—there is no mechanism for any AWS personnel, including those with the highest privileges, to access customer data. These are designed and tested technical restrictions built into the hardware itself, not policy controls that can be overridden. The system uses tamper-resistant TPMs with hardware roots of trust, and there is no equivalent of a “root” user or administrative bypass—even for maintenance.
This has been independently validated by NCC Group, who found “no gaps in the Nitro System that would compromise these security claims” and “no indication that a cloud service provider employee can obtain such access...to any host.” You may also enjoy as a bonus a quick read through the Mantle whitepaper.
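The “hardware roots of trust” claim above rests on measured boot: each component’s measurement is folded into a running hash before control passes onward, so tampering with any stage changes the final value a verifier sees. A minimal sketch of TPM-style PCR-extend semantics (the stage names are made up; this illustrates the general mechanism, not AWS’s actual implementation):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new_pcr = SHA256(old_pcr || SHA256(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Measure each boot stage in order, starting from a zeroed register.
expected = b"\x00" * 32
for stage in [b"firmware-v1", b"bootloader-v1", b"kernel-v1"]:
    expected = extend(expected, stage)

# An operator swapping in a modified bootloader produces a different final
# value, even though they control everything after the root of trust.
tampered = b"\x00" * 32
for stage in [b"firmware-v1", b"bootloader-evil", b"kernel-v1"]:
    tampered = extend(tampered, stage)

assert tampered != expected
```

Because the chaining is one-way, an attacker cannot choose a later measurement that restores the expected value, which is why attestation against a known-good chain detects this class of tampering.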
The assumption that datacenter executives could “just walk up to” machines and exfiltrate data conflates physical proximity with system access. Physical access to a server room doesn’t necessarily grant access to customer data.
You can’t just walk up, but there is an extremely long history of easily available exploits given unlimited hardware access to systems, and the datacenter hardware stack is not up to the task (yet). Indeed, Anthropic themselves published a whitepaper outlining what would be necessary for datacenters to actually promise security even against physical hardware violations, which IMO clearly implies they do not think current datacenters meet that requirement!
Like, this is not an impossible problem to solve, but based on having engaged with the literature here a good amount, and having talked to a bunch of people with experience in the space, my strong sense is that if you gave me unlimited hardware access to the median rack that has Anthropic model weights on it while it is processing them, it would only require a mildly sophisticated cybersecurity team to access the weights unencrypted.