That really seems more like a question for governments than for Anthropic
+1. I do want governments to take this question seriously. It seems plausible to me that Anthropic (and other labs) could play an important role in helping governments improve their ability to detect/process information about AI risks, though.
it’s not clear why the government would get involved in a matter of voluntary commitments by a private organization
Makes sense. I’m less interested in a reporting system that’s like “tell the government that someone is breaking an RSP” and more interested in a reporting system that’s like “tell the government if you are worried about an AI-related national security risk,” regardless of whether that risk stems from a company breaking its voluntary commitments.
My guess is that existing whistleblowing programs are the best bet right now, but it’s unclear to me whether they are staffed by people who understand AI risks well enough to know how to interpret/process/escalate such information (assuming the information ought to be escalated).