If there had been common knowledge (which I think there probably would have been, absent Anthropic’s RSP and associated recruitment/marketing/comms efforts) that nation-state-robust cybersecurity was not achievable without very drastic actions, I do think this would have caused some people to change strategies substantially.
My sense is that most efforts aimed at nation-state-robust cybersecurity for AI (outside labs) have been driven by things like the RAND report, or by theories of change downstream of pieces like Situational Awareness which argue that governments may at some point push for TS/SCI-classified AI development (for some applications)—not necessarily by Anthropic’s RSP. I’ve never heard anyone mention the RSP directly in, for example, any discussion of why SL5 security is important.
That said: (a) I agree with you that this goal is impractical for general AI development, and it was foolish of Anthropic to commit to something close to security against state-backed attacks; (b) I have heard critiques of Jason Clinton’s PoV from parts of the AI cybersecurity community to the effect that, having never worked in an Intelligence Community cybersecurity role, he lacks information that would update him on the difficulties; (c) I remain confused about why some in the field still consider SL5 security for AI model weights a tractable or important goal.
That’s fair! I am mostly thinking of the AI safety community and the parts of it interested in cybersecurity. Around a year ago I had many discussions about the merits of attempting SL5 with people across the funding ecosystem, government AI-safety-interested circles, and AI policy think tanks, and never heard a mention of Anthropic’s RSP specifically, although it seems plausible it was a contributing factor in decisions to pursue that direction.