[speaking for me, not the Astra fellows from whom takes were sampled]
One of the updates for me from the report was just how difficult SL-4 is. I kind of knew SL-5 was very very difficult, but I didn’t realize how hard it was to get to SL-4 until the report came out (at which point I should’ve stopped trusting that the RSP would hold up in any major way).
So I guess the relevant audience is people that hadn’t thought about the practicalities of frontier lab security very deeply!
Makes sense! Agree that SL-4 is already extremely difficult, and indeed, seeing it as the target made it clear as soon as the report came out that the RSP would have to change substantially at some point.
I presume it refers to RAND’s “Securing AI Model Weights” report from May ’24, which Holden names and links to in his recent post.
Huh, OK. I am confused what audience would have been convinced by that report, but sure, any time is a good time to update in the correct direction.