If they weren’t ready to deploy these safeguards and thought the benefits of proceeding outweighed the (expected) cost in human lives, they should have publicly acknowledged the expected level of fatalities and explained why they thought weakening their safety policies and incurring those expected fatalities was net good.[1]
Public acknowledgement of the capabilities could be net negative in itself, especially if it resulted in media attention. Bringing awareness to the (possible) fact that the AI can assist with CBRN tasks likely increases the chance that people try to use it for CBRN tasks. I could even imagine someone trying to use these capabilities without malicious intent (e.g. just to see for themselves whether it’s possible), but this would still be risky. Also, knowing which tasks it can help with might make it easier to use for harm.
Given that AI companies have a strong conflict of interest, I would at least want them to report this to a third party and let that third party determine whether they should publicly acknowledge the capabilities.