What would be more persuasive is some evidence that AI is relatively more useful for making bioweapons than it is for doing things in general.
I see little reason to use that comparison rather than the question "will [category of AI models under consideration] improve offense (in bioterrorism, say) relative to defense?"
I agree it would be nice to have strong categories or a formalism pinning down which future systems would be safe to open source, but it seems like an asymmetry in expected evidence to treat the lack of consensus about systems that don't exist yet as a point in favor of open-sourcing. I think it's fair to say there is enough consensus that we don't know which future systems would be safe, and so we need more work to determine this before any irreversible proliferation.