And of course the right answer is “absolutely everyone”. It should be fully public. If your setup is such that it even makes sense to ask this question of “who should be allowed to know what cryptographic algorithm we use”, then your security system is a complete failure and nobody should trust you with so much as their mother’s award-winning recipe for potato salad, much less any truly sensitive data.
This makes sense for computer security, but for biosecurity it doesn’t work, because it’s a lot harder to ship a patch to people’s bodies than to people’s computers. The biggest reason there has never been a terrorist attack with a pandemic-capable virus is that, with few exceptions (such as smallpox), we don’t know what they are.
A: My understanding is that the U.S. Government is currently funding research programs to identify new potential pandemic-level viruses.
K: Unfortunately, yes. The U.S. government thinks we need to learn about these viruses so we can build defenses — in this case vaccines and antivirals. Of course, vaccines are what have gotten us out of COVID, more or less. Certainly they’ve saved a ton of lives. And antivirals like Paxlovid are helping. So people naturally think, that’s the answer, right?
But it’s not. First, learning whether a virus is pandemic-capable does not help you develop a vaccine against it in any way, nor does it help create antivirals. Second, knowing about a pandemic-capable virus in advance doesn’t speed up vaccine or antiviral research. You can’t run a clinical trial in humans on a new virus of unknown lethality, especially one that has never infected a human and might never. And given that we can design vaccines in a day, knowing what the threat is in advance doesn’t save you much time.
The problem is that there are around three to four pandemics per century that cause a million or more deaths, judging from the last ones: 1889, 1918, 1957, 1968, and 2019. There are probably at least 100 times as many pandemic-capable viruses in nature; it’s just that most of them never get exposed to humans, and when they do, they don’t infect another human soon enough to spread. They just get extinguished.
What that means is that if you identify one pandemic-capable virus, then even if you can perfectly prevent it from spilling over and there’s zero risk of accidents, you’ve prevented 1/100 of a pandemic. But if there’s a 1% chance per year that someone will assemble that virus and release it, then over the course of a century you’ve caused one full pandemic in expectation. In other words, you’ve just killed more than 100 times as many people as you saved.
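The arithmetic behind this can be sketched as follows. This is a rough expected-value calculation using the illustrative figures from the argument above (roughly 3–4 natural pandemics per century, at least 100 times as many pandemic-capable viruses, a hypothetical 1% annual release risk), not real epidemiological estimates:

```python
# Expected-value sketch of the identification trade-off, using the
# text's illustrative numbers (assumptions, not measured values).

pandemics_per_century = 3.5                      # ~3-4 per century historically
candidate_viruses = 100 * pandemics_per_century  # >=100x as many exist in nature

# Benefit: identifying one virus and perfectly preventing its spillover
# averts only its small share of the natural pandemic risk.
pandemics_prevented = pandemics_per_century / candidate_viruses  # = 1/100

# Cost: a 1% chance per year that someone assembles and releases the
# identified virus, accumulated over the same century-long horizon.
p_release_per_year = 0.01
years = 100
pandemics_caused = p_release_per_year * years    # = 1.0 in expectation

print(pandemics_prevented)                       # 0.01 pandemics averted
print(pandemics_caused)                          # 1.0 pandemics caused
print(pandemics_caused / pandemics_prevented)    # 100x worse in expectation
```

The ratio is exactly 100 here only because the release probability and the time horizon are round numbers; the qualitative point is that the expected harm dwarfs the expected benefit whenever assembly-and-release risk is non-negligible.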
See also: