“…it has become clear the implications [of Covid’s origin] are important…”
I’m inclined to get more precise about what’s important and what isn’t. (For the record, I’d put the lab leak hypothesis around 75%.)
Suppose that, after learning everything about bats, wet markets, bio labs, safety precautions, etc., we conclude that, in a typical year like 2019, there’s a 1% chance of a novel pandemic-causing virus coming from “nature”, and a 1% chance of a novel pandemic-causing virus getting leaked from a lab, but that all the evidence that would let us decide which actually occurred in 2019 seems to have been burned by the CCP or whatever. At that point, does it really matter which thing actually happened?
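The symmetry in that setup can be made concrete with a quick Bayesian calculation (the 1% figures are the hypothetical priors from the thought experiment, not real estimates):

```python
# Hypothetical per-year probabilities from the thought experiment above.
p_natural = 0.01  # chance of a pandemic-causing virus emerging from nature
p_lab = 0.01      # chance of a pandemic-causing virus leaking from a lab

# Given that a novel pandemic occurred, the posterior probability that it
# came from a lab (ignoring the negligible chance of both happening at once):
posterior_lab = p_lab / (p_lab + p_natural)
print(posterior_lab)  # 0.5: with symmetric priors and the evidence destroyed,
                      # we're stuck at even odds either way
```

With symmetric priors, destroying the evidence leaves us at exactly even odds, which is the sense in which the answer stops mattering.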
It seems there would be two main uses of such information. One is to decide how or whether to punish the CCP, or specific researchers, or the research institutions they belong to, or some kind of oversight organizations. I don’t have the impression that the particular researchers, institutions, or projects were unusually careless, negligent, or mad-sciencey. Even if they were, I doubt that punishing a few individuals would help, or that imposing a massive fine on a nation would. (The main thing I’d like to see punished is the coverup. Also, at least in my programming experience, it’s considered good practice after a disaster not to punish the one person who screwed up, but rather to ask why you have a system where one person’s mistake can cause such terrible consequences. Not punishing that person also makes error-finding much more honest and much easier.) Sanctioning institutions with bad biosecurity practices might help, though the more important part of such sanctions would be “and we’ll check back in future years to ensure your practices are good”, which brings me to the next point:
The other use of the info is deciding what should be done in the future. (Things like banning gain-of-function research. Also, although I don’t necessarily recommend it, “wiping out the bat population” is a possible measure against “natural origins”.) For that, the probability of future catastrophes is what matters, and what specifically happened in the past makes no difference, except insofar as people use that one data point to inform their models. Which, ok, is a decent starting strategy if you have no good data or models, and I could see people squabbling and being unable to agree on anything other than that data point.
But I would hope for people to make serious investigations into bio lab precautions and produce some leak probability estimates. I imagine such investigations involving, say, putting some harmless but contagious viruses into the labs and measuring how often they leak (could be risky); putting a chemical on the outside of gloves that turns skin black so you can see how many people actually remove their gloves properly; putting aerosols in the air that are optically invisible but highly infrared-visible, to measure aerosol leakage; etc. Video recordings of everything inside the hazard area, and the entry and exit points, would likely be invaluable for counting protocol violations. Construct a model, try to estimate its parameters, and calculate away. (That or just say “given the historical record of lab leaks, assume leak likelihood is 100%”.)
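One minimal version of “construct a model and calculate away” might look like the following. Every parameter here is a hypothetical placeholder, the kind of number those tracer experiments would be trying to pin down; this is a sketch of the shape of the model, not an estimate:

```python
# Toy model: chance of at least one leak per year, built from a
# per-procedure violation rate of the sort the tracer experiments
# (marked gloves, infrared-visible aerosols, video review) could measure.
# All numbers below are hypothetical placeholders.
violation_rate = 0.001         # fraction of procedures with a containment breach
escape_given_violation = 0.01  # chance a breach carries a live pathogen out
procedures_per_year = 50_000   # risky procedures per year across monitored labs

p_leak_per_procedure = violation_rate * escape_given_violation
# Probability of at least one leak in a year, assuming independent procedures:
p_leak_per_year = 1 - (1 - p_leak_per_procedure) ** procedures_per_year
print(f"{p_leak_per_year:.2f}")
```

Even with a tiny per-procedure risk, the sheer number of procedures can push the annual probability into the tens of percent, which is the quantity the policy debate actually needs.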
If investigations of “what happened at WIV 2019” turned up an exact trail of “Researcher X neglected to sterilize piece of equipment Y, then touched it, and was insufficiently meticulous when handwashing later”, or “The sterilizing machinery was old and no longer heated the entire relevant area to hundreds of degrees C, and no one regularly checked this”, or “The process for filtering aerosols out of the air was never effective in the first place”, then that would be quite interesting and a nice case study. However, given lab leak history and experience with humans, I’m confident that there are multiple serious problems in many labs, and just because this instance involved one problem and not the others doesn’t mean that the others aren’t at least as serious. It seems any successful effort to drop the lab leak frequency by an order of magnitude or more would have to discover many different failure modes, and knowing one of them in advance wouldn’t help much.
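The arithmetic behind that last claim: if the overall leak rate is roughly a sum of contributions from independent failure modes, then fixing only the one mode implicated in a particular incident helps only in proportion to that mode’s share. A sketch with made-up numbers:

```python
# Hypothetical: ten independent failure modes, each contributing equally
# to the overall per-year leak rate.
rates = [0.01] * 10                # per-year leak rate from each failure mode
total = sum(rates)                 # 0.10 per year overall
after_fixing_one = sum(rates[1:])  # 0.09 per year: a 10% improvement, not 10x

# Driving the total down by an order of magnitude requires attacking
# (nearly) all ten modes, not just the one that happened to fire in 2019.
print(total, after_fixing_one)
```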
The bit of information “it came from a lab” is likely useful for political reasons, to get people to agree to “we need to take biosecurity seriously” and “certain kinds of research are intolerably dangerous until we’ve done the former” (although I’d support those statements even if “it came from nature”). Its suppression is also a good indicator of how dysfunctional certain institutions are. But I don’t think the bit’s truth value is very important for understanding the world (unless you think lab leaks are extremely rare), and I think it’s worth bearing that in mind.