Responsible virologists should publish fake research purportedly showing how to create existential-risk viruses, so as to hide, behind a body of lies, the papers which give genuine insight into how to actually make such viruses.
It strikes me that having scientists deliberately lie as policy would have terribly, terribly bad secondary effects—enough people distrust sound science as it is. Therefore, I very much hope you were being sarcastic or something.
I was being serious, although if Logos01 is correct (and I have no reason to doubt that he is), then my idea is probably a bad one.
enough people distrust sound science as it is.
True, but not enough people distrust unsound science or political advocates who pose as scientists.
I think there’s some sort of reasoning error in that last bit, but I can’t think of how to express it. My counter-argument would be that the benefit of reducing trust in bad science by having scientists lie would be less than the harm caused by the increased distrust in good science.
This kind of spoofing is supposedly done to make it more difficult to find nuclear designs, aside from the huge numbers of fake (law enforcement) buyers and sellers of WMD.
The same knowledge necessary to manufacture such diseases is also necessary to know how to combat or attenuate them. Besides, synthetic biology is far more capable of producing “superkillers” than this.
For already existing viral strains, that’s to be expected. I don’t know if you’ve ever had discussions with synthetic biology students but… as Hugh Hixon at Alcor once said to me, “that is the stuff of nightmares, assuming you can even sleep afterwards.” Fully-novel genetic constructs, hybridization of various unlike genomes, or even more potentially exotic constructs such as fungal spores that upon contact with human secretions re-express (medusa-like) into something akin to Toxoplasma gondii, only inducing schizophrenia, hyperaggression, and so on. (Why kill a population with a disease when you can make a nearly-unkillable environmental ‘bloom’ toxin that passes gene screening, has no observable external symptoms, and causes an entire society to turn into batshit-crazy homicidal axe-killers?)
Human “swine” flu is some scary shit. But compared to what synth-bio could achieve, I’m less worried about it. Especially considering we’re already in the range of introducing, say, the biotoxin of the Irukandji to airborne molds. That right there would be capable of killing just about all animal life within the blooming pattern of the organism.
We are, I believe, quite literally less than three or four (five at the most) 20-year generations away from synthetic biology students being capable of creating bioweapons capable of wiping out the human species, if not the entirety of all mammalian life.
I agree that synbio has some very nasty and rapidly emerging capabilities. However, with respect to your last paragraph are you also assuming that defenses don’t improve? Fancy biotech enables better detectors and rapid creation of tailored countermeasures (including counter-organisms). Surveillance tech restricts what students can get away with, sterilization and isolation of environments becomes easier, etc.
However, with respect to your last paragraph are you also assuming that defenses don’t improve?
The statements I made were agnostic as to the likelihood of a given event, as opposed to the raw capability of the devices; that is, beyond saying that it would become a non-zero percent chance. Furthermore, it is generally true that defense is “harder” than offense when it comes to weapons tech.
Even if better technology means defenses can improve, does that mean they will improve at a fast enough pace? I don’t understand why your same logic wouldn’t also imply the belief that it will be easier to make AI friendly when we understand more about AGI.
I don’t understand why your same logic wouldn’t also imply the belief that it will be easier to make AI friendly when we understand more about AGI.
Ceteris paribus, that argument does go through: for any given project, success is easier with more AGI understanding. That doesn’t mean that we should expect AI to be safe, or that interventions to shift the curves don’t matter. Likewise, the considerations I mentioned with respect to synbio make us safer to some extent, and I was curious as to Logos’ evaluation of their magnitudes.
Okay, thanks for the clarification. If we would expect the magnitudes for synbio to be significantly higher (or lower) than for AGI, then I would be curious as to what differentiates the two situations (I could easily imagine that there is a difference, I just think it would be a good exercise to characterize it as precisely as possible).
ETA: Actually, I think there are some plausible arguments as to why AGI progress would be less relevant to AGI safety than one would naively expect (due to the decoupling of beliefs and utility functions in Bayesian decision theory: being an AGI hinges mostly on the belief part, whereas being an FAI hinges mostly on the utility function part). But I currently have a non-trivial degree of uncertainty over how correct these arguments are.
According to the article, they were originally trying biotech methods, but then shifted to straightforward selective breeding, which worked better.
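The belief/utility decoupling mentioned above can be made concrete with a toy sketch. This is purely illustrative (the states, actions, and payoffs are all made up): two agents share an identical belief distribution over world states, yet their differing utility functions lead the same expected-utility machinery to opposite choices.

```python
# Toy illustration of the decoupling of beliefs and utility functions
# in Bayesian decision theory. Both agents use the same beliefs and the
# same decision rule (maximize expected utility); only the utility
# function differs, and so do the resulting actions.

def expected_utility(action, beliefs, utility):
    # beliefs: {state: probability}; utility(action, state) -> float
    return sum(p * utility(action, s) for s, p in beliefs.items())

def best_action(actions, beliefs, utility):
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# Shared "belief module": a probability distribution over outcomes.
beliefs = {"works": 0.7, "fails": 0.3}
actions = ["deploy", "withhold"]

# Agent A tolerates failure; Agent B is strongly loss-averse.
payoffs_a = {("deploy", "works"): 10, ("deploy", "fails"): -5,
             ("withhold", "works"): 0, ("withhold", "fails"): 0}
payoffs_b = {("deploy", "works"): 1, ("deploy", "fails"): -100,
             ("withhold", "works"): 0, ("withhold", "fails"): 0}

choice_a = best_action(actions, beliefs, lambda a, s: payoffs_a[(a, s)])
choice_b = best_action(actions, beliefs, lambda a, s: payoffs_b[(a, s)])
print(choice_a, choice_b)  # same beliefs, different choices
```

The point of the sketch: making the belief module more accurate (more "AGI progress") changes neither agent's utility function, which is the part that determines whether the agent's goals are friendly.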