The intuition here is that AI is unlike normal software in that it’s a national (indeed, world) security threat. Governments historically have had monopolies on weapons of mass destruction and have been the primary developers thereof.
AI is somewhat different in being inherently dual-use and something many people eventually want to see developed in some form (whereas nobody wants nuclear, chemical, or biological weapons to exist except for their strategic utility).
People are glad that nuclear power exists, so nuclear technology should probably also be classified as dual-use. However, your example of chemical and biological weapons as things no one wants still stands.
It’s possible to create simple chemical weapons fairly easily with the resources of a high-school chemistry lab; that’s about as dual-use as it gets. Nerve agents and the like are more complicated, but still feasible without exotic infrastructure if you can get your hands on the precursors.
The problem is more that they don’t actually work all that well; the de facto moratorium on their use has as much to do with practical problems as with moral ones. Aum Shinrikyo’s 1995 sarin attack on the Tokyo subway, for example, caused roughly the casualties of a small- to medium-sized bombing while requiring far more coordination and technical expertise.
Biological weapons hitherto have been in a different category, but that might not last as cheap bioengineering tools become available; I don’t know enough about that field to comment authoritatively, though. On the other hand, I expect nuclear technology to grow less dual-use in the near future, as more reactor designs come online that require less fuel enrichment and don’t generate plutonium.
Yes, though people also want better living through chemistry and better health through biotech.
I guess my thought was that with AI, there’s not obviously any distinction at all between the military and civilian forms. A civilian AI is almost necessarily also a world-security hazard just by its existence, whereas nuclear power plants take some work to be converted into bombs.
The distinction also feels very thin with some biotech research: consider, e.g., the debates over whether to publish the genomes of various pathogens. Arguably, it might be easier to use that information to do damage than to do good: to do damage, you only need to synthesize the pathogen, whereas figuring out how to use the genome to defend against it takes more effort.
My point was that the technology for nuclear weapons is inextricably tied to the technology for civilian nuclear power. You can’t have the technology for one without the technology for the other. (I will admit that sharing the underlying technology is not exactly the same thing as not being able to have one without the other, but it’s pretty close.)
And you do make a good point on the topic of chemistry and biotech also having ties in that direction.
:)