Bruce Schneier held the Movie-Plot Threat Contest to collect ideas for how terrorists might do a lot of damage with relatively few resources. There are plenty of ideas there that a terrorist who wanted to maximize damage could use without needing to build bioweapons. Yet we don't see that kind of damage-maximizing terrorism in the real world.
I agree the risk is reduced substantially because there are few potential bioterrorists. As I say:
This estimate is quite uncertain as it depends a lot on the number of at-least-slightly-competent bioterrorists. The salience of AI-enabled bioterrorism to potential terrorists might have a large effect on the level of fatalities and it’s possible this salience could increase greatly in the future due to some early incidents which get lots of media attention (potentially escalating into a widespread bioterrorism meme resulting in lots of bioterrorism in the same way we see a variety of different mass shootings in the US).
I think it's hard to be confident that the number of scope-sensitive bioterrorists is low enough that there won't be a small number of at-least-slightly-competent attempts. And this suffices for the (low) probabilities I'm talking about, after adding in the possibility of bioterrorism becoming a salient meme, etc.
When AIs can aid in novel bioweapons R&D, this opens up another set of risks, though that mostly isn't relevant to my point in the post.