[Question] Why not constrain wetlabs instead of AI?

Most of the object-level stories about how misaligned AI goes wrong involve nanotechnology, bio-risk, or both. Certainly I can (and have, and will again) tell a story about AI x-risk that doesn’t involve anything at the molecular level; a sufficient amount of (macroscale) robotics would be enough to end humanity. But the typical story we hear, particularly from Eliezer Yudkowsky, specifically involves nanotechnology. So let me ask a Robin Hanson-style question: why not try to constrain wetlabs instead of AI? By “wetlabs” I mean any capability involving DNA, molecular biology, or nanotechnology.

Some arguments:

  1. Governments around the world are already in the business of regulating all kinds of chemistry, such as the production of legal and illegal drugs.

  2. Governments (at least in the West) are not yet in the business of regulating information technology, and basically nobody thinks they will do a good job of it.

  3. The pandemic has set the stage for new thinking around regulating wetlabs, especially now that the lab leak hypothesis is considered mainstream.

  4. The cat might already be out of the bag with regard to AI; I’m referring to the Alpaca and LLaMA models. Information is hard to constrain.

  5. “You can’t just pay someone over the internet to print any DNA/chemical you want” seems like a reasonable law. In fact it’s somewhat surprising that it’s not already a law. By comparison, “You can’t just run arbitrary software on your own computer without government permission” would be an extraordinary social change and is well outside the Overton window.

  6. Something about pivotal acts which… I probably shouldn’t even go there.
