In a world where there is widely available knowledge about how to cheaply and easily create self-replicating weapons of mass destruction (e.g. bioweapons, nanotech), it is necessary that the governments of the world coordinate to detect and prevent any such weapons from being created and used.
This is an importantly different kind of investigation from police searching a house for a specific crime. Police are responsible for enforcing many laws across many areas of life, which is why we have rules preventing them from performing arbitrary, unjustified searches. For the specific topic of high-stakes self-replicating weapons, we could have an investigation force that enforced only this one ban and nothing else. That could justify a broader scope of monitoring, so long as these investigators were under strict rules not to leak information about any other topic. This is hard to do with a human investigator, because you can’t literally remove off-topic information from their memories, so they will forever after constitute an information-leak risk. With an AI-based investigator, however, you can wipe memories and thus ensure that nothing but the official report on the chosen topic is released.
I’d say it’s hard to do at least as much because the claim ‘we are doing these arbitrary searches only in order to stop bioweapons’ is untrustworthy by default, and even if it starts out true, once the precedent exists it can be used (and is tempting to use) for other things. Possibly an AI could be developed and used in a transparent enough way to mitigate this.
Yes, some groups are exploring the idea of decentralized, peer-to-peer consensual inspections, for things like biolabs that want to reassure each other that none of their student volunteers is up to anything dangerous.
Consensual inspections don’t help much if the dangerous thing is actually cheap and easy to create.