I don’t really disagree with anything here, but it seems historically well founded that nation-states only respond to dangerous technology once a much higher standard of evidence has been met. I think there are a few conditions we need to meet.
Conditions:
A. Clear demonstration of the technology’s great danger: we dropped the bomb before we recognized that it should never be dropped under any circumstances. Or, in the case of the Montreal Protocol, we had unambiguous evidence that CFCs were depleting the ozone layer.
B. Brinksmanship using the technology in question: it took the Cuban Missile Crisis, for example, to prompt international cooperation on nuclear arms control, starting with the 1963 Partial Test Ban Treaty.
C. Public fear/horror regarding specific uses of the technology: I think the examples regarding nuclear weapons are obvious. Another case is the Chemical Weapons Convention, which was prompted in part by news coverage of the 1988 Halabja attack.
D. No win condition: nuclear proliferation proceeded practically unabated so long as countries viewed it as a race they could win. The doctrine of mutually assured destruction, associated with von Neumann, changed the calculus surrounding the use of nuclear weapons; any use could trigger an exchange that no one wins.
E. Limited controls: even in the case of nuclear technology, nuclear material is produced and used across the economy. Controls target the steps necessary to produce nuclear weapons (enrichment, for example) rather than the general use of nuclear material.
Applied to AI:
A. No major incidents caused by frontier AI just yet. Some deaths among at-risk individuals using Helpful and Harmful AI.
B. No brinksmanship, since no near-peer power has similarly dangerous AI.
C. Mass distaste for AI, but not yet fear/horror. AI hasn’t been used as a weapon on a large scale yet.
D. Policy leaders seem to believe there is a win condition in which they can maintain ‘control’ of superintelligent AI. There is no way to make them believe otherwise; human hubris is itself a failure condition.
E. Although limited controls are on the table in the form of privacy rules, data-use rules, and, more usefully, limits on datacenter size (not yet passed anywhere), safetyists still push for a pause until we have better alignment technology. I agree with the pause, but it’s an impossible thing to advocate for when none of the previous conditions have been met.
So overall, I doubt we can push for safety regulation targeted at ASI until there is a clear and present danger. That’s a Catch-22: the clear and present danger is exactly what we want to prevent. I would recommend that AI safety reorient around meeting these conditions, or predicting when they will be met, and having drafted legislation and countermeasures ready to go the moment they are.
(Pithy comment, not to be taken seriously: unless someone at a frontier lab uses Mythos or the OpenAI equivalent to conduct a mass cyberattack against middle America, it’s unlikely regulation like this will even be on the table.)