Why I Am Skeptical of AI Regulation as an X-Risk Mitigation Strategy

Epistemic Signpost: I’m not an expert in policy research, but I think most of the points here are straightforward.

1. Regulations are ineffective at preventing bad behaviors

Here’s a question/thought experiment: Can you think of a large, possibly technology-related, company that has broken a regulation? What happened as a result? Was the outcome a total cessation of that behavior?

I’m not bothering to look up examples (at the risk of losing a few epistemic points) because these sorts of events are so freaking common. I’m not going into the more heinous cases, where nobody even plausibly thought the behavior was defensible (the company just expected not to get caught), or the cases where the behavior was more profitable than the fine imposed. My point is that the regulations failed to prevent the behavior.

If you try to defend this with “regulations are about incentives, not safety (prevention),” then I think you, too, should be pessimistic about AI regulation as a strategy for x-risk mitigation.

2. What does effective regulation look like?

Lest I be accused of making a fully-general counterargument, here are some examples of regulations that seem to actually work:

2.1. Defense Export Controls

The United States has a long history of enforcing export controls on defense-related technologies; I’ll focus here on just the USML (United States Munitions List) and the CCL (Commerce Control List).

The USML is the more “extreme” of the two, and it’s what people are referring to when they talk about ITAR (International Traffic in Arms Regulations). It covers things like firearms, missiles, and nuclear weapons, and the licensing and export of these are strictly controlled. I don’t think anyone doubts that putting AI technology on the USML would have some sort of impact, but that seems pretty unlikely to happen (and is also undesirable for other strategic reasons).

Lesser known is the CCL, which imposes a much more lightweight (but still strongly enforced) export license requirement. It’s designed to facilitate American technology companies selling their tech abroad, even when the technology is relevant to national security.

An example relevant to AI technology is when the US government prevented the export of computer chips to China for use in a supercomputer. Everyone building a (publicly known) supercomputer claims it will be used for science in the public good, but supercomputers are also a very important tool for designing nuclear weapons (which is one of the reasons supercomputer parts are export controlled).

2.2. Weirdly Strongly Enforced IP Laws

While the US has a fairly strong internal system of IP protections, those protections are often insufficient to prevent IP from being used outside US jurisdiction. Normally I’d take this to mean that IP protections are not good candidates for AI regulation, but there is at least one example I like.

Semiconductor manufacturing equipment is, I think, also relevant to AI technology, and ASML is one of the most important manufacturers. Despite being a Dutch company, it is prevented from selling its latest-generation (EUV) manufacturing equipment to China. This is ostensibly because the Dutch government hasn’t approved an export license, but it seems to be common knowledge that this is due to the US government’s influence.

(It’s possible this is really just defense export enforcement, but people sometimes describe it as IP enforcement, particularly for the older and longer-standing ban on EUV exports, as opposed to the more recent pressure on DUV exports.)

3. What I would like to see in AI Regulation proposals

I think there are two things I would like to see in any AI regulation policy research that’s purportedly for x-risk reduction:

First, an acknowledgement that the authors understand, and are not blind to, the reality that regulations, especially those aimed at large technology companies, are largely ineffective at curtailing behavior.

Second, an explanation of some very good reasons why their plan will actually **work** at preventing the behavior.

Then I would be much more excited to hear your AI regulation policy proposal.