Why libertarians are advocating for regulation on AI

Motivation: some people on the internet seem confused about why libertarians (and others generally suspicious of government intervention) are advocating for regulation to help prevent x-risk from AI. The arguments here aren’t novel and generally seem pretty obvious to me, but I wanted a quick reference.


Who is advocating for regulations?

This is obviously the first question: is the premise correct? Are libertarians uncharacteristically advocating for regulations?

They probably aren’t the loudest or most numerous among the set of people who are advocating for regulations while being motivated by reducing AI x-risk, but I’m one such person and know multiple others.

I tend to bucket those advocating for regulations into three groups.

Non-libertarians

Probably the biggest group, though I have wide error bars here. Most of the people I know working on the policy & comms side are either liberal, apolitical, or not usefully described by a label like that. I think people in this group tend to have somewhat more optimistic views on how likely things are to go well, but this isn’t a super strong correlation.

Libertarians

Although some of them are focusing primarily on policy & comms, those I know more often dual-class with a technical primary & comms secondary. They tend to have more pessimistic views on our likelihood of making it through.

Eliezer Yudkowsky

I could have put him in the “libertarian” group, but he has a distinct and specific policy “ask”. He’s also often misquoted/misunderstood.


The basic argument

There are many arguments for why various regulations in this domain might seem compelling from a non-libertarian point of view, and people generally object to those on different grounds (e.g. accusations of regulatory capture, corruption, tribalism), so I’ll skip trying to make their case for them.

Why might libertarians advocate for (certain kinds of) regulation on AI, given their general distrust of government?

Straightforwardly, the government is not a malicious genie optimizing for the inverse of your utility function, should you happen to ask it for a favor. Government interventions tend to come in familiar shapes, with a relatively well-understood distribution of likely first- and second-order effects.

If you’re mostly a deontological libertarian, you probably oppose government interventions because they violate people’s sovereignty; never mind the other consequences.

If you’re mostly a consequentialist libertarian, you probably oppose government interventions because you observe that they tend to have undesirable second-order effects, often causing more harm than any good from the first-order effects[1].

But this is a contingent fact about the intersection between the effects of regulations and the values of libertarians. Many regulations slow down technological progress and economic growth by making it more expensive to operate a business, do R&D, etc. Libertarians usually aren’t fans, since, you know, technological progress and economic growth are good things.

Unless you expect a specific kind of technological progress to kill everyone, possibly in the next decade or few. A libertarian who believes that the default course of AI development will end with us creating an unaligned ASI that kills us and eats the accessible lightcone is not going to object to government regulations on AI because “regulations bad”.

Now, a libertarian who believes this, and is thinking sensibly about the subject[2], will have specific models about which regulations seem like they might help reduce the chance of that outcome, and which might hurt. This libertarian is not going to make basic mistakes like thinking that the intent of the regulation[3] will be strongly correlated with its actual effects.

They will simply observe that, while there are certainly ways in which government regulation could make the situation worse, such as by speeding things up, very often the effect of regulations is to slow things down, instead. The libertarian is not going to be confused about the likelihood that government-mandated evals will successfully catch and stop an unaligned ASI from being deployed, should one be developed. They will not make the mistake of thinking that the government will miraculously solve ethics and philosophy, and provide us with neat guardrails to ensure that progress goes in the right direction.

To a first approximation, all they care about is buying time to solve the actual (technical) problem.

Eliezer’s ask is quite specific, but targeted at the same basic endpoint: institute a global moratorium on AI training runs over a certain size, in order to (run a crash program on augmenting human intelligence, so that you can) solve the technical alignment problem before someone accidentally builds an unaligned ASI.

If you want to argue that these people are making a mistake according to their own values and premises, then arguing that governments tend to mess up whatever they touch is a non sequitur. Yes, they do—in pretty specific ways!

Arguments that might actually address the cruxes of someone in this reference class might include:

  • By default, risk is not as high as you think, so there is substantial room for government intervention to make x-risk more likely.

    • This does still require hashing out the likely distribution of outcomes, given various proposals.

  • The distribution of outcomes from government interventions is so likely to give you less time, or otherwise make it more difficult to solve the technical alignment problem, that there are fewer surviving worlds where the government intervenes as a result of you asking it to, compared to the counterfactual.

    • Keep in mind that if your position is something like “default doom, pretty overdetermined”, you can think that government interventions will make things “worse” 90% of the time and still come out ahead, since those worlds were doomed anyway and not in a borderline way (so you’re not losing much probability mass from other interventions that could have tipped them into non-doomed worlds). A toy calculation below illustrates this.
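
To make the shape of that last point concrete, here is a minimal sketch with made-up numbers (only the “worse 90% of the time” figure comes from the bullet above; everything else is illustrative): an intervention can lower survival odds in most worlds and still raise the overall probability of survival, because the worlds it hurts were nearly doomed anyway.

```python
# Toy numbers, purely illustrative. Split worlds by whether government
# intervention hurts or helps, and compare overall survival probability.

p_hurts = 0.90                       # intervention makes things "worse" here
p_helps = 1 - p_hurts                # intervention buys useful time here

# Survival probability in each kind of world: (without, with) intervention.
p_survive_if_hurts = (0.01, 0.00)    # these worlds were ~doomed regardless
p_survive_if_helps = (0.20, 0.50)    # extra time matters a lot here

baseline = p_hurts * p_survive_if_hurts[0] + p_helps * p_survive_if_helps[0]
with_gov = p_hurts * p_survive_if_hurts[1] + p_helps * p_survive_if_helps[1]

print(f"P(survive) without intervention: {baseline:.1%}")  # ~2.9%
print(f"P(survive) with intervention:    {with_gov:.1%}")  # ~5.0%
```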

Arguments that are not likely to be persuasive, since they rely on premises that most people in this reference class think are very unlikely to be true:

  • Involving the government increases risk of authoritarian value lock-in

    • Assumes that we are very likely to solve alignment in time, or that government involvement increases the probability of bad outcomes even if it increases the odds of solving the technical alignment problem[4]. (The deployment problem concern might be an interesting argument if made, but I haven’t yet seen a convincing effort in that direction.)

  • Centralization of power (as is likely to result from many possible government interventions) is bad

    • I’m not sure what the actual argument here is. I often see this paired with calls for more open-sourcing of various things. Assumes, man, I don’t even know, this isn’t even trying to model any serious opposing perspective.

  • China!

    • HE’S STILL DEAD, JIM.

Thanks to Drake Thomas for detailed feedback. Thanks also to Raemon, Sam, and Adria for their thoughts & suggestions.

  1. ^

    Which are often also negative!

  2. ^

    As always, this is a small minority of those participating in conversations on the subject. Yes, I am asking you to ignore all the terrible arguments in favor of evaluating the good arguments.

  3. ^

    To the extent that it’s meaningful to ascribe intent to regulations, anyways—maybe useful to instead think of the intent of those who were responsible for the regulation’s existence.

  4. ^

    Example provided by Drake Thomas: someone could have 10-year timelines with a 90% chance of paperclips and a 10% chance of tech-CEO-led utopia, while government intervention leads to 20-year timelines with an 80% chance of paperclips and a 20% chance of an AI aligned to someone, but a 75% probability of authoritarian dystopia conditional on someone being able to align an AI. Under this model, they wouldn’t want regulations, because regulations move the good worlds from 10% to 5% even though the technical problem is more likely to get solved.
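
    A minimal sketch of that arithmetic, using only the numbers in the example above (the variable names are mine):

    ```python
    # Numbers from Drake's example in footnote 4.

    # Without government intervention (10-year timelines):
    p_good_no_gov = 0.10                        # tech-CEO-led utopia

    # With government intervention (20-year timelines):
    p_aligned_to_someone = 0.20                 # technical alignment solved for someone
    p_dystopia_given_aligned = 0.75             # authoritarian lock-in, given alignment
    p_good_with_gov = p_aligned_to_someone * (1 - p_dystopia_given_aligned)

    print(f"P(good world) without intervention: {p_good_no_gov:.0%}")    # 10%
    print(f"P(good world) with intervention:    {p_good_with_gov:.0%}")  # 5%
    ```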