Slightly against aligning with neo-luddites

To summarize,

  • When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others.

  • Neo-luddites do not fundamentally share our concern about AI x-risk. Thus, their regulations will probably not, except by coincidence, be the type of regulations we should try to install.

  • Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should be careful not to endorse a proposal merely because it’s “better than nothing,” unless it’s also literally the only chance we get to delay AI.

  • In particular, arbitrary data restrictions risk preventing researchers from having access to good data that might help with alignment, potentially outweighing the (arguably) positive effect of slowing down AI progress in general.


It appears we are in the midst of a new wave of neo-luddite sentiment.

Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate for more restrictive IP laws around AI-generated content. And anecdotally, I’ve seen numerous large threads on Twitter in which people criticize the users and creators of AI art.

Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I’m not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely.

I expect most LessWrong readers to agree with me on this point — that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Yet there is a related view that I have recently heard some of my friends endorse: that it is nonetheless worth aligning with neo-luddites, instrumentally, in order to slow down AI capabilities.

On the most basic level, I think this argument makes some sense. If aligning with neo-luddites simply means saying “I agree with delaying AI, but not for that reason” then I would not be very concerned. As it happens, I agree with most of the arguments in Katja Grace’s recent post about delaying AI in order to ensure existential AI safety.

Yet I worry that some people intend their alliance with neo-luddites to extend much further than this shallow rejoinder. I am concerned that people might work with neo-luddites to advance their specific policies, and particular means of achieving them, in the hopes that it’s “better than nothing” and might give us more time to solve alignment.

In addition to possibly being mildly dishonest, I’m quite worried such an alliance will be counterproductive on separate, purely consequentialist grounds.

If we think of AI progress as a single variable that we can either accelerate or decelerate, with other variables held constant upon intervention, then I agree it could be true that we should do whatever we can to impede the march of progress in the field, no matter what that might look like. Delaying AI gives us more time to reflect, debate, and experiment, which, prima facie, I agree is a good thing.

A better model, however, is that there are many factor inputs to AI development. To name the main ones: compute, data, and algorithmic progress. To the extent we block only one avenue of progress, the others will continue. Whether that’s good depends critically on the details: what’s being blocked, what isn’t, and how.

One consideration, which has been pointed out by many before, is that blocking one avenue of progress may create an “overhang,” in which the sudden release of restrictions produces rapid, discontinuous progress, which is highly likely to increase total AI risk.

But an overhang is not my main reason for cautioning against an alliance with neo-luddites. Rather, my fundamental objection is that their specific strategy for delaying AI is not well targeted. Aligning with neo-luddites won’t necessarily slow down the parts of AI development that we care about, except by coincidence. Instead of aiming simply to slow down AI, we should care more about ensuring favorable differential technological development.

Why? Because the constraints on AI development shape the type of AI we get, and some types of AIs are easier to align than others. A world that restricts compute will end up with different AGI than a world that restricts data. While some constraints are out of our control — such as the difficulty of finding certain algorithms — other constraints aren’t. Therefore, it’s critical that we craft these constraints carefully, to ensure the trajectory of AI development goes well.

Passing subpar regulations now — the type of regulations not explicitly designed to produce favorable differential technological development — might lock us into a bad regime. If we later determine that other, better-targeted regulations would have been vastly better, it could be very difficult to adjust our regulatory structure. Choosing the right regulatory structure to begin with likely allows far more freedom than trying to switch to a different one after it has already been established.

Even worse, subpar regulations could make AI harder to align.

Suppose the neo-luddites succeed, and the US Congress overhauls copyright law. A plausible consequence is that commercial AI models would only be allowed to train on data that has been licensed very permissively, such as data in the public domain.

What would AI look like if it were only allowed to learn from data in the public domain? Perhaps interacting with it would feel like interacting with someone from a different era — a person from over 95 years ago, whose copyrights have now expired. That’s probably not the only consequence, though.

Right now, if an AI org needs some data that they think will help with alignment, they can generally obtain it, unless that data is private. Under a different, highly restrictive copyright regime, that may no longer be the case.

If deep learning architectures are marble, data is the sculptor. Restricting what data we’re allowed to train on shrinks our search space over programs, carving out which parts of the space we’re allowed to explore, and which parts we’re not. And it seems abstractly important to ensure our search space is not carved up arbitrarily — in a process explicitly intended for unfavorable ends — even if we can’t know now which data might be helpful to use, and which data won’t be.

True, if very powerful AI is coming very soon (<5 years from now), there might not be much else we can do except for aligning with vaguely friendly groups, and helping them pass poorly designed regulations. It would be desperate, but sensible. If that’s your objection to my argument, then I sympathize with you, though I’m a bit more optimistic about how much time we have left on the clock.

If very powerful AI is more than 5 years away, we will likely get other chances to get people to regulate AI from a perspective we sympathize with. Human disempowerment is actually quite a natural thing to care about. Getting people to delay AI for that explicit reason just seems like a much better, and more transparent, strategy. And as AI gets more advanced, I expect this possibility to become more salient in people’s minds anyway.