How “Pause AI” advocacy could be net harmful


In the olden days, Yudkowsky and Bostrom warned people about the risks associated with developing powerful AI. Many people listened and went “woah, AI is dangerous, we better not build it”. A few people went “woah, AI is powerful, I better be the one to build it”. And we’ve got the AI race we have today, where a few organizations (bootstrapped with EA funding) are functionally trying to kill literally everyone, but at least we also have a bunch of alignment researchers trying to save the world before they do.

I don’t think that first phase of advocacy was net harmful compared to inaction. We have a field of alignment at all, with (by my vague estimate) maybe a dozen or so researchers actually focused on the parts of the problem that matter; plausibly, that’s a better chance than the median human-civilization-timeline gets.

But now, we’re trying to make politicians take AI risk seriously. Politicians who don’t have even basic rationalist training against cognitive biases, who operate from a highly conflict-theoretic perspective full of political pressures, and who haven’t read the important LessWrong literature. And this topic is contentious enough that even many EAs/rationalists who have been around for a while and read many of those important posts still feel very confused about the whole thing.

What do we think is going to happen?

I expect that some governments will go “woah, AI is dangerous, we better not build it”. And some governments will go “woah, AI is powerful, we better be the ones to build it”. And this time, there’s a good chance it’ll be net harmful, because most governments in fact have far more power here to do harm than good. They could make things a lot worse.

(Pause AI advocacy plausibly also draws the attention of a lot of private actors to how dangerous (and thus powerful!) AI can be, which is also bad (maybe worse!). I’m focusing on politicians here because they’re the more obvious failure mode.)

Now, the upside of Pause AI advocacy (and other governance efforts) is possibly great! Maybe Pause AI manages to slow down the labs enough to buy us a few years (I currently expect AI to kill literally everyone sometime this decade), which would really improve the chances of solving alignment before one of the big AI organizations launches an AI that kills literally everyone. I’m currently about 50:50 on whether Pause AI advocacy is net good or net bad.

Being in favor of pausing AI is great (I’m definitely in favor of pausing AI!), but it’s good to keep in mind that the way you go about advocating for it can have harmful side effects, and you have to consider the possibility that those side effects outweigh your expected gain (what you might gain, multiplied by how likely you are to gain it).
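To spell that comparison out as a minimal expected-value sketch (with the probabilities and values left abstract, since I’m not estimating them here): the advocacy is net positive only if

$$P(\text{gain}) \cdot V(\text{gain}) > P(\text{harm}) \cdot V(\text{harm})$$

My 50:50 above is just a way of saying I can’t yet tell which side of this inequality Pause AI advocacy lands on.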

Again, I’m not saying they are worse! I’m saying we should be thinking about whether they are worse.

Crossposted to EA Forum