Trying for Five Minutes on AI Strategy

Epistemic Status: I know basically nothing about any of this, or at least no more than any other LessWronger; I just happened to listen to some podcasts about AI strategy recently and decided to try my hand.

Epistemic Effort: Probably about an hour of thought cumulatively, plus maybe two hours to write this up

Global Coordination

AI, like many other x-risks, is in part a global coordination problem. As I see it, there are two main subproblems here: the problem of “where to go” (i.e. how to coordinate and what to coordinate on), and the problem of “how to get there from here” (taking into account the inadequacies present in our actual, real-life political systems).

Putting this another way: there is a top-down way of looking at the problem, which asks: if we managed to globally coordinate on stopping AI risk, what would that world look like? What coordination mechanisms would we have used, and so on. And then there is a bottom-up way of looking at it, which asks: there are certain people in the world who are already, right now, concerned about AI risk. What series of actions could those people take that would ensure (or come as close to “ensure” as we can muster) that AI risk is mitigated? (Though see this post from Benquo for a forceful objection to this line of thinking that I don’t yet know how to take into account.)

As far as I can tell, there are three classes of solutions to the bottom-up version of the problem:

  1. Work unilaterally, outside of states

  2. Get people who are already in positions of power within states to care about the problem (by which I mean being fully willing to implement their end of a solution; merely believing it’s important in the abstract doesn’t count).

  3. Get people who already care about the problem into positions of power within states.

Solutions in class 1 may, of course, not be sufficient, if state coordination ends up being unavoidably necessary to solve the problem. Solutions in classes 2 and 3 run into various problems described in Inadequate Equilibria; in particular, class 2 solutions face the “lemons problem.” I don’t (yet) have anything especially original to say about how to solve these problems (I’d be highly grateful for reading suggestions of places where people have proposed solutions/workarounds to the problems in Inadequate Equilibria, outside of the book itself, of course).

As for the top-down part of the problem, I see the following solutions:

  1. International treaty

  2. Weak supranational organization (the UN, or something else in a similar vein)

  3. Strong supranational organization (the EU, but on a world scale), i.e. a supranational confederation

  4. One world nation, e.g. supranational federalism

in rough order of ease of implementation. Solutions 3 or 4 (ignoring their political infeasibility) would be especially useful because they could solve not just AI risk, but also other x-risks that require global coordination, whereas if we solve AI risk by treaty, that makes little or no progress on other x-risks.

Actually, “fostering global coordination” seems to me like a good candidate for a high-impact cause area in its own right, as it attacks multiple x-risks at once. A lack of ability to coordinate internationally is a major factor increasing the likelihood of most x-risks (AI, climate change, bioterrorism, nuclear war, and maybe asteroid risk, though probably not non-anthropogenic pandemic risk or solar flares knocking out the power grid), so working directly on methods of fostering our ability to globally coordinate is probably valuable in itself, separate from work on any particular x-risk. Though I should note that either Bryan Caplan or Robin Hanson (or maybe both; I don’t have time to find the reference at the moment, and I’ll edit it in if I find it) has argued that pushing to increase global coordination carries some risk of ending up with a global tyranny, an x-risk in itself.

Avoiding Politicization

Maybe this is super obvious, but I don’t think I’ve seen anyone else come out and say it: it’s important that AI safety not become a politically polarized issue the way, e.g., climate change has. It doesn’t seem to be much of one yet, as neither party (sorry for the U.S.-centrism) is really talking about it at all (maybe that’s why nobody has made this point), though I do see Democrats talking about technology more than Republicans.

So, we need to make sure AI safety doesn’t become politically polarized. How can we ensure this? Well, that depends on whether you think AI safety needs to be widely discussed in the first place. On the one hand, I’ve seen it argued (I don’t remember where or by whom right now) that it might be dangerous to try to promote AI safety to a lay audience, because the message will almost certainly get distorted; if you think this is the way to go, then it’s rather easy to make sure AI safety doesn’t get polarized: just don’t talk about it much to lay audiences or try to do public outreach about it.

On the other hand, it seems likely that politicians will be a necessary component of any effort to globally coordinate around AI safety, and politicians need to focus to a large extent on the issues the public is concerned with in order to get reelected (I suspect this model is way too naive, but I don’t know enough about the nitty-gritty of political science at the moment to make it better), so one way to make politicians care about AI safety is to get the public to care about it. If this is the strategy you favor, then you have to balance making AI safety part of the national conversation against making sure it doesn’t get politically polarized. But it seems like most issues in the public consciousness are also polarized to some extent. (This is another claim I’m highly unconfident about; feel free to suggest counterexamples or models that contradict it. One counterexample I can think of off the top of my head is social media: it’s quite salient in the national conversation, but not particularly partisan. I also believe the recent book Uncivil Agreement is largely an argument against my claim, but I haven’t read it yet, so I don’t know for sure.) So this is an instance of a more general problem: getting politics to pay attention to issues that aren’t polarized.

[After I wrote this section, a friend raised a ton of complications about whether polarization might be a good or bad thing, which I plan to write up in a separate post at some point.]

These are preliminary thoughts, posted in an exploratory spirit. Please do point out places where you think I’ve gotten something wrong; e.g. if you think one of my taxonomies is incomplete, it probably is, so I’d love it if you’d point that out. I’d be especially grateful for reading suggestions, as I’m sure there are tons of places where I’m simply ignorant of relevant literature or of entire fields of study related to what I’m talking about. (I have already gone through Allan Dafoe’s AI strategy reading list and research agenda and extracted the readings I want to start with, although I’d be grateful if anyone has a copy of Huntington, Samuel P. “Arms Races: Prerequisites and Results.” Public Policy 8.1 (1958): 41–86; I can’t find it online anywhere.)