The Basics of AGI Policy (Flowchart)

Disclaimer: I specialized in US-China affairs and international AI affairs a few years ago. I’ve talked to numerous people in both of those areas, but I’m not currently working for any organization, and the statements and opinions here are my own; they do not represent any organization or anyone else.

There’s been a recent surge of interest in how to proceed with AI safety, in part because AI alignment is hard and it’s easier to work on something else. AI policy is definitely one of those alternatives, but it’s also something that people tend to get really, really wrong. For example, plenty of people think that the general public can vote regulation into place that will curtail the AI industry.

That will absolutely never happen.

The very first thing to understand about AI in government is that AI is already being mounted on nuclear stealth missiles, allowing them to fly low to the ground (under the radar) and navigate using on-board cameras in the event that all GPS systems are spoofed or obliterated by an adversary nation. That is notable because “stealth” is partly a euphemism for disguising the missile’s radar signature as a computer glitch: specifically, the kind of computer glitch that persuades commanders like Stanislav Petrov to “wait and see” for a few minutes, even if doing so risks losing a large chunk of their homeland’s nuclear arsenal before it can be used.

If that doesn’t sound like a very big problem, that’s fine, because it isn’t the problem at all. It’s a symptom.

Technical AI safety people tend to see AI policy people as the ones who buy infinite time to solve alignment, which would give the technical side a fair shot. From my perspective, it’s more like the opposite: if alignment were solved tomorrow, that would give the AI policy people a fair shot at getting it implemented.

To illustrate the problem to AI policy beginners, and to anyone who wants perspective, I made a flowchart inspired by this post recommending the best textbooks for every field. But even the best textbooks are long, and the main incentive to get through them is the costly credential that comes with memorizing them. That’s why I only picked a couple of chapters from 101-level books renowned as enjoyable page-turners that introduce people to the field, which is about as good as it gets for nonfiction. And anyone can use the flowchart to find exactly what they need to read, if anything.

These books are cheap on Amazon, the important bits are around 30-60 pages, and both the writing and the material make them as entertaining and satisfying as nonfiction gets; you’ll breeze through each chapter as if you were reading a novel, no matter what kind of brain you have or what you tend to read about. They’re also generally available at your local library (unless it’s near September or January, since all the IR and Cybersecurity majors will be checking them out when they receive the syllabi for their introductory classes).

These are the links to the exact books I’m talking about:

The Tragedy of Great Power Politics (2014): A pretty solid beginner’s model of how international relations actually work, and of why foreign policy tends to be ludicrously aggressive, e.g. first-strike nuclear weapons that are deliberately disguised as computer glitches.

Joseph Nye’s Soft Power (2004): Chapter 4 is a history of propaganda.

Schelling’s Arms and Influence (1966): The absolute basics of game theory and negotiation in nuclear war, gain-of-function research, and other insanity.
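
If you want a concrete taste of what “the absolute basics of game theory” looks like before picking up Schelling, here is a minimal sketch of a toy two-player brinkmanship game in the spirit of the standoffs he analyzes. The payoff numbers and the code are my own illustration, not something from the book; the point is just that, in pure strategies, the only stable outcomes of this kind of game are the ones where exactly one side backs down, which is why threats, commitments, and the risk of mutual disaster carry so much weight.

```python
# Illustrative only: a toy 2x2 "brinkmanship" game. All payoff numbers are
# invented for this sketch; they are not taken from Arms and Influence.

# Each player chooses to "stand firm" or "back down".
# Payoffs are (row player, column player).
payoffs = {
    ("stand firm", "stand firm"): (-10, -10),  # mutual disaster
    ("stand firm", "back down"):  (2, -1),     # row wins the standoff
    ("back down", "stand firm"):  (-1, 2),     # column wins the standoff
    ("back down", "back down"):   (0, 0),      # status quo
}

strategies = ["stand firm", "back down"]

def best_responses(player):
    """For each opponent strategy, return the given player's best reply."""
    replies = {}
    for opp in strategies:
        def payoff(own):
            pair = (own, opp) if player == 0 else (opp, own)
            return payoffs[pair][player]
        replies[opp] = max(strategies, key=payoff)
    return replies

# A pure-strategy Nash equilibrium is a pair of choices where each side is
# already playing its best reply to the other.
equilibria = [
    (r, c)
    for r in strategies
    for c in strategies
    if best_responses(0)[c] == r and best_responses(1)[r] == c
]

print(equilibria)  # [('stand firm', 'back down'), ('back down', 'stand firm')]
```

With these made-up payoffs, mutual restraint isn’t stable (each side wants to stand firm if it expects the other to fold), and mutual firmness is catastrophic; the stable outcomes are the asymmetric ones, so the whole contest becomes about convincing the other side that you are the one who won’t back down. Schelling builds the real machinery for thinking about that.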