[Question] Would it be good or bad for the US military to get involved in AI risk?

Meant as a neutral question. I’m not sure whether this would be good or bad on net:

Suppose key elements of the US military took x-risk from misaligned strong AI very seriously. Specifically, I mean:

  • Key scientists at the Defense Threat Reduction Agency. They have a giant budget (~$3B/year) and are literally responsible for x-risks. Their current portfolio is focused on nuclear risks, with some biosecurity

  • Influential policy folks at the Office of the Under Secretary of Defense for Policy. Think dignified career civil servants, two levels below the most senior political appointees

  • Commander’s Initiative Group at USSTRATCOM. Folks who have the commander’s ear, tend to be well-respected, and have a huge effect on which ideas are taken seriously

Why this would be good:

  • The military has far more discretionary budget than any other organization in the world. You could multiply the resources dedicated to AI safety research tenfold overnight

  • The military is a huge source of AI risk, in the sense that it is strongly incentivized to advance AI capabilities, since faster capabilities obviously help the US in its competition with China. If key influencers took the risk seriously, they might be more judicious about their capabilities research

  • A key policy goal is preventing the sharing of AI capabilities research. The military is very good at keeping things secret and has policy levers to make private companies do the same

  • The military is a huge source of legitimacy with the general public. And this route to legitimacy seems easier than the alternatives: I think fewer than 10 key people actually need to be persuaded on the merits, and everyone else will follow suit

  • If the right person agrees, it’s literally possible to get one of the best researchers from this community appointed to lead AI safety research for a major government agency, in the same sense that Wernher von Braun led the space program. You just have to be really familiar with the civil service’s intricate rules for hiring

Why this would be bad:

  • If someone presents the ideas badly, it's possible to poison the well for later: you could build permanent resistance to AI safety ideas within the civil service. And it's really easy to make that mistake, because presentations that work in these agencies are VERY different from what works in the tech community

  • Even if an agency is persuaded, it could mount a noisy and expensive but ultimately useless effort

  • A big government project (with lots of middle managers) would add a moral-maze element to alignment research