AI Policy?

Here’s a question: are there any policies that could be worth lobbying for to improve humanity’s chances re: AI risk?

In the near term, it’s possible that not much can be done. Human-level AI still seems a long way off (and probably is), which makes it hard both to craft effective policy around and to convince people it’s worth doing something about. The US government currently funds work on what it calls “AI” and “nanotechnology,” but that mostly means things that might be realizable in the near term, not human-level AI or molecular assemblers. Still, if anyone has ideas about what can be done in the near term, they’d be worth discussing.

Furthermore, I suspect that as human-level AI gets closer, there will be a lot the US government can do to affect the outcome. For example, there’s been talk of secret AI projects, but if the US gov got worried about those, I suspect they’d be hard to keep hidden from a determined government, especially if you believe (as I do) that large organizations have a much better shot at building AI than small ones.

The lesson of Snowden’s NSA revelations seems to be that, while in theory there are procedures humans can use to keep secrets, in practice humans are so bad at implementing those procedures that secrecy will fail against a determined attacker. Ironically, this applies both to the government and to everyone the government has spied on. The ability of people outside the US gov to find out about hypothetical secret government AI projects seems less predictable, though, since it depends on the decisions of individual would-be leakers.

And as long as the US government is aware of an AI project, it seems there will be a lot it can do to shut the project down if desired. For foreign projects, there’s the possibility of a Stuxnet-style attack, though the government might be reluctant to try that against a nuclear power like China or Russia (or would it?). However, I expect the US to lead the world in innovation for a long time to come, so I don’t expect foreign AI projects to be much of an issue in the early stages of the game.

The real issue is the US gov versus private US groups working on AI. And there, given the current status quo for how these things work in the US, my guess is that if the government ever became convinced an AI project was dangerous, it would find some way to shut the project down citing “national security,” and that would basically work. However, I can see big companies with an interest in AI lobbying the government to prevent that. I can also see them deciding to pack their AI operations off to Europe or South Korea or somewhere.

And on top of all this is simply the fact that, if it becomes convinced that AI is important, the US government has a lot of money to throw at AI research.

These are just some very hastily sketched thoughts; don’t take them too seriously, and there’s probably a lot more that could be said. I do strongly suspect, however, that those of us concerned about risks from AI ignore the government at our peril.