AI Risk and the US Presidential Candidates

It’s the new year, and the 2024 primaries are approaching, starting with the Iowa Republican caucus on January 15. For a lot of people here on LessWrong, the issue of AI risk will likely be an important factor in making a decision. AI hasn’t been mentioned much during any of the candidates’ campaigns, but I’m attempting to analyze the information that there is, and determine which candidate is most likely to bring about a good outcome.

A few background facts about my own position—such that if these statements do not apply to you, you won’t necessarily want to take my recommendation:

  • I believe that, barring some sort of action to prevent this, the default result of creating artificial superintelligence is human extinction.

  • I believe that our planet is very far behind in alignment research compared to capabilities, and that this means we will likely need extensive international legislation to slow/pause/stop the advance of AI systems in order to survive.

  • I believe that preventing ASI from killing humanity is so much more important than any[1] other issue in American politics that I intend to vote solely on the basis of AI risk, even if this requires voting for candidates I would otherwise not have wanted to vote for.[2]

  • I believe that no mainstream politicians are currently suggesting any plans that would be sufficient for survival, nor do they even realize the problem exists. Most mainstream discourse on AI safety is focused on comparatively harmless risks, like misinformation and bias. The question I am asking is “which of these candidates seems most likely to end up promoting a somewhat helpful AI policy” rather than “which of these candidates has already noticed the problem and proposed the ideal solution,” since the answer to the second question is none of them.

(Justification for these beliefs is not the subject of this particular post.)

And a few other background facts about the election, just in case you haven’t been following American politics:

  • As the incumbent president, Joe Biden is essentially guaranteed to be the Democratic nominee, unless he dies or is otherwise incapacitated.

  • Donald Trump is leading in the polls for Republican nominee by very wide margins, followed by Nikki Haley, Ron DeSantis, Vivek Ramaswamy, and Chris Christie. Manifold[3] currently gives him an 88% chance of winning the nomination.

  • However, Trump is facing criminal charges over the Capitol attack of January 6, 2021, and the Colorado Supreme Court and Maine’s Secretary of State have both moved to disqualify him from their states’ primary ballots.

  • As usual, candidates from outside the Democratic and Republican parties are not getting much support, although Robert F. Kennedy Jr. is polling unusually well for an independent candidate.

Joe Biden

Biden’s most notable action regarding AI was Executive Order 14110[4]. The executive order was intended to limit various risks from AI… none of which were at all related to human extinction, except maybe bioweapons. The order covers risks from misinformation, cybersecurity, algorithmic discrimination, and job loss, while also focusing on trying to reap potential benefits of AI.

But the measures contained in the order, while limited in scope, seem to be a step in the right direction. Most importantly, anyone training a model with 10^26 floating point operations or more must report their actions and safety precautions to the government. That’s a necessary piece of any future regulation on such large models.

Biden has spoken at the UN about international cooperation on AI, and frequently describes AI and other new technologies as a source of “enormous potential and enormous peril,” or similar phrasings. “We need to be sure they’re used as tools of opportunity, not as weapons of oppression,” he said. “Together with leaders around the world, the United States is working to strengthen rules and policies so AI technologies are safe before they’re released to the public, to make sure we govern this technology, not the other way around, having it govern us.”[5]

Biden seems to be taking seriously the possibility of existential risk from ASI. That being said, his recent fears of superintelligence seem to have been inspired by the latest Mission: Impossible movie[6], so I’m not confident that he’s reasoning clearly here. But regardless of where he got the idea, he’s paying more attention to the actually important issue than anyone else. Biden appears to be significantly better than nothing.

Donald Trump

Trump was involved in occasional AI legislation during his time as president, and signed Executive Order 13859[7] and Executive Order 13960[8]. Both of these were focused on being “pro-innovation” and expanding AI development, with the latter promoting AI use within the federal government. Trump has not mentioned existential risk at any point.

In recent years, Trump has spoken little of AI, except regarding its use in campaign ads. At present, it’s difficult to determine what views he now holds on the subject, but if his policies as president are any indication, he likely won’t be any help when it comes to slowing down AI development. Still, it’s not implausible that he’ll change his mind in the coming years. Unclear leaning negative.

Nikki Haley

Haley’s mentions of AI are even rarer than Trump’s. What she has said is mostly about China, and how the US and its military need to use AI to gain an advantage over China.[9] In general, much of her campaign has focused on fighting China and its allies, so it doesn’t seem likely that she’ll support an international alliance with China to ban AI. Unclear leaning negative.

Ron DeSantis

DeSantis has described much of AI regulation as primarily a tool to enforce wokeness, saying that it only limits some companies while protecting those with woke agendas.[10] He also believes said regulations would only help China gain an advantage in AI development. However, he does support some forms of AI regulation.[11]

“China is trying to do it for its military,” he said. “We’re going to have to compete on the military side with AI, but I don’t want a society in which computers overtake humanity. And I don’t know what the appropriate guardrail is federally because a lot is going to change in a year and a half with this because it’s going so rapidly.”

“But at the same time, we want any technology to benefit our citizens and benefit humanity. We don’t want it to displace our citizens or displace humanity. So as this happens, and there’s rapid development every month, every two months, we’re going to be looking to see, OK, you know, what is it that we need to do. And if there are guard rails that need to be put in place, you know, I would be willing to do that. I think it’s important for society.”[12]

So despite the possibility that DeSantis will refuse to sign onto AI regulation out of fear of wokeness, he’s able to draw a distinction between different types of regulation, and is explicitly concerned about AI overtaking humanity. Also significantly better than nothing.

Vivek Ramaswamy

Ramaswamy has said that the most serious risk AI poses is that once humans begin to treat it as an authority, they will be swayed by the beliefs that it suggests.[13] He doesn’t support explicit regulation, but argues that companies must be held liable for the results of any AI system they create.[14] “Just like you can’t dump your chemicals, if you’re a chemical company, in somebody else’s river, well if you’re developing an AI algorithm today that has a negative impact on other people, you bear the liability for it.”

He’s not focusing on the right problems, but it’s possible he’ll be willing to take action against AI companies. Unclear leaning neutral.

Chris Christie

Christie has only spoken of AI as an “opportunity to expand productivity,” and says that “we can’t be afraid of innovation.” He states that “what [he] will do is to make sure that every innovator in this country gets the government the hell off its back and out of its pocket so that it can innovate and bring great new inventions to our country that will make everybody’s lives better.” In the case of AI, this would, of course, actively make things worse. Very bad.

Robert F. Kennedy Jr.

Kennedy has spoken on the Lex Fridman podcast about AI risk, and was familiar with the possibility of human extinction. “It could kill us all,” he said. “I mean, Elon said, first it’s gonna steal our jobs, then it’s gonna kill us, right? And it’s, it’s probably not hyperbole, it’s actually, you know, if it follows the laws of biological evolution, which are just the laws of mathematics, that’s probably a good endpoint for it… it’s gonna happen, but we need to make sure it’s regulated, it’s regulated properly for safety, in every country. And, and that includes Russia, and China, and Iran. Right now, we, we should be putting all the weapons of war aside, and sitting down with those guys and saying… how are we gonna do this?”[15]

Well, that contradicts my above expectations quite a bit. Kennedy is completely aware of the actual problem and what is necessary to solve it. By far the best candidate.

Conclusion

Kennedy is ideal, Biden and DeSantis might be okay, Christie is definitely bad, and as for the others, it’s not clear.

As for a final recommendation… well, we don’t really know yet who to vote for in the general election, since we haven’t seen the Republican candidate and we won’t have accurate data on Kennedy’s electability until closer to November. But the one clear recommendation I do have at the moment is to vote for DeSantis in the Republican primary.

If anyone else has relevant information on any of these candidates’ views on AI—particularly Trump, Haley, and Ramaswamy—please link it in the comments.

  1. ^

    Nuclear war would be an exception here; its catastrophe is within an order of magnitude of that of unaligned ASI. But I believe that ASI is significantly more likely than nuclear war, and so a more important priority.

  2. ^

    Oh… crap. oh crap. I didn’t think it would be this much of a “someone I didn’t want to vote for” situation. (For the record, apart from AI, my ordering would have been Christie > Haley > Ramaswamy > Biden > DeSantis > Trump > Kennedy, which is… not literally exactly the opposite of what I concluded here, but pretty damn close.)

  3. ^
  4. ^
  5. ^
  6. ^
  7. ^
  8. ^
  9. ^
  10. ^
  11. ^
  12. ^
  13. ^
  14. ^
  15. ^