The “public debate” about AI is confusing for the general public and for policymakers because it is a three-sided debate

Summary of Argument: The public debate among AI experts is confusing because there are, to a first approximation, three sides to the debate, not two. I refer to this as a đŸ”șthree-sided framework, and I argue that using this three-sided framework will help clarify the debate (more precisely, debates) for the general public and for policymakers.

Under my proposed đŸ”șthree-sided framework, the positions fall into three broad clusters:

  • AI “pragmatists” or realists are most worried about AI and power. Examples of experts who are (roughly) in this cluster would be Melanie Mitchell, Timnit Gebru, Kate Crawford, Gary Marcus, Klon Kitchen, and Michael Lind. For experts in this group, the biggest concern is how the use of AI by powerful humans will harm the rest of us. In the case of Gebru and Crawford, the “powerful humans” that they are most concerned about are large tech companies. In the case of Kitchen and Lind, the “powerful humans” that they are most concerned about are foreign enemies of the U.S., notably China.

  • AI “doomers” or extreme pessimists are most worried about AI causing the end of the world. Eliezer Yudkowsky is, of course, the best known to readers of LessWrong, but other prominent examples include Nick Bostrom, Max Tegmark, and Stuart Russell. I believe these arguments are already familiar to readers of LessWrong, so I won’t repeat them here.

  • AI “boosters” or extreme optimists are most worried that we are going to miss out on AI saving the world. Examples of experts in this cluster would be Marc Andreessen, Yann LeCun, Reid Hoffman, Palmer Luckey, and Emad Mostaque. They believe that AI can, to use Andreessen’s recent phrase, “save the world,” and their biggest worry is that moral panic and overregulation will create huge obstacles to innovation.

These three positions are such that, on almost every important issue, one position is opposed to a coalition of the other two:

  • AI Doomers + AI Realists agree that AI poses serious risks and that the AI Boosters are harming society by downplaying these risks.

  • AI Realists + AI Boosters agree that existential risk should not be a big worry right now, and that AI Doomers are harming society by focusing the discussion on existential risk.

  • AI Boosters + AI Doomers agree that AI is progressing extremely quickly, that something like AGI is a real possibility in the next few years, and that AI Realists are harming society by refusing to acknowledge this possibility.

Why This Matters. The “AI Debate” is now very much in the public consciousness (in large part, IMHO, due to the release of ChatGPT), but it is also confusing to the general public in a way that other controversial issues, e.g. abortion, gun control, or immigration, are not. I argue that the difference between the AI Debate and those other issues is that those issues are, essentially, two-sided debates. That’s not completely true (there are nuances), but in the public’s mind they come down, at their essence, to two sides.

To a naive observer, the present AI debate is confusing, I argue, because various experts seem to be talking past each other, and the “expert positions” do not coalesce into the familiar structure of a two-sided debate with most experts on one side or the other. When there are three sides to a debate, one fairly frequently sees what look like “temporary alliances” in which A and C argue against B. But these are not temporary alliances; they are based on principles and deeply held beliefs. It’s just that, depending on how you frame the question, you wind up with “strange bedfellows”: two groups find common ground on one issue even though they are sharply divided on others.

Example: the recent Munk Debate showed a team of “doomers” arguing with a mixed team of one booster and one realist

The recent Munk Debate on AI and existential risk illustrates the three-sided nature of the debates and how, IMHO, the đŸ”șthree-sided framework can help make sense of the conflict.

To summarize: Yoshua Bengio and Max Tegmark argued that “AI research and development poses an existential threat.” In other words, they took what I’m calling the “AI Doomer” position.

Arguing for the other side was a team made up of Yann LeCun and Melanie Mitchell. When I first listened to the debate, I thought both Mitchell and LeCun made strong arguments, but I found it hard to make their arguments fit together.

But after applying the đŸ”șthree-sided framework, it appeared to me that LeCun and Mitchell were both strongly opposed to the “doomer” position, but for very different reasons.

LeCun, as an 🚀 AI Booster, argued for the vast potential and positive impact of AI. He acknowledged challenges but saw these as technical issues to be resolved rather than insurmountable obstacles or existential threats. He argued for AI as a powerful tool that can improve society and solve complex problems.

Mitchell, on the other hand, representing the ⚖ AI Realist perspective, questioned whether AI could, at least anytime soon, reach a stage where it could pose an existential threat. While she agreed that AI presents risks, she argued that the most important risks are immediate, tangible concerns like job losses and the spread of disinformation.

Analyzed under the đŸ”șthree-sided framework, then, the Munk Debate was between:

An “All Doomer” team of Bengio and Tegmark, each making Doomer arguments; versus

A “Mixed Realist/Booster” team of “Realist” Mitchell making Realist anti-Doomer arguments and “Booster” LeCun making Booster anti-Doomer arguments.

Thought experiment: other debates.

  • Let’s imagine another debate on the question “At this stage, over-regulation of AI is a much bigger threat to humanity than under-regulation.” My take is that the people taking what I’m calling the AI Booster position, such as Marc Andreessen, Yann LeCun, and Reid Hoffman, would agree with this proposition, while both the AI Doomers, such as Yudkowsky, Bostrom, and Soares, and the AI Realists, such as Melanie Mitchell and Timnit Gebru, would argue that under-regulation is the greater danger.

  • Yet another debate could be on the question “We are likely to see something like AGI in the next 5 years.” Here, the Realists, many of whom believe that claims for AI are vastly overhyped, would argue the “no” side, while the Boosters and the Doomers would team up against the Realists to argue that yes, AGI is likely in the relatively near term.