“Focus on existential risk is a distraction from the real issues.” A false fallacy?

A retort familiar to people discussing existential risk is that these conversations distract us from the “real issues” and “current harms” of AI.

On the surface this retort is easily debunked:

  • What is more “harmful” or an “issue” than extinction of our species?

  • The current issues one speaks of affect a smaller number of people than the 100% of humanity who would be affected by mass death.

  • Whataboutism.

  • This retort itself is a distraction from the real issue of possible extinction.

  • The preparation and effort required to deal with x-risk is greater than the preparation and effort to deal with the current harms.

  • We already have laws for current harms, but few regulations to deal with extinction-level threats.

  • The urgency of dealing with fast take-off scenarios is greater than with bias and discrimination.

  • This list could be extended with further responses along the same lines.

And yet, below are three reasons why discussing existential risk does distract us from the real issues and current harms:

First, the statement is factually correct. Humans have only so much attention to pay to AI. Time spent talking about x-risk is time not spent talking about current harms. If you pay lawyers to draft a bill aimed at preventing x-risk, and focus on that, they may neglect to include provisions addressing current harms. The same goes for the limited and precious time of any politician you manage to get hold of.

Second, focusing one’s mind on existential risk is just that: focusing, and focusing usually means saying “no” to other things. Imagine you are buying a ticket to the Arctic to take pictures of the blooming tundra. The brochure is all about the pictures and the cameras best suited to the most striking shots of tussock grasses, with not a word about how to get there, how to dress, or what visas you need. Just pay $10,000 and focus on the tundra. You call the sales infoline and a robotic voice tells you, “You really need to consider the flowering tundra in its splendour. We will get you there.” “What visa do I need? Which country is it in?” you wonder. “Please focus on the tundra,” the highly convincing salesbot replies. That is what “focusing” on existential risk means: it means risking failure to consider other relevant factors in decisions and policies around AI.

Even if one were to believe in “transformative AI” and “safe AGI”, one needs a plan to get there. Obstacles such as bias and discrimination and copyright violations and … must be addressed now, while they are still addressable, not “when we get there”.

Third, “existential risk” characterises the level of a risk, not its nature. Extinction can come in many forms: through discriminatory decision making (the Australian Robodebt scandal is a good example); through having nothing to eat because of under-employment, which is already starting to affect copywriters and digital artists; or through foom. All these issues can be magnified and intertwingled as AI companies scale AI (in model sizes, use-cases, capital, hardware, influence, algorithmic improvements, etc.). Addressing and solving AI issues as and when they emerge is important, and we are already behind on that.

We need to address old issues, monitor new ones and react quickly, and work on ways to prevent future ones. Discussion of existential risk (including “alignment plans”, if you are writing one) must not be decoupled from remedying other harms and building a solid way forward.