Dependencies for AGI pessimism

Epistemic status: mildly confident

One must believe (with at least minimal confidence) in all of the following points in order to believe that AGI poses a unique existential risk:

A) AGI (at a superhuman level) is possible

B) AGI has a significant chance of finding value in destroying humanity (at least humanity as we think of it)

C) AGI will be capable of killing all or nearly all humans in ways that agents other than a superintelligent AGI cannot, or likely will not.

If you believe that all of the above are true, are you forced to worry about existential AI risk? Not necessarily. Here are some other, more subtle/fundamental premises one must accept:

  1. Humanity should not be destroyed.

  2. Humanity can, in practice, be destroyed. (There may be some religious or philosophical views which hold that extinction is not possible. This is more a generalization of C than a separate dependency.)

  3. It’s possible to have an effect on the risk level from superintelligence. (If not, there’s no use worrying about it.)

  4. There is no other near-term existential risk which is orders of magnitude more likely. (If there were, it would be justifiable not to be concerned about AGI, for the same reason that we aren’t deeply preoccupied with near-term asteroid impact risk.)

  5. One should be concerned about risks which one may be able to help prevent.

  6. One should care about the long-term future, and about risks to others beyond the self (this is also a dependency for 1, but not identical to it, because it’s possible in theory to be both a longtermist and a misanthrope).

  7. Taking practical steps based on logical thinking is a reasonable way to deal with the world. (If you don’t believe in logic, then you could presumably accept every other premise here and still not change your mind, since nothing stops you from contradicting yourself. I’m not sure this one is necessary to include, but I may be wrong.)

If a counterexample exists (someone who is concerned about existential risk from AI but doesn’t believe all of the above, or vice versa), please let me know and I will update accordingly.