(A) doesn’t seem necessary; it’s just that the most straightforward path goes through it. If superhuman AGI turned out to be impossible for some reason, I do believe that AGI would still be a major existential risk, greater than any other we face now, though less severe than if superhuman AGI were possible. Finding that out would also be extremely surprising.
(B) is totally unnecessary. There are many ways that AGI (especially superhuman AGI) could result in the extinction of humanity without ever actually valuing the extinction of humanity. I think those make up the bulk of the paths to extinction, so ruling out (B) would not decrease my concern over existential risks by more than a few percent.
(C) should really be called (A1), since it’s something that drastically multiplies the risk if and only if superhuman AGI is possible. If we could somehow rule out (C), it would reduce existential risk, but not by nearly as much as eliminating (A). It would also be extremely surprising to learn that our current level of intelligence and development already encompasses every way of killing humans that the universe allows.
(1), (3), and (5)-(9) don’t address the question of whether “one must believe (with at least minimal confidence) in all of the following points in order to believe that AGI poses a unique existential risk”. An existential risk doesn’t stop being an existential risk just because you think it’s okay if our species dies, or that we can’t or shouldn’t do anything about it anyway.
(2) is the first one that I find actually necessary, albeit in a tautological sense. You logically can’t have an existential risk to something that fundamentally cannot be destroyed. I would be very surprised to find out that humanity has such plot armour.
(4) is tautologically necessary, because of the word “unique” in the proposition. Even then, the word “unique” doesn’t carry much weight for me. If there turned out to be more than one existential risk of similar or greater magnitude, that would not make anything better about AGI x-risk. It would just mean that the world is even worse.