Let me put it this way: I see many people operating with a model where AGI = doom with 100% probability. I haven't seen any evidence for that, which makes me think the real probability is much lower; otherwise I would be reading a lot of good arguments, but that is not the case. The claim that a superintelligence can kill all humans is taken for granted, and I am pushing against it precisely because I haven't seen any good arguments for how an AGI would actually do that.
It seems like someone has already put in the effort to give you a list of ways an AGI could kill all humans, so I don't have to.
I'm not at all impressed by that list. I could come up with more ideas myself; that doesn't mean anything.