This seems to inevitably lead to the conclusion that anyone who opposes genocide must oppose the creation of superhuman AI; or at least privately-controlled superhuman AI. (Which shouldn’t be a surprise from a classic AI-safety standpoint.)
I disagree with this conclusion, actually. I didn’t say that AI developers or AIs themselves would attempt to exterminate humanity; I only said that my analysis was compatible with that outcome, and so was more general than you thought.
In order to reach this conclusion, you also need opinions on how likely this is to happen.