I agree that, strictly speaking, they don’t need to keep them alive anymore. To be clear, this analysis holds almost as well if you replace people with AI, with the exception of the points on violence; most of it doesn’t depend on people being around to live in the resulting world or to be commanded.
This seems to inevitably lead to the conclusion that anyone who opposes genocide must oppose the creation of superhuman AI; or at least privately-controlled superhuman AI. (Which shouldn’t be a surprise from a classic AI-safety standpoint.)
I actually disagree with this conclusion, because I didn’t say that AI developers or AIs themselves would attempt to exterminate humanity. I only said that my analysis was compatible with that outcome, and so was more general than you thought. To reach your conclusion, you also need a view on how likely that outcome is.