If humans on the upper end of the economic/political inequality scale are there because of command over superhuman AI, what reason would they have to command the obedience (or protect the existence) of humans on the lower end of that scale? The kings of “the Old Deal” needed underlings to feed them and fight in their wars. Without that need, retaining underlings is a fetish, not a necessity — like hiring someone to wash dishes for you by hand instead of owning a dishwasher, or hiring a chauffeur instead of using a self-driving car.
I agree that, strictly speaking, they no longer need to keep them alive. To be clear, this analysis holds almost as well if you replace the people with AIs, with the exception of the points on violence; most of the analysis doesn't depend on humans being around to live in that world or to be commanded.
This seems to lead inevitably to the conclusion that anyone who opposes genocide must oppose the creation of superhuman AI, or at least of privately controlled superhuman AI. (Which shouldn't be a surprise from a classic AI-safety standpoint.)
Actually, I disagree with this conclusion, because I didn't say that AI developers or the AIs themselves would attempt to exterminate humanity. I only said that my analysis was compatible with that outcome, and so was more general than you thought.
To reach that conclusion, you would also need a view on how likely this outcome actually is.