I appreciate your emphasis on the bottom line that (IIUC) we agree upon: rogue superintelligence would permanently ruin the future and (at best) relegate humanity to something like zoo animals.
As I understand your arguments, they boil down to (a) maybe the AI will care a little and (b) maybe the AI will make backups of humans and sell them to aliens.
My basic response to (a) is: weak preferences about humans (if any) would probably get distorted, and existing people probably wouldn’t be the optimum (and if they were, the result probably wouldn’t be pretty). cf
My basic response to (b) is: I’m skeptical of your apparent high probabilities (>50%?) on this outcome, which looks somewhat specific and narrow to me. I also expect most implementations to route through a step most people would call “death”.
In case (b), the traditional concept of “death” gets dodgy. (What if copies of you are sold multiple times? What if you only start running in an alien zoo billions of years later? What if the aliens distort your mind before restoring you?) I consider this topic a tangent.
Mostly I consider these cases to be exotic enough and death-flavored enough that I don’t think “maybe AIs will sell backups of us to aliens” merits a caveat when communicating the basic danger. But I’m happy to acknowledge the caveat, and that some people think it’s likely.
@So8res[1] responded on twitter; I'm copying his response here for completeness:
If Nate cross-posts it himself, I’ll delete this comment.
[1] Link to Nate’s comments.