I think this argument-complex is stronger than the AI risk folk admit (and I know how to strengthen your argument at various points). A plausible off-beat counter is that humans have been getting less moral (and their aesthetic tastes have gotten worse) over time despite getting ever closer to building AI: what you see in history is might consistently rewriting the rules of morality so that might makes right. Predicting that this trend will continue may be accurate in some descriptive sense (e.g. if you take people's values at face value as defining morality), but it doesn't seem like sound moral philosophy. In this sense a singularity would be like a glorious communist revolution: seemingly inevitable, seemingly the logical endpoint of morality, yet in fact incredibly destructive both physically and culturally. The problem with AI is that, even if in the limit intelligence and morality (might and right) are the same thing, an AI could set up the equivalent of a communist dictatorship and hold on to it for as long as it takes a black hole to evaporate. And even if the new communist dictatorship were better than what came before it, it still seems like we have a shot at ensuring that AI jumps straight to 100% intelligence and 100% morality without getting caught up somewhere along the way. But of course, even the communist dictatorship scenario isn't really compatible with the orthogonality thesis...