I was the one who downvoted. My reasoning, at a fundamental level, is that much of the argument rests on fabricating options that only appear to work because they ignore why value disagreement is less tolerable in an AI-controlled future than it is now.

I have a longer comment below, and @sunwillrise makes a similar point, but AI safety's attitude of minimizing value conflict makes more sense than the post gives it credit for: the mechanisms that keep value disagreements from blowing up into takeover attempts or mass violence rely on certain features of modern society that AGI will break, and the post says nothing about how to actually make its vision sustainable:
https://www.lesswrong.com/posts/iJzDm6h5a2CK9etYZ/a-conservative-vision-for-ai-alignment#eBdRwtZeJqJkKt2hn