I think this post makes an important point, or rather, raises a very important question, with some vivid examples to get you started. On the other hand, I feel it doesn't go far enough, and probably should have. I wish it had, for example, sketched a concrete scenario in which the future is dystopian not because we failed to make our AGIs "moral" but because we succeeded; or gotten a bit more formal and complemented the quotes with a toy model (inspired by them) of how moral deliberation in a society might work under post-AGI-alignment conditions, and how that could systematically lead to dystopia unless we manage to be foresightful and set up the social conditions just right.
I recommend not including this post, and instead including this one and Wei Dai’s exchange in the comments.