I’m glad to see someone talking about pragmatism!
I find it interesting that the goal of a lot of alignment work seems to be to align AI with human values, when humans with human values spend so much of their time in (often lethal) conflict. I’m more inclined to the idea of building AI with a value-set that is complementary to human values in some widely-desirable way, rather than literally having a bunch of AIs that behave like humans.
I wonder if this perspective intersects with some of your points about thick and thin moralities, as well as social technology. Am I in the right ballpark to suggest that what you are after is a global thin morality, defined via social technology, that allows AI to participate in diverse, thick human cultures without escalating conflict between groups and contexts beyond an acceptable level?
In a sense, at the risk of oversimplifying, you are looking for a pragmatic way to keep conflict low in a world of diverse, highly powerful AIs?