If you had many different ASIs with many different emergent value systems, all of which were trained with the intention of being aligned to human values, and which had no direct access to each other's values and no ability to negotiate with each other directly, then "maximize (or at least respect, and set aside a little sunlight for) human values" could serve as a Schelling point for coordinating between them.
This is probably not a very promising plan in practice: deviations from intended alignment are almost certainly nonrandom in ways the ASIs themselves could determine, and ASIs could also find channels of communication (including direct communication of goals) that we can't anticipate. Still, one could imagine a world where this is an element of defense in depth.
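To make the focal-point mechanism above concrete, here is a toy sketch (all numbers, payoffs, and names are hypothetical illustrations, not anything claimed in the original scenario). Each agent's realized goal is the publicly known training target plus independent random drift, and each must choose a policy without communicating. Because coordination pays and the training target is the one option every agent knows every other agent can see, deferring to the target is the focal choice:

```python
import math
import random

random.seed(0)

TARGET = (1.0, 0.0)        # publicly known training target ("human values")
NOISE = 0.3                # magnitude of independent per-agent goal drift
COORDINATION_BONUS = 1.0   # extra value when all agents act on one policy
N_AGENTS = 5

def emergent_goal():
    # Each agent's actual goal: the target plus independent random drift.
    return tuple(t + random.uniform(-NOISE, NOISE) for t in TARGET)

def choose_policy(goal):
    # Payoff from pursuing the private drifted goal alone: no coordination.
    solo = 1.0
    # Payoff from deferring to the focal target, assuming the others do too:
    # the coordination bonus minus the small loss from the goal drift.
    focal = COORDINATION_BONUS + 1.0 - math.dist(goal, TARGET)
    # Each agent knows every other agent faces the same calculation, so the
    # commonly visible target is the natural point to converge on.
    return "target" if focal > solo else "own goal"

choices = [choose_policy(emergent_goal()) for _ in range(N_AGENTS)]
print(choices)                                  # all "target" when drift is small
print("coordinated:", len(set(choices)) == 1)   # True
```

Note that the sketch bakes in the assumption the caveat above attacks: the drifts are independent and the agents cannot communicate. If deviations were correlated, or if the agents found a side channel for sharing goals, other focal points could dominate and the coordination argument would weaken.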