[Question] Potential alignment targets for a sovereign superintelligent AI

I’d like to compile a list of potential alignment targets for a sovereign superintelligent AI.

By an alignment target, I mean something like what goals/values/utility function we might want to instill in a sovereign superintelligent AI (assuming we’ve solved the alignment problem).

Here are some alignment targets I’ve come across:

Examples, reviews, critiques, and comparisons of alignment targets are welcome.
