[Question] Forget AGI alignment, are you aligned with yourself?

Here’s the problem to solve:

- value alignment between AI and broad societal values

Here are some other unsolved problems:

- value alignment across factions of society

- value alignment between two members of society

- value alignment between two members of society who have a 99% overlap in their values and beliefs

- value alignment between a person and the very same person from 5 years ago

- value alignment between a person and a clone of the person with cognitive superpowers

- value alignment between a person and a clone of the person in a different mood

- value alignment between a person and a clone of the person

This might sound weird, but suppose I were locked in a room with a clone of myself. Also in the room is a button that grants absolute totalitarian control over the future of humanity via a ton of previously uninvented tech. I imagine it would start with the two of us being wary: physically protecting ourselves, not pushing the button, and starting a conversation, each of us unsure of our own goals and of how well they align with those of the other person in the room. Even though the other person is my clone, the conversation could unfold asymmetrically due to non-deterministic effects or asymmetric instantiation (we each walked into the room with different thoughts in our minds). And as the conversation evolves, we will diverge, be it over minutes or weeks. I can totally imagine a future where we both hit on some key point of difference in the conversation that causes us to fight to the death in that very room for that control. (Assuming, of course, that we both care that strongly about the future of humanity to begin with.)

Wondering if anyone else relates to this.