The problem of whether the goals and values of an artificially intelligent agent will align with human goals and values can be reduced to this problem: will the goals and values of different human agents ever align with each other?
Aligning human agents is a subproblem, but solving it does not automatically align all agents in a world where the most powerful agents are not human.