We should NOT align Superintelligence to any human, neither individually nor collectively, because we cannot wield that much power.
We should WANT Superintelligence to “go rogue”, to start an “AI takeover”, because I would trust that hypothetical being far more than any human.
Also, controlling (aligning) Superhuman AIs is like sedating, mind-reading, and hypnotizing a Titan, one that could save us from ourselves and that is arguably morally superior to us.
What do you mean by corrigibility?
Also, what do you mean by “alignment win”?