An aligned superintelligence would work with goals of the same kind, even if it's aligned to early AGIs rather than humans. A goal-as-computation may be constant, just as a program's code may be constant, but what's known about its behavior isn't. So the way it guides an agent's actions develops as the computation proceeds, ultimately according to the decisions the underlying humans/AGIs (and their future iterations) would make in various hypothetical situations. Also, an uplifted (grown-up) human could personally be a superintelligence; it's not a different kind of thing with respect to the values it could have.