Why do you ask? This is a somewhat interesting question, but I don’t usually spend time on it. I think alignment/AI thinkers don’t think about it much because we’re usually more concerned with getting an AGI to reliably pursue any target. If we got it to actually have humanity’s happiness as its goal, in the way we meant it and would like it, we’d just see what it does and enjoy the result. But getting it to reliably do anything at all is one problem, and making that thing something we actually want is another huge problem. See "A case for AI alignment being difficult" for a well-written intro on why most of us think alignment is at least fairly hard.