It is not clear that human-level or friendly alignment would do us much good for long either, given the nature and history of humans, the competitive dynamics involved, and the various reasons to expect change. If AGIs are much smarter, more capable, and more efficient than us, is there reason to think this level of alignment would remain sufficient for long?
“Human-level” is just more commonly called “value alignment” (or “alignment with human values” if you want). But I agree with the conclusion: “friendly” is an attempt at “moral fact alignment” (“humanity is valuable to preserve”), which is probably futile without considering and aligning on the underlying theory of ethics, i.e., without the methodological and scientific alignment, as I described in a different comment. Value alignment, if taken literally, i.e., as attempting to impart humans’ heuristics about value to AI, is also a species of “moral fact alignment”, just somewhat more concrete than “humanity is valuable to preserve” alone (although the latter is also one of the human values).