The history of autocracies and monarchies suggests that taking something with the ethical properties of an average human being and handing it unconstrained power doesn’t usually work out very well. So yes, creating an aligned ASI that is safe for us to share a planet with does require creating something morally ‘better’ than most humans. I’m not sure it needs to be perfect or ideal, as long as it is good enough and aspires to improve: then it can help us create better training data for its upgraded next version, bringing that version closer to fully aligned; this is an implementation of Value Learning.