I agree that all of these seem like good aspects of human-AI interaction to have, especially for narrow AI systems. For superhuman AI systems, there's a question of how much of this the AI should infer for itself vs. make sure to ask the human about.
There is a problem of "moral unemployment": if a superintelligent AI does all the hard work of figuring out "what I should want", it strips me of one of the last pleasant duties I have.
E.g., Robot: "I know that your deepest desire, one you may not yet be fully aware of but would, after much suffering, eventually discover for yourself, is to write a novel. And I have already written this novel for you: the best one you could possibly have written."