Oh, come on, Lara, did you really think I hadn’t thought of that? One of the reasons why Friendly AI isn’t trivial is that you need to describe human values like autonomy—“I want to optimize my own life, not have you do it for me”—whose decision-structure is nontrivial, e.g., you wouldn’t want an AI choosing the exact life-course for you that maximized your autonomy.