To reliably avoid 1 (the human's expectation of the results of an AI implementing a goal differing from the results the AI actually works toward) and still do anything reasonably useful, I think you pretty much have to scan the human's brain to find out what their actual expectations were.
If you do that non-invasively, that's some pretty high technology you are talking about there, which pushes this scenario way off into the future, long after we have created machine intelligence.