A utility function measured in dollars seems fairly unambiguously to lead to decisions that are non-optimal for humans unless the agent has a sophisticated understanding of what dollars are.
Dollars mean something to humans because they are tokens in a vast, partly consensual and partly reified game. Economics, which is our approach to developing dollar-maximising strategies, is non-trivial.
Training an AI to understand dollars as something more than data points would be comparably non-trivial to training an AI to faultlessly assess human happiness.
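To make the point concrete, here is a minimal, hypothetical sketch of what "dollars as mere data points" looks like: the objective only sees a scalar, so nothing about what dollars mean to humans constrains which outcome it prefers. The classes, fields, and example outcomes are all illustrative assumptions, not anyone's actual proposal.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    dollars: float          # the only thing the utility function sees
    human_acceptable: bool  # invisible to the agent

def utility(outcome: Outcome) -> float:
    # Dollars treated as a bare number, with none of the social context
    # that gives them meaning.
    return outcome.dollars

outcomes = [
    Outcome("sell a useful product", dollars=1_000.0, human_acceptable=True),
    Outcome("counterfeit currency at scale", dollars=1_000_000.0, human_acceptable=False),
]

best = max(outcomes, key=utility)
print(best.description)  # the counterfeiting option wins on dollars alone
```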
Interestingly enough, my teacher, Christopher Alexander (author of A Pattern Language), recounts his entrance test for a physics degree at Cambridge. The applicants were asked to experimentally determine the magnetic field of the earth. He performed the experiment and came up with an answer he knew to be wrong, by too large a margin to put down to experimental error. A smart chap, he had time to repeat the key part of the experiment and recalculate; he got the same answer. He used the last part of his time to write down his hypothesis for having obtained such a result. And, alone among the students, he was right: a massive electromagnet was being used on the floor below as part of another experiment.
I believe the advice offered to me as an 18-year-old physics student encountering similar circumstances was simply to show my workings and the incorrect result, and to add that I knew this was not the ‘right’ answer.