I wouldn’t claim that any human is actually able to describe their own utility function; utility functions are much too complex and riddled with strange exceptions and pieces of craziness like hyperbolic discounting.
I also think that there’s some confusion surrounding the whole idea of utility functions in reality, which I should have been more explicit about. Your utility function is just a description of what you want/value; it is not explicitly about maximizing happiness. For example, I don’t want to murder people, even under circumstances where it would make me very happy to do so. For this reason, I would do everything within my power to avoid taking a pill that would change my preferences such that I would then generally want to murder people; this is the murder pill I mentioned.
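The murder-pill reasoning can be sketched as a toy model. This is purely illustrative (the functions, outcomes, and numbers are all my own assumptions, not anyone's real preferences): the key point is that the agent scores possible futures with its *current* utility function, so a pill that would rewrite its preferences gets evaluated by the preferences it has now.

```python
# Toy sketch (all names and numbers are illustrative assumptions):
# an agent evaluates futures with its *current* utility function, so a
# preference-rewriting pill is judged by the preferences it has today.

def current_utility(outcome):
    """Current preferences: happiness matters, but murder carries a large penalty."""
    u = outcome["happiness"]
    if outcome["murders"] > 0:
        u -= 1000  # strong standing preference against murder
    return u

# Two candidate futures: refuse the pill, or take it and act on the new preferences.
refuse_pill = {"happiness": 5, "murders": 0}
take_pill = {"happiness": 9, "murders": 1}  # the post-pill self would happily murder

# The choice is made with today's utility function, so the pill is rejected,
# even though the post-pill self would endorse having taken it.
best = max([refuse_pill, take_pill], key=current_utility)
```

Note that the post-pill self, scoring the same two futures with its *new* utility function, would pick the other one; that asymmetry is exactly why the current self resists the modification.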
As for swapping the utilities of spinach and cheesecake, I think the only way that makes sense to do so would be to change how you perceive their respective tastes, which isn’t a change to your utility function at all. You still want to eat food that tastes good; changing that would have much broader and less predictable consequences.
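The same distinction can be made explicit in a toy model (again, the mappings and values here are my own illustrative assumptions): if utility is defined over taste experiences rather than over food names, then "swapping spinach and cheesecake" only edits the perception mapping, and the utility function itself is untouched.

```python
# Toy sketch (illustrative assumption): utility is defined over taste
# experiences, not food names, so swapping which food produces which taste
# changes perception while leaving the utility function unchanged.

def utility_of_taste(taste):
    """The actual utility function: you want food that tastes good."""
    return {"delicious": 10, "bitter": 2}[taste]

# Perception maps foods to taste experiences; this is what the swap edits.
perception_before = {"cheesecake": "delicious", "spinach": "bitter"}
perception_after = {"cheesecake": "bitter", "spinach": "delicious"}

def utility_of_food(food, perception):
    """Derived preference over foods: composed from perception and taste-utility."""
    return utility_of_taste(perception[food])
```

Before the swap you prefer cheesecake; after it, spinach — yet `utility_of_taste`, the preference over experiences, is identical in both cases.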
This does raise an interesting issue: if I’m a strictly selfish utilitarian, do I not want whichever utility function would attain the highest expected utility? Selfishness isn’t necessary to the question; it just makes it much simpler.
Only if your current utility function is “maximize expected utility.” (It isn’t.)