So, in general not having your values changed is an Omohundro goal, right? But I would suggest that if you change your utility function[1] from

`U(w) = weightedSumSapientSatisfaction(w) + personalHappiness(w) + someIdiosyncraticPreferences(w)`

or whatever it is, to

`U(w) = weightedSumSapientSatisfaction(w) + personalHappiness(w) + someIdiosyncraticPreferences(w) + 5000`

all your choices that involve explicit expected utility comparisons will come out the same as before, but you'll be happier.

[1] There are a lot of issues with utility functions as a framing for describing actual human motivations, but bear with me.
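To spell out why the constant can't affect any decision: expectation is linear, so E[U(w) + 5000] = E[U(w)] + 5000 for every action, and the 5000 cancels out of any comparison between actions. Here is a minimal sketch of that cancellation; the two lotteries and the utility numbers are made up purely for illustration:

```python
# Hypothetical outcome distributions for two candidate actions,
# and hypothetical utilities over the outcomes (numbers are invented).
p_outcomes_a = {"good": 0.5, "okay": 0.3, "bad": 0.2}
p_outcomes_b = {"good": 0.2, "okay": 0.6, "bad": 0.2}
utility = {"good": 10.0, "okay": 3.0, "bad": -4.0}

def expected_utility(probs, u, bonus=0.0):
    """Expected utility of a lottery, with U optionally shifted by a constant."""
    return sum(p * (u[outcome] + bonus) for outcome, p in probs.items())

# E[U + c] = E[U] + c, so the constant cancels in any comparison:
# the preference between the two actions is identical before and after the shift.
base = expected_utility(p_outcomes_a, utility) > expected_utility(p_outcomes_b, utility)
shifted = (expected_utility(p_outcomes_a, utility, bonus=5000)
           > expected_utility(p_outcomes_b, utility, bonus=5000))
assert base == shifted
```

The same argument goes through for any positive affine transformation aU + b with a > 0, which is why utility functions are standardly said to be defined only up to such transformations.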