For example, I would regard hyperbolic discounting and diminishing returns both as examples of 2D utility.
The OP was triggered by thinking about wireheading and the reasoning that leads some people to adopt it as desirable, or at least not to perceive it as something that is obviously undesirable for humans. Here the first question that comes to my mind is whether the adoption of expected utility maximization could reduce the complexity of human values to a narrow set, such as the maximization of desirable bodily sensations.
I reject wireheading myself. But on what basis? To have a consistent utility function one needs to know how to assign utility to new goals one might encounter. If utility is not objectively grounded in some physical fact, e.g. bodily sensations or qualia, then how could we judge whether a new goal outweighs some of our previous goals? For example, if a stone age hunter-gatherer learns about the opera, how can he adapt his utility function to such a new goal that cannot be defined in terms of previous goals? Should he assign some arbitrary amount of utility to it based on naive introspection? That seems equivalent to having no utility function at all, or would at least lead to some serious problems. Here I suspect that a valid measure could be the amount of desirable bodily sensations that are expected as a result of taking a certain action.
If we were to measure utility in units of bodily sensations, then by maximizing utility we maximize bodily sensations. Consequently, as expected utility maximizers, we should assign the most utility to universes where we experience the largest amount of desirable bodily sensations. This might lead to the reduction of complex values to the one action that yields the most utility: wireheading. I don't agree with this, of course; it is just one line of thought to explain where I am coming from.
Now measuring utility in terms of bodily sensations might lead any human utility maximizer to abandon his complex values. But it will also exclude a lot of human values by the very unit in which utility is grounded. For example, humans care about other humans. Some humans even believe that we should not apply discounting to human well-being and that the mitigation of human suffering never hits diminishing returns. This is incompatible with the maximization of bodily sensations, because you can't expect your positive feelings to increase linearly with the number of people you help (at least as long as you don't suspect that universal altruism is instrumental to your own long-term well-being, etc.). But this example of where an objective grounding of utility in bodily sensations breaks down is also an example of another reduction of complex values to a narrow set of values, i.e. human well-being. If you measure utility in the number of beings you save, your complex values are again outweighed by a single goal: saving people.
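The tension here can be made concrete with a toy model (the functional forms are purely my own illustrative assumptions, not anything the psychology literature pins down): if utility is grounded in your own positive feelings, those feelings plausibly saturate, say logarithmically, in the number of people helped, while an undiscounted impartial concern for well-being scales linearly.

```python
import math

def felt_utility(people_helped: int) -> float:
    """Toy assumption: positive bodily sensations saturate (logarithmic growth)."""
    return math.log1p(people_helped)

def impartial_utility(people_helped: int) -> float:
    """Toy assumption: undiscounted concern for well-being scales linearly."""
    return float(people_helped)

# Going from helping 10 people to helping 1000 multiplies impartial
# utility by 100, but felt utility grows only modestly - the two
# groundings come apart as the numbers get large.
for n in (10, 1000):
    print(n, round(felt_utility(n), 2), impartial_utility(n))
```

Under these assumptions, no grounding in bodily sensations can reproduce the linear, undiscounted valuation some people endorse.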
Ummm, I'm not quite sure whether you're arguing with me or agreeing with me. I'm asserting that the function that assigns utility to a particular set of sensory data can have multiple parameters. If it turns out that a certain setting of these parameters is a global maximum, that's fine.
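A multi-parameter utility function of the kind described might be sketched like this (the particular parameters and weights are purely illustrative assumptions on my part):

```python
def utility(sensation: float, others_helped: float, novelty: float,
            weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Toy utility over several parameters of the sensory data,
    rather than over a single axis like bodily sensation."""
    w1, w2, w3 = weights
    return w1 * sensation + w2 * others_helped + w3 * novelty

# A state that maxes out one parameter need not be the global maximum:
pure_sensation = utility(1.0, 0.0, 0.0)   # 0.5
mixed_state = utility(0.6, 0.8, 0.7)      # 0.68
print(mixed_state > pure_sensation)       # the mixed state scores higher
```

The point is only structural: with multiple parameters, wireheading (maxing one parameter) is not automatically the argmax of the function.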