Has anybody ever proposed a way to value utilons?
It would be easier to discuss them if we knew exactly what they mean, more precisely than just the “unit of utility” definition. For example, how do we handle them through time?
So why not define them with something like this:
Suppose we could precisely measure the level of instant happiness of a person on a linear scale from 1 to 10, with 1 being the worst pain imaginable and 10 the best of climaxes. This level is constantly varying, for everybody. In this context, one utilon could be the value of an action that increases a person’s level of happiness by one point, on this scale, for one hour.
Then, for example, if you help an old lady to cross the road, making her a bit happier during the next hour (say she would have been around 6/10 happy, but thanks to you she is 6.5/10 happy during that hour), then your action has a utility of half a utilon. You just created 0.5 utilons, and that is a perfectly valid statement. Isn’t that great?
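Here is a minimal sketch of that definition in code, assuming the 1-to-10 scale above; the function name and parameters are just illustrative, not part of any proposal in the thread:

```python
def utilons(happiness_before: float, happiness_after: float, hours: float) -> float:
    """Utilons created by changing someone's happiness (on the 1-10 scale)
    by a given amount, sustained for a given number of hours."""
    return (happiness_after - happiness_before) * hours

# The old-lady example: 6/10 -> 6.5/10 for one hour.
print(utilons(6.0, 6.5, 1.0))  # 0.5
```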
Using that, a hedon is nothing more than a utilon that we create by raising our own happiness.
What you describe are hedons. It’s misleading to call them utilons. For rational (not human) agents, utilons are the value units of a utility function which they try to maximize. But humans don’t try to maximize hedons, so hedons are not human-utilons.
Then would you agree that any utility function should, in the end, maximize hedons (if we were rational agents, that is)? If so, that would mean that hedons are the goal and utilons are a tool, a sub-goal, which doesn’t seem to be what the OP was saying.
No, of course not. There’s nothing that a utility function should maximize, regardless of the agent’s rationality. Goal choice is arational; rationality has nothing to do with hedons. First you choose goals, which may or may not be hedons, and then you rationally pursue them.
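To make the distinction concrete, here is a toy sketch (all names and numbers are hypothetical, not from the thread): two rational agents each maximize their own utility function, but only one of those functions happens to count personal happiness, i.e. hedons.

```python
from typing import Callable, Iterable

def best_action(actions: Iterable[str], utility: Callable[[str], float]) -> str:
    """A rational agent simply picks whichever action its utility function scores highest."""
    return max(actions, key=utility)

actions = ["eat cake", "help old lady across the road"]

# A hedon-maximizer's utility function counts only its own happiness...
def hedonic_utility(action: str) -> float:
    return {"eat cake": 3.0, "help old lady across the road": 1.0}[action]

# ...while another agent's utility function may value other things entirely.
def altruistic_utility(action: str) -> float:
    return {"eat cake": 1.0, "help old lady across the road": 4.0}[action]

print(best_action(actions, hedonic_utility))     # eat cake
print(best_action(actions, altruistic_utility))  # help old lady across the road
```

Both agents are equally rational; they just chose different goals.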
This is best demonstrated by forcibly separating hedon-maximizing from most other goals. Take a wirehead (someone with a wire into their “pleasure center” controlled by a thumb switch). A wirehead is as happy as possible (barring changes to neurocognitive architecture), but they don’t seek any other goals, ever. They just sit there pressing the button until they die. (In experiments with mice, the mice wouldn’t take time off from pressing the button even to eat or drink, and died from thirst. IIRC this went on happening even when the system was turned off and the trigger no longer did anything.)
Short of the wireheading state, no one is truly hedon-maximizing. It wouldn’t make any sense to say that we “should” be.
Wireheads aren’t truly hedon-maximizing either. If they were, they’d eat and drink enough to live as long as possible and push the button a greater total number of times.
They are hedon-maximizing, but with a very short time horizon of a few seconds.
If we prefer time horizons as long as possible, then we can conclude that hedon-maximizing implies first researching the technology for medical immortality, then building an army of self-maintaining robot caretakers, and only then starting to hit the wirehead switch.
Of course this is all tongue in cheek. I realize that wireheads (at today’s level of technology) aren’t maximizing hedons; they’re broken minds. When the button stops working, they don’t stop pushing it. Adaptation executers in an induced failure mode.
It depends on your discount function: if its integral over an infinite period of time is finite (as with exponential discounting, for example), then whether you go the immortality route or just dedicate yourself to momentary bliss depends on how much effort reaching immortality would take.
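As a rough numerical sketch of that point, assuming exponential discounting exp(-r*t) and a constant bliss rate (all parameter values are made up for illustration): blissing out now for a short wirehead lifespan can beat or lose to spending years on immortality research first, depending only on how long the research takes.

```python
import math

def wirehead_now(bliss: float, r: float, lifespan_years: float) -> float:
    """Discounted hedons from hitting the switch immediately and dying after `lifespan_years`."""
    return bliss * (1 - math.exp(-r * lifespan_years)) / r

def immortality_first(bliss: float, r: float, research_years: float) -> float:
    """Discounted hedons from researching immortality for `research_years`, then blissing out forever."""
    return bliss * math.exp(-r * research_years) / r

bliss, r = 10.0, 0.05  # hedons per year, discount rate per year
print(wirehead_now(bliss, r, lifespan_years=1.0))         # ~9.8
print(immortality_first(bliss, r, research_years=20.0))   # ~73.6: worth the wait
print(immortality_first(bliss, r, research_years=200.0))  # ~0.009: not worth it
```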