No, of course not. There’s nothing that a utility function should maximize, regardless of the agent’s rationality. Goal choice is arational; rationality has nothing to do with hedons. First you choose goals, which may or may not be hedons, and then you rationally pursue them.
This is best demonstrated by forcibly separating hedon-maximizing from most other goals. Take a wirehead (someone with a wire into their “pleasure center” controlled by a thumb switch). A wirehead is as happy as possible (barring changes to neurocognitive architecture), but they don’t seek any other goals, ever. They just sit there pressing the button until they die. (In experiments with mice, the mice wouldn’t take time off from pressing the button even to eat or drink, and died from thirst. IIRC this went on happening even when the system was turned off and the trigger no longer did anything.)
Short of the wireheading state, no one is truly hedon-maximizing. It wouldn’t make any sense to say that we “should” be.
Wireheads aren’t truly hedon-maximizing either. If they were, they’d eat and drink enough to live as long as possible and push the button a greater total number of times.
They are hedon-maximizing, but with a very short time horizon of a few seconds.
If we take the time horizon to be as long as possible, then hedon-maximizing implies first researching the technology for medical immortality, then building an army of self-maintaining robot caretakers, and only then starting to hit the wirehead switch.
Of course this is all tongue in cheek. I realize that wireheads (at today’s level of technology) aren’t maximizing hedons; they’re broken minds. When the button stops working, they don’t stop pushing it. Adaptation executers in an induced failure mode.
It depends on your discount function: if its integral over an infinite horizon is finite (e.g. with exponential discounting), then whether you go the immortality route or just dedicate yourself to momentary bliss depends on how much effort reaching immortality takes.
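To make that concrete, here is a hedged sketch of the comparison (the hedon rate h, discount rate ρ, research time T, and natural lifespan d are my illustrative assumptions, not from the comment above). Wireheading immediately until a natural death at time d is worth

\[ U_{\text{now}} = \int_0^{d} h\, e^{-\rho t}\, dt = \frac{h}{\rho}\left(1 - e^{-\rho d}\right), \]

while spending the first T years on immortality research (earning no hedons) and wireheading forever afterwards is worth

\[ U_{\text{wait}} = \int_T^{\infty} h\, e^{-\rho t}\, dt = \frac{h}{\rho}\, e^{-\rho T}. \]

Waiting wins exactly when \( e^{-\rho T} > 1 - e^{-\rho d} \); since the entire discounted future is bounded by h/ρ, a long enough research delay T makes momentary bliss the better bet, which is the point about finite integrals.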