This discussion has made me feel I don’t understand what “utilon” really means.
I agree that the OP is somewhat ambiguous on this. For my own part, I distinguish between at least the following four categories of things-that-people-might-call-a-utility-function. Each involves a mapping from world histories into the reals according to:
1. how the history affects our mind/emotional states;
2. how we value the history from a self-regarding perspective (“for our own sake”);
3. how we value the history from an impartial (moral) perspective; or
4. the choices we would actually make between different world histories (or gambles over world histories).
Hedons are clearly the output of the first mapping. My best guess is that the OP is defining utilons as something like the output of 3, but it may be a broader definition that could also encompass the output of 2, or it could be 4 instead.
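To make those four mappings concrete, here is a loose formalization (the notation is mine, not the OP’s): write $\mathcal{H}$ for the set of world histories; then each category is a function of the same type,

$$U_i : \mathcal{H} \to \mathbb{R}, \qquad i \in \{1, 2, 3, 4\},$$

where $U_1(h)$ scores history $h$ by its effect on our emotional states (hedons), $U_2(h)$ scores it self-regardingly, $U_3(h)$ impartially (utilons, on my best-guess reading), and $U_4$ is whatever function, if any, rationalizes our actual choices, e.g. we pick gamble $g$ over $g'$ iff $\mathbb{E}_g[U_4] \ge \mathbb{E}_{g'}[U_4]$.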
I guess that part of the point of rationality is to get the output of 4 to correspond more closely to the output of either 2 or 3 (or maybe something in between): that is, to help us act in greater accordance with our values, in either the self-regarding or impartial sense of the term.
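In that notation (again, my gloss rather than anything the OP commits to), the aim would be agreement at least in ordering:

$$U_4(h) \ge U_4(h') \iff U_i(h) \ge U_i(h') \quad \text{for } i = 2 \text{ or } i = 3,$$

since for guiding choices a utility function only matters up to ordering (or up to positive affine transformation once gambles enter).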
“Values” are still a bit of a black box here though, and it’s not entirely clear how to cash them out. I don’t think we want to reduce them either to actual choices or simply to stated values. Believed values might come closer, but I think we probably still want to allow that we could be mistaken about them.
What’s the difference between 1 and 2? If we’re being selfish, then surely we just want to experience the most pleasurable emotional states. I would read “values” as an individual strategy for achieving this. Then, being unselfish is valuing the emotional states of everyone equally… so long as they are capable of experiencing equally pleasurable emotions, which may be untestable.
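On that reading (a sketch, and one that assumes interpersonal comparability, which is exactly the untestable part), the impartial mapping would just weight everyone’s hedonic mapping equally:

$$U_3(h) = \sum_{j \in \text{persons}} U_1^{(j)}(h),$$

where $U_1^{(j)}$ is person $j$’s version of mapping 1. The “equally pleasurable emotions” worry is then the worry that the $U_1^{(j)}$ are not on a common scale.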
Note: I just re-read the OP, and I’m thinking about integrating instantaneous hedons/utilons over time and then maximising the integral, which the OP didn’t seem to do.
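Spelled out (my formalization of that note, not something the OP writes): let $u(h, t)$ be the instantaneous hedon/utilon rate at time $t$ under history $h$; then

$$U(h) = \int_{t_0}^{t_1} u(h, t)\,\mathrm{d}t, \qquad h^{*} = \operatorname*{arg\,max}_{h \in \mathcal{H}} U(h),$$

i.e. score each history by the time-integral of its instantaneous value and choose whatever maximises that integral.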
We can value more than just our emotional states. The experience machine is the classic thought experiment designed to demonstrate this. Another example that was discussed a lot here recently was the possibility that we could value not being deceived.