There is one way that I know of to handle this; I don’t know if you’ll find it satisfactory or not, but it’s the best I’ve found so far. You can go slightly meta and evaluate desires as means instead of as ends, and ask which desires are most useful to have.
Of course, this raises the question “Useful for what?” Well, one thing desires can be useful for is fulfilling other desires. If I desire that people don’t drown, and I act on that desire by saving people from drowning so they can go on to fulfill whatever desires they happen to have, then my desire that people don’t drown is a useful means for fulfilling other desires. Wanting to stop fake drownings isn’t as useful a desire as wanting to stop actual drownings. And there does seem to be a more-or-less natural reference point against which to evaluate a set of desires: the set of all other desires that actually exist in the real world.
As luck would have it, this method of evaluating desires tends to work tolerably well. For example, the desire held by Clippy, the paperclip maximizer, to maximize the number of paperclips in the universe, doesn’t hold up very well under this standard; relatively few desires that actually exist get fulfilled by maximizing paperclips. A desire to make only the number of paperclips that other people want is a much better desire.
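The scoring idea above can be made concrete with a toy sketch. Everything here (the names, the `fulfills` predicate, the tiny world of desires) is a hypothetical illustration, not anything from the original discussion: a desire's usefulness-as-means is just a count of how many *other* actually-existing desires its pursuit helps fulfill.

```python
# Toy model (all names and structures hypothetical): evaluating desires
# as means, i.e. by how many other existing desires they help fulfill.

def usefulness(desire, all_desires):
    """Count how many *other* desires in the world this desire's pursuit fulfills."""
    return sum(1 for d in all_desires
               if d is not desire and desire["fulfills"](d))

# A tiny world of desires. Each carries a predicate saying which other
# desires its pursuit helps fulfill.
rescue = {"name": "stop real drownings",
          # saved people go on to pursue all their own desires
          "fulfills": lambda d: True}
clippy = {"name": "maximize paperclips",
          # only serves people who actually want paperclips
          "fulfills": lambda d: d["name"] == "own some paperclips"}

world = [rescue, clippy,
         {"name": "own some paperclips", "fulfills": lambda d: False},
         {"name": "write a novel",       "fulfills": lambda d: False}]

scores = {d["name"]: usefulness(d, world) for d in world}
# The rescue desire scores highest because it serves every other desire
# in the world; Clippy's desire serves only the one paperclip-wanter.
```

On this toy measure the rescue desire scores 3 (it serves all three other desires) while Clippy's scores 1, matching the intuition in the paragraph above.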
(I hope that made sense.)
It does make sense. However, what would you make of the objection that it is semi-realist? A first-order realist position would claim that what is desired has objective value, while this represents the more subtle belief that the fulfillment of desire has objective value. I do agree—it is very close to my own original realist position about value. I reasoned that there would be objective (real rather than illusory) value in the fulfillment of the desires of any sentient/valuing being, as some kind of property of their valuing.