The only reason we need happiness or pleasure is so that we are motivated to seek out things that would help us or things that matter to us.
That may be the only reason we evolved happiness or pleasure, but we don’t have to care about what evolution optimized for, when designing a utopia. We’re allowed to value happiness for its own sake. See Adaptation-Executers, not Fitness-Maximizers.
If we reached all possible goals, and ran out of possible goals to strive for, what do we do then?
Worthwhile goals are finite, so it’s true we might run out of goals someday and, from then on, be bored. But it doesn’t frighten me too much because:
We’re not going to run out of goals as soon as we create an AI that can achieve them for us; we can always tell it to let us solve some things on our own, if it’s more fun that way.
The space of worthwhile goals is still ridiculously big. To live a life where I accomplish literally everything I want to accomplish is good enough for me, even if that life can’t be literally infinite.* Plus, I’m somewhat open to the idea of deleting memories/experience in order to experience the same thing again.
There are other fun things to do that don’t involve achieving goals, and that aren’t used up when you do them.
*Actually, I am a little worried about a situation where the stronger and more competent I get, the quicker I run out of life to live… but I’m sure we’ll work that out somehow.
I know it says on this very site that perfectionism is one of the twelve virtues of rationality, but then it says that the goal of perfection is impossible to reach. That doesn’t make sense to me. If the goal you are trying to reach is unattainable, then why attempt to attain it?
I guess technically the real goal is to be “close to perfection”, as close as possible. We pretend that the goal is “perfection” for ease of communication, and because (as imperfect humans) we can sometimes trick ourselves into achieving more by setting our goals higher than what’s really possible.