I am interested in the discussion, so I am going to roleplay such a person. I’ll call him “Bob”.
Bob does not intend to have children, for a variety of reasons. He understands that some people do want children, and, while he believes that they are wrong, he does agree that wills are sensible tools to employ once a person commits to having children.
Bob wants to maximize his own utility. He recognizes that certain actions give him “warm fuzzies”; but he also understands that his brain is full of biases, and that not all actions that produce “warm fuzzies” are in his long-term interest. Bob has been working diligently to eradicate as many of his biases as is reasonably practical.
So, please convince Bob that caring about what happens after he’s dead is important.
If Bob really doesn’t care, then there’s not much to say. I mean, who am I to tell Bob what Bob should want? That said, I may be able to explain to Bob why I care, and he might accept or at least understand my reasoning. Would that satisfy?
I think it would. Bob wants to want the things that will make him better off in the long run. This is why, for example, Bob trained himself to resist the urge to eat fatty/sugary foods. As a result, he is now much healthier (not to mention leaner) than he used to be, and he doesn’t even enjoy the taste of ice cream as much as he did. In the process, he also learned to enjoy physical exercise. He’s also planning to apply polyhacking to himself, for reasons of emotional rather than physical health.
So, if you could demonstrate to Bob that caring about what happens after he’s dead is in any way beneficial, he would strive to train himself to do so—as long as doing so does not conflict with his terminal goals, of course.
Well, that’s the thing. It’s a choice of terminal goals. If we hold those fixed, then we have nothing left to talk about.
Are you saying that caring about what happens after your death is a terminal goal for you? That doesn’t sound right.
I’m not sure what you mean. If I were able to construct a utility function for myself, it would have dependence on my projections of what happens after I die.
It is not my goal to have this sort of utility function.
Well, you said that the disagreement between you and Bob comes down to a choice of terminal goals, and thus it’s pointless for you to try to persuade Bob and vice versa. I am trying to figure out which goals are in conflict. I suspect that you care about what happens after you die because doing so helps advance some other goal, not because that’s a goal in and of itself (though I could be wrong).
By analogy, a paperclip maximizer would care about securing large quantities of nickel not because it merely loves nickel, but because doing so would allow it to create more paperclips, which is its terminal goal.
Your guessed model of my morality breaks causality. I’m pretty sure that’s not a feature of my preferences.
That rhymes, but I’m not sure what it means.
How could I care about things that happen after I die only as instrumental values so as to affect things that happen before I die?
I don’t know about you personally, but consider a paperclip maximizer. It cares about paperclips; its terminal goal is to maximize the number of paperclips in the Universe. If this agent is mortal, it would absolutely care about what happens after its death: it would want the number of paperclips in the Universe to continue to increase. It would pursue various strategies to ensure this outcome, while simultaneously trying to produce as many paperclips as possible during its lifetime.
But that’s quite directly caring about what happens after you die. How is this supposedly not caring about what happens after you die except instrumentally?