Well, you said that the disagreement between you and Bob comes down to a choice of terminal goals, and thus it’s pointless for you to try to persuade Bob and vice versa. I am trying to figure out which goals are in conflict. I suspect that you care about what happens after you die because doing so helps advance some other goal, not because that’s a goal in and of itself (though I could be wrong).
By analogy, a paperclip maximizer would care about securing large quantities of nickel not because it merely loves nickel, but because doing so would allow it to create more paperclips, which is its terminal goal.
Your guessed model of my morality breaks causality. I’m pretty sure that’s not a feature of my preferences.
That rhymes, but I’m not sure what it means.
How could I care about things that happen after I die only instrumentally, as a way of affecting things that happen before I die?
I don’t know about you personally, but consider a paperclip maximizer. It cares about paperclips; its terminal goal is to maximize the number of paperclips in the Universe. If this agent is mortal, it would absolutely care about what happens after its death: it would want the number of paperclips in the Universe to continue to increase. It would pursue various strategies to ensure this outcome, while simultaneously trying to produce as many paperclips as possible during its lifetime.
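To make that concrete with a toy formalization (my own illustrative sketch, not anything canonical about the thought experiment): the maximizer’s terminal value might be written as the total paperclip-existence over all time,

$$U \;=\; \sum_{t=0}^{\infty} N_{\text{paperclips}}(t),$$

where $N_{\text{paperclips}}(t)$ counts the paperclips existing at time $t$. Nothing in $U$ refers to the agent’s own death date, so paperclips that exist after it dies count toward its terminal goal directly, not as a means to anything that happens while it is alive.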
But that’s quite directly caring about what happens after you die. How is this supposedly not caring about what happens after you die except instrumentally?