The problem I have with considering future discounting is that it forces me to formulate a consistent personal identity across time scales longer than a few moments. I’ve never successfully managed that.
To my horror, my future self has different interests to my present self.
Can you describe ‘my future self’ without any sort of pronoun? If you could do that, the horror might, y’know, go away a bit. Thou art physics, after all.
as surely as if I knew the day a murder pill would be forced upon me.
Not quite as surely, otherwise you’d be taking steps to stop your mind changing day to day. This is exaggeration, even though the analogy works to a degree.
consider the alcoholic who moves to a town in which alcohol is not sold, anticipating a change in desires and deliberately constraining their own future self
So is ‘the alcoholic’ the optimiser that wants to keep drinking? Or the optimiser that wants to stop drinking? Or both? This article shows a tendency to view the mind as a point-particle of desire and value. It’s nothing of the sort. We all have to accept that identity is a fuzzy, shifting entity. Trusting in rationality is an early step towards resolving this problem.
Good points. I share your concern. But it’s not clear which direction rationality cuts in this case. If I have no special attachment to the “me” of one year from now, why should I sacrifice present interests for his? On the other hand, I’ve been wondering recently whether it’s possible to salvage our folk concept of identity by positing that, while “me” at T2 might not be “me” in any robust sense, (1) there will be a person (or locus of consciousness, if you will) at T2 who thinks he’s me and shares many of my memories and behavioral predispositions, and (2) that person will be disproportionately influenced by my actions today. I think it follows from ethical considerations, then, if not prudential ones, that I should act today in a way that is in keeping with my best interests, so as not to unduly harm that future person.
Now, what would really be interesting is if we discovered that the “rational” thing to do is some averaging of the two extremes—i.e., I continue to act generally in my future best interests, but also prioritize present and near-term happiness to a much greater degree than seems naively appropriate.
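One existing way to make that “averaging” concrete is hyperbolic discounting, which values near-term rewards almost like exponential discounting does but penalizes distant rewards much less steeply. A minimal sketch in Python—the rates `r` and `k` and the example values are illustrative assumptions, not anything from the comments above:

```python
# Toy comparison of exponential (time-consistent) and hyperbolic
# (present-biased) discounting. Rates r and k are arbitrary choices
# picked so the two curves agree at short delays.

def exponential_discount(value, delay, r=0.05):
    """Standard time-consistent discounting: value / (1 + r)**delay."""
    return value / (1 + r) ** delay

def hyperbolic_discount(value, delay, k=0.05):
    """Present-biased discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay)

# At a delay of 1 period, both curves value a reward of 100 at ~95.24.
print(exponential_discount(100, 1), hyperbolic_discount(100, 1))

# At a delay of 50 periods, exponential discounting crushes the reward
# (~8.72) while hyperbolic discounting keeps it at ~28.57 -- near-term
# happiness is prioritized, but the far future still counts for something.
print(exponential_discount(100, 50), hyperbolic_discount(100, 50))
```

The qualitative point is just that the two curves coincide up close and diverge far out, which is one formal shape the proposed compromise between present-focused and future-focused policies could take.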