Time Biases


Book review: Time Biases: A Theory of Rational Planning and Personal Persistence, by Meghan Sullivan.

I was very unsure about whether this book would be worth reading, as it could easily have been focused on complaints about behavior that experts have long known to be mistaken.

I was pleasantly surprised when it quickly got to some of the really hard questions, and was thoughtful about which questions deserved attention. I disagree with enough of Sullivan’s premises that I end up with significantly different conclusions. Yet her reasoning is usually good enough that I’m unsure what to make of our disagreements: they typically come down to differences of intuition that she admits are controversial.

I had hoped for some discussion of ethics (e.g. what discount rate to use in evaluating climate change), whereas the book focuses purely on prudential rationality (i.e. what’s rational for a self-interested person). Still, the discussion of prudential rationality covers most of the issues that make the ethical choices hard.

Personal identity

A key issue is the nature of personal identity—does one’s identity change over time?

Sullivan starts this discussion by comparing a single-agent model to a model with multiple distinct stages.

I initially found that part of the book confusing, since neither model seems close to mine, and she seemed to present them as the only possible models. Somewhat later in the book, I figured out that both models assume identity needs to stay constant for some stretch of time, and that in the multiple-stage model, the change in identity is discontinuous.

Sullivan prefers that kind of single-agent model, and therefore concludes that we should adopt a time-neutral stance (i.e. she rejects any time-discounting, and expects us to care as much about our distant future selves as we care about our present selves).

Sullivan eventually compares that single-agent model to some models in which a current self partially identifies with a future self, which would allow for time-discounting of how much we care about a self.

The most interesting of those models involves some notion of connectedness between current and future selves.

This feels like the obviously right approach to me. I expect it to work something like this: I consider the “me” of a decade ago, or a decade in the future, to be maybe 75% “me”. I.e. I practice time-discounting, but at a rate low enough that it might be hard to distinguish from no discounting in many contexts.
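To make that concrete, here is a back-of-the-envelope sketch (my own arithmetic, not anything from the book) of what a 75% weight on a decade-distant self implies if the weighting follows simple exponential discounting:

```python
# Back-of-the-envelope arithmetic (my illustration, not Sullivan's): if the
# weight I place on a self t years away is d**t, then d**10 = 0.75 pins down
# the implied annual discount rate.
horizon_years = 10
weight_at_horizon = 0.75  # how much "me" a decade-distant self feels like

annual_factor = weight_at_horizon ** (1 / horizon_years)
annual_rate = 1 - annual_factor
print(f"annual discount factor: {annual_factor:.4f}")  # ~0.9716
print(f"annual discount rate:   {annual_rate:.2%}")    # ~2.84%

# Hard to distinguish from no discounting over short horizons, but noticeable
# over long ones:
print(f"weight on a 1-year-distant self:  {annual_factor ** 1:.3f}")   # ~0.972
print(f"weight on a 30-year-distant self: {annual_factor ** 30:.3f}")  # ~0.422
```

A rate that low barely matters over a year or two, which is part of why time-neutrality can look like a good approximation in everyday contexts.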

Sullivan looks at some possible measures of connectedness, and convinces me that they don’t explain normal attitudes toward our future selves.

I suggest that she’s too quick to give up on her search for connectedness measures. I think what causes me to care about my future self resembles what causes me to feel connected to friends, so a good measure should include something like the extent to which we expect to interact. That includes some measure of how much my present self interacts with my future self. I don’t have anything resembling a complete model of how such connectedness would work, but I’m fairly confident that people do use some sort of connectedness model for deciding how much to identify with future selves.

Sullivan argues that concern-based discounting of future selves would require changes in how we evaluate our age (e.g. if my two-year-old self wasn’t connected enough with my current self, then there’s something wrong with counting my age from birth). I agree that it implies there’s something imperfect about standard answers to “how old am I?”, but it seems perfectly reasonable to say we want easily checkable answers to how old we are, even if that discards some relevant information. (“All models are wrong, but some are useful”).

If Sullivan can misunderstand our models of how old we are, then it’s easy to imagine that she’s made a similar mistake with how much we should care about existence over time—i.e. Sullivan sees that it’s useful to treat people as time-neutral for the purpose of, say, signing contracts; whereas for other purposes (e.g. do people save enough for retirement?), I see people caring less about their future selves. It seems likely that time-neutrality is a good enough approximation for most of our activities, which rarely require looking more than a decade into the future. Yet that doesn’t say much about how to model substantially longer time periods.

It’s more common for people to discount the future too much than too little, so it’s often wise to encourage the use of models with less discounting. That creates some temptation to go all the way to zero discounting, and doing so may be a good heuristic in some cases. But it produces weird results that I mention below.

Caring about the past

Sullivan devotes a substantial part of the book to arguing that it’s meaningful to have preferences about the past, and that rationality prevents us from discounting the past.

An example: people who sue over an accident that caused them pain ask for more compensation when they’re asked while still in pain than when they’re asked after the pain is in the past.

Sullivan provides other examples in the form of weird thought experiments about painful surgery followed by amnesia, where I might be left uncertain about whether the pain is in my past or my future.

No single example is convincing by itself, but together they suggest a real pattern of something suspicious going on.

Sullivan convinced me that preferences about the past mean something, and that there’s something suspicious about discounting the past, unless it follows the same discounting rules that apply to the future.

But I don’t find it helpful to frame problems in terms of preferences about the past. Timeless decision theory (TDT) seems to provide a more natural and comprehensive solution to these kinds of problems, and assumes time-neutrality, or a discount rate that’s low enough to approximate time-neutrality over the relevant time periods. (I’m unsure whether TDT is understood much beyond the rather specialized community that cares about it).

Sullivan’s approach seems to leave her puzzled about Kavka’s toxin puzzle. Something in her premises appears to imply that we can’t become the kind of person who follows TDT (i.e. who follows through on a commitment to drink the toxin), but I haven’t quite pinned down which premise(s) lead her there, and it leaves her in a position that looks time-biased. Specifically, she seems to imply that there’s something irrational about making a time-neutral decision to drink the toxin. Yet in other parts of the book, she says that our decisions shouldn’t depend on what time we happen to find ourselves located at, assuming all other evidence available to us remains the same.

Sullivan is careful to distinguish care for the past from the sunk cost fallacy. One relevant question is: did I get new info which would have altered my prior self’s decision? Another approach is to imagine what an impartial observer would say—that helps make it irrelevant whether an event is in my past or my future. I probably missed some important nuance that Sullivan describes, but those seem like the important distinctions to make.

Afterlives

How should we think about really long lifespans?

Sullivan uses a strange (but harmless?) concept of afterlife, which includes any time experienced after an “irreversible radical change”, e.g. heaven, dementia, or quadriplegia.

A key claim (which I’ll abbreviate as SALE) is:

The Simple Approach for Life Extension: it is rationally permissible (and perhaps rationally obligatory) to prefer some potential future state of affairs A over some potential future state of affairs B if A has greater aggregate expected well-being than B.

I reject the obligatory version of this claim, because I discount future well-being. If I adopted the time-neutrality that Sullivan endorses, I’d endorse the obligatory version of SALE.

Sullivan is uncomfortable with SALE, because it leads to the “single life repugnant conclusion”: that a sufficiently long life of slightly positive well-being is better than a shorter life with much higher well-being per year. She doesn’t describe a good reason for a time-neutral person to find this repugnant, so I’m inclined to guess that she (and most people) find it repugnant due to a (possibly unconscious) preference for time discounting.

My reluctance to endorse the single life repugnant conclusion seems to be purely a function of my caring more about near-future versions of myself than about distant-future versions.
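A toy calculation (again my own, with arbitrary numbers, not something Sullivan offers) shows how that works: under exponential discounting, the total value of an arbitrarily long life is bounded, so piling on barely-positive years eventually stops helping, whereas under time-neutrality the sum grows without limit.

```python
# Toy comparison (my framing, not Sullivan's; the numbers are arbitrary):
# total well-being of a life, with and without exponential time-discounting.

def total_value(per_year, years, annual_rate=0.0):
    d = 1 - annual_rate
    if d == 1.0:
        return per_year * years                    # time-neutral sum: grows without bound
    return per_year * (1 - d ** years) / (1 - d)   # discounted sum: bounded by per_year / (1 - d)

long_drab  = dict(per_year=0.001, years=10_000_000)  # barely worth living, but very long
short_good = dict(per_year=10.0,  years=80)           # much better, but short

for rate in (0.0, 0.028):
    print(rate,
          round(total_value(**long_drab, annual_rate=rate), 3),
          round(total_value(**short_good, annual_rate=rate), 3))
# rate 0.0:   the long drab life wins, 10000 vs 800 (the repugnant conclusion).
# rate 0.028: the long drab life is capped near 0.001 / 0.028 (about 0.036),
#             and loses to roughly 320 for the short good life.
```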

Sullivan doesn’t seem to identify anything repugnant about the conclusion beyond the fact that it feels wrong. So I’m left guessing that her intuitions of repugnance result from intuitions that distant future selves are less important.

She can, of course, believe that those intuitions are irrational, and that she’s trying to overcome them. But then why doesn’t she also try to overcome the feelings of repugnance about the single life repugnant conclusion?

Sullivan describes another consequence of SALE, asking us to imagine that hell has enough breaks in the torture that it yields a positive average balance of well-being, for eternity. It looks to me like she’s trying to prejudice our intuitions with words that suggest a life worse than non-existence, and then contradicting that impression when she gets more specific. I don’t see what the substance of her objection is here.

It seems my main disagreement with Sullivan boils down to the choice between:

A) agreeing that we will care less about distant future versions of ourselves than near future versions (i.e. accepting something like time-discounting),

or

B) running into repeated conflicts between our intuitions and our logic when making decisions about time-distant versions of our selves.

Either choice appears to produce ordinary results when dealing with relatively familiar time periods, and strange results when applied to unusually long time periods. Choice A seems a bit simpler and clearer.

Conclusion

The book is mostly clear and well-reasoned, with a few important exceptions that I’ve noted, and those exceptions are about topics that seem to confuse most philosophers who tackle them.

I expect that many people will learn something important from the book, but for me it mostly just reinforced my existing beliefs.
