Thanks for these thoughts, Kaj.
It’s a worthwhile effort to overcome this problem, but let me offer one way of criticising it. A lot of people are not going to want the principles of rationality to be contingent on how long you expect to live. There are a bunch of reasons for this. One is that how long you expect to live might not be well-defined. In particular, some people will want to say that there’s no right answer to the question of whether you become a new person each time you wake up in the morning, or each time some of your brain cells die. On the other extreme, it might be the case that to a significant degree, some of your life continues after your heart stops beating, either through your ideas living on in others’ minds, or by freezing yourself. If you freeze yourself for 1000 years, then wake up again for another hundred, should the frozen years be included in defining PESTs, or not? It seems weird that rationality should be dependent on how we formalise the philosophy of identity in the real world. Why should PESTs be defined based on how long you expect to live, rather than on how long you expect humanity as a whole to live, or on the expected lifetime of anything else that you might care about?
Anyhow, despite my criticism, this is an interesting answer—cheers for writing this up.
Thanks!
I understand that line of reasoning, but to me it feels similar to the philosophy where one thinks that the principles of rationality should be totally objective and shouldn’t involve things like subjective probabilities, so then one settles on a frequentist interpretation of probability and tries to get rid of subjective (Bayesian) probabilities entirely. Which doesn’t really work in the real world.
One is that how long you expect to live might not be well-defined. In particular, some people will want to say that there’s no right answer to the question of whether you become a new person each time you wake up in the morning, or each time some of your brain cells die.
But most people already base their reasoning on an assumption of being the same person tomorrow; if you seriously start making your EU calculations based on the assumption that you’re only going to live for one day or for an even shorter period, lots of things are going to get weird and broken, even without my approach.
It seems weird that rationality should be dependent on how we formalise the philosophy of identity in the real world. Why should PESTs be defined based on how long you expect to live, rather than on how long you expect humanity as a whole to live, or on the expected lifetime of anything else that you might care about?
It doesn’t seem all that weird to me; rationality has always been a tool for us to best achieve the things we care about, so its exact form will always be dependent on the things that we care about. The kinds of deals we’re willing to consider already depend on how long we expect to live. For example, if you offered me a deal that had a 99% chance of killing me on the spot and a 1% chance of giving me an extra 20 years of healthy life, the rational answer would be to say “no” if it was offered to me now, but “yes” if it was offered to me when I was on my deathbed.
If you say “rationality is dependent on how we formalize the philosophy of identity in the real world”, it does sound counter-intuitive, but if you say “you shouldn’t make deals that you never expect to be around to benefit from”, it doesn’t sound quite so weird anymore. If you expected to die in 10 years, you wouldn’t make a deal that would give you lots of money in 30. (Of course it could still be rational if someone else you cared about would get the money after your death, but let’s assume that you could only collect the payoff personally.)
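To make the arithmetic behind that deathbed example concrete, here is a minimal sketch (with hypothetical numbers for remaining lifespan, and treating expected remaining life-years as the thing being maximized) of why the very same gamble flips from irrational to rational:

```python
# A minimal sketch with hypothetical numbers: the same 99%/1% gamble evaluated
# in terms of expected remaining life-years, now vs. on one's deathbed.

def expected_years_if_accept(remaining_years, bonus_years=20.0, p_death=0.99):
    """99% chance of dying on the spot, 1% chance of the bonus on top of what's left."""
    return p_death * 0.0 + (1.0 - p_death) * (remaining_years + bonus_years)

for label, remaining in [("now, with ~50 years left", 50.0),
                         ("on the deathbed, with ~0.01 years left", 0.01)]:
    accept = expected_years_if_accept(remaining)
    decline = remaining
    choice = "accept" if accept > decline else "decline"
    print(f"{label}: accept = {accept:.2f} years, decline = {decline:.2f} years -> {choice}")
```

With fifty years left, the gamble throws away almost all of your expected lifespan; with days left, it is nearly pure upside.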
Using subjective information within a decision-making framework seems fine. The troublesome part is that the idea of ‘lifespan’ is being used to create the framework.
Making practical decisions based on how long I expect to live seems fine and normal currently. If I want an ice cream tomorrow, that’s not contingent on whether ‘tomorrow-me’ is the same person as I was today or a different one. My lifespan is uncertain, and a lot of my values might be fulfilled after it ends. Weirdnesses like the possibility of being a Boltzmann brain are tricky, but at least they don’t interfere with the machinery/principles of rationality—I can still do an expected value calculation. Weirdness on the object level I can deal with.
Allowing ‘lifespan’ to introduce weirdness into the decision-making framework itself seems less nice. Now, whether my frozen life counts as ‘being alive’ is extra important. Things like being frozen for a long time, or lots of Boltzmann brains existing, could interfere with what risks I should be willing to accept on this planet, and that’s a puzzle that would require resolution.
Using subjective information within a decision-making framework seems fine. The troublesome part is that the idea of ‘lifespan’ is being used to create the framework.
I’m not sure that the within/outside the framework distinction is meaningful. I feel like the expected lifetime component is also just another variable that you plug into the framework, similarly to your probabilities and values (and your general world-model, assuming that the probabilities don’t come out of nowhere). The rational course of action already depends on the state of the world, and your expected remaining lifetime is a part of the state of the world.
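As a rough illustration of lifetime being just another input (hypothetical numbers, and not necessarily the exact rule from the original post), here is one way a lifespan-dependent “ignore it” threshold could be computed:

```python
# A rough sketch, assuming the threshold is "ignore probabilities so small that the
# payoff isn't expected to occur even once across all similar bets you expect to face
# in your remaining lifetime". Numbers are made up for illustration.

def pest_threshold(remaining_years: float, similar_bets_per_year: float) -> float:
    """Expected remaining lifetime enters the calculation like any other variable."""
    expected_bets = remaining_years * similar_bets_per_year
    return 1.0 / expected_bets

# Example: 40 years left, facing roughly 100 comparable offers per year.
threshold = pest_threshold(remaining_years=40.0, similar_bets_per_year=100.0)
print(f"ignore probabilities below ~{threshold:.2e}")  # ~2.5e-04
```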
I also actually feel that the fact that we’re forced to think about our lifetime is a good sign. EU maximization is a tool for getting what we want, and Pascal’s Mugging is a scenario where it causes us to do things that don’t get us what we want. If a potential answer to PM reveals that EU maximization is broken because it doesn’t properly take into account everything that we want, and forces us to consider previously-swept-under-the-rug questions about what we do want… then that seems like a sign that the proposed answer is on the right track.
I have this intuition that I’m having slight difficulties putting into words… but roughly, EU maximization is a rule of how to behave in different situations, which abstracts over the details of those situations while being ultimately derived from them. I feel that attempts to resolve Pascal’s Mugging on purely “rational” grounds are mostly about trying to follow a certain aesthetic that favors deriving things from purely logical considerations and a priori principles. And that aesthetic necessarily ends up treating EU maximization as just a formal rule, neglecting to consider the actual situations it abstracts over, and loses sight of the actual purpose of the rule, which is to give us good outcomes. If you forget about trying to follow the aesthetic and look at the actual behavior that something like PEST leads to, you’ll see that the agents who ignore PESTs are the ones who actually end up winning… which is the thing that should really matter.
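As a rough illustration of that “who actually ends up winning” point, here is a small Monte Carlo sketch (all numbers hypothetical, and a deliberately simplified threshold rule) of repeated mugging offers, comparing an agent that takes every positive-EU offer with one that treats sub-threshold probabilities as zero:

```python
import random

# Hypothetical mugging: pay a small cost for a huge payoff with a tiny claimed probability.
COST = 5.0
PAYOFF = 1e15
P_PAYOFF = 1e-10          # per-offer EU of accepting is 1e15 * 1e-10 - 5 = +99,995
THRESHOLD = 1e-8          # probabilities below this are treated as zero (ignored as PESTs)
OFFERS_PER_LIFE = 10_000
LIVES = 500

def average_final_wealth(accepts_offers: bool) -> float:
    """Average change in wealth over many simulated lifetimes of repeated offers."""
    total = 0.0
    for _ in range(LIVES):
        wealth = 0.0
        if accepts_offers:
            for _ in range(OFFERS_PER_LIFE):
                wealth -= COST
                if random.random() < P_PAYOFF:
                    wealth += PAYOFF
        total += wealth
    return total / LIVES

plain_eu = average_final_wealth(accepts_offers=True)               # accepts every offer
pest = average_final_wealth(accepts_offers=P_PAYOFF >= THRESHOLD)  # refuses: 1e-10 < 1e-8
print(f"plain EU maximizer: {plain_eu:,.0f}  vs  threshold agent: {pest:,.0f}")
```

In virtually every run the promised payoff never arrives across the millions of accepted offers, so the plain EU maximizer averages around -50,000 per lifetime while the threshold agent stays at 0.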
If I want an ice cream tomorrow, that’s not contingent on whether ‘tomorrow-me’ is the same person as I was today or a different one.
Really? If I really only cared about tomorrow!me as much as I cared about some random stranger, my behavior would be a lot different. I wouldn’t bother with any long-term plans, for one.
Of course, even people who believe that they’ll be another person tomorrow still mostly act the same as everyone else. One explanation would be that their implicit behavior doesn’t match their explicit beliefs… but even if it did match, there would still be a rational case for caring about their future self more than they cared about random strangers, because the future self would have values more similar to theirs than a random stranger would. In particular, their future self would be likely to follow the same decision algorithm as they did.
So if they cared about things that happened after their death, it would be reasonable for them to still behave as if they expected their total lifetime to be the same as under a more traditional theory of personal identity, and this is the case regardless of whether we’re talking about traditional EU maximization or PEST.