I suggest you perform the following thought experiment. Imagine that you(t+1) is not the same person as you(t) for any time t. Every moment you die. The “you” of a moment ago died, being replaced by you (of now). You are going to die within a moment, being replaced by the “you” of a moment from now.
In that reality, would you still have non-trivial preferences? If the answer is “yes”, consider adopting those as your actual preferences. Suddenly you won’t need answers to (IMO) meaningless questions anymore.
The problem here is that I am not convinced that this is just a thought experiment—it looks like something that might be more or less true.
And yes, I have non-death-related preferences, but those are far less important to me.
Seems like a non-central fallacy. The usual definition of death assumes a permanent (and irreversible) cessation of all detectable signs of life. Your definition of death would have none of that. Feel free to clarify your definition, but be aware that it is non-standard and so you are better off picking a different word for it.
You might also find A Human’s Guide to Words of some use.
The definition I was going with was ‘ceasing to exist’, or, if you are referring to something in the post, then the more accurate definition there is probably something along the lines of ‘no longer having subjective experiences’.
How do you know if something ceases to exist if there are no outward signs of this? What measurable and testable definition of “ceasing to exist” do you use? If “ceasing to exist” is only in your mind, how is it different from being afraid of a monster under your bed?
My first reaction is that if I consider the person at t+1 to be someone different, these are the reactions that make sense:
a) selfish behavior, including selfishness toward the future me. For example, when I am in the shop, I would take the tastiest thing and start eating it, because I want some pleasure now, and I don’t care about the future person getting in trouble.
b) altruistic behavior, but treating the future me as completely equal to any future someone-else. For example, I would donate all my money to someone else if I thought they needed it just a little more than me, because I am simply choosing between two strangers.
c) some mix of the former two behaviors.
The important thing here is that the last option doesn’t add up to normality. My current behavior is partially selfish and partially altruistic, but it is not a linear combination of the first two options: both of them care about the future me exactly as much as about a future someone-else, but my current behavior doesn’t.
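A minimal way to write the linear-combination point down, assuming a simple additive utility model (my notation, not anything from the comments above):

$$ U_{\text{mix}} \;=\; \alpha\, u(\text{present me}) \;+\; (1-\alpha) \sum_{p \,\in\, \text{future people}} u(p), \qquad 0 \le \alpha \le 1. $$

Every future person, including future-me, enters with the same coefficient 1 − α, so no choice of α reproduces a preference that weights future-me above a future stranger.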
A possible way to fix this is to assume that I care equally about future-anyone intrinsically, but I care more about future-me instrumentally. What I do now has a larger impact on what my future-me will do than on what a future someone-else will do, especially because by “doing” in this context I also mean things like “adopting beliefs” etc. Simply said: I am a thousand times more efficient at programming the future-me than at programming a future someone-else, so my paths to creating more utility in the future naturally mostly go through my future self. However, this whole paragraph smells like a rationalization for a given bottom line.
For me, the obvious answer is b. This is the answer for all forms of consequentialism which treat all people symmetrically, e.g. utilitarianism. However, you can adopt the “personal identity isn’t real” viewpoint and still prefer people who are similar to yourself (e.g. future you).