I think this sidesteps the underlying intuitions too quickly. We have cognitive mechanisms to predict “our next experience,” memories of this algorithm working well, and preferences defined in terms of “our next experience.” If we become convinced by the data that this model of a unique thread of experience is false, we then have problems in translating preferences defined in terms of that false model. We don’t start with total utilitarian-like preferences over the fates of our future copies (e.g., most people aren’t eager to drastically lower their standard of living so as to be copied many times, with the copies also having low standards of living), and one needs to explain why we should translate our naive intuitions into the additive framework rather than something more like averaging.
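The contrast between the additive framework and averaging can be made concrete with a toy calculation. All numbers here are hypothetical, chosen only to illustrate how the two aggregation rules can rank the copying scenario differently:

```python
# Toy comparison of "total" vs "average" aggregation over copies,
# for the scenario above: stay single at high welfare, or be copied
# many times at a much lower standard of living per copy.

def total_value(copies, welfare_per_copy):
    """Total-utilitarian-style valuation: sum welfare across all copies."""
    return copies * welfare_per_copy

def average_value(copies, welfare_per_copy):
    """Averaging valuation: per-copy welfare, regardless of how many copies."""
    return welfare_per_copy

single = (1, 10)    # one copy at high welfare (hypothetical numbers)
many = (100, 2)     # a hundred copies, each much worse off

# The additive rule prefers the many-copies world (200 vs 10)...
assert total_value(*many) > total_value(*single)
# ...while averaging prefers staying single (2 vs 10), which is closer
# to the naive intuition described above.
assert average_value(*many) < average_value(*single)
```

This is only a sketch of the two aggregation rules being contrasted; it takes no position on which translation of the naive intuitions is correct.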
I think this sidesteps the underlying intuitions too quickly.
I think you are right. I also seem not to have conveyed quite the same position as the one I intended. That is:
Quantum Suicide is not something that you “Believe In” but rather a preference that, in all worlds in which you don’t win, you are killed.
This is a valid, coherent, and not intrinsically irrational goal.
You don’t get more “winningness” by killing yourself.
The Everett branches in which you are killed are just as real as the ones where you are alive. They are not trimmed from reality.
These are the points I have found myself wishing I had a post to link to when I have been asked to explain my position. Going on to explain in detail why I have the preferences I have would open up another post or three’s worth of discussion about whether existence in more branches is equivalent to copies, and a bunch of related philosophical questions like those you allude to.
We have … preferences in terms of “our next experience.” If we become convinced by the data that this model of a unique thread of experience is false, we then have problems in translating preferences defined in terms of that false model.
In what sense would I want to translate these preferences? Why wouldn’t I just discard the preferences, and use the mind that came up with them to generate entirely new preferences in the light of its new, improved world-model? If I’m asking myself, as if for the first time, the question, “if there are going to be a lot of me-like things, how many me-like things with how good lives would be how valuable?”, then the answer my brain gives is that it wants to use empathy and population ethics-type reasoning to answer that question, and that it feels no need to ever refer to “unique next experience” thinking. Is it making a mistake?
In what sense would I want to translate these preferences?
I think in the sense that the new world-model ought to add up to normality. The move you propose probably only works (i.e., is intuitively acceptable) for someone who already has a strong intuition that they ought to apply empathy and population ethics-type reasoning to all decisions, not just those that only affect other people. For others who don’t share such an intuition, switching from “unique thread of experience” to empathy and population ethics-type reasoning would imply making radically different decisions, even for current real-world (i.e., not thought-experiment) decisions, like whether to donate most of their money to charity: the former says “no” while the latter says “yes,” since the difference in empathy-level between “someone like me” and “a random human” isn’t that great.
That’s one approach to take, with various attractive features, but in that case one needs to be careful when thinking about thought experiments like those Wei Dai offers (which implicitly call on the thread model).
Well, assuming that you generally don’t want to die, quantum suicide is irrational (it violates the independence of irrelevant alternatives). The extent to which we should do irrational things because we want to is definitely something to think about, but I think it’s also alright to just say “it’s irrational and that’s bad.”
What makes you think a mind came up with them?
I don’t understand what point you’re making; could you expand?
You can’t use the mind that came up with your preferences if no such mind exists. That’s my point.
What would have come up with them instead?
Evolution.
In the sense that evolution came up with my mind, or in some more direct sense?