Doesn’t many-worlds solve this neatly? Thinking of it as 99.9999999% of the mes sacrificing ourselves so that the other 0.0000001% can live a ridiculously long time makes sense to me. The problem comes when you favor this-you over all the other instances of yourself.
Or maybe there’s a reason I stay away from this kind of thing.
There’s an easier solution to the posed problem if you assume MWI. (Has anyone else suggested this solution? It seems too obvious to me.)
Suppose you are offered, and accept, a deal where 99 out of 100 yous die, and the survivor gets 1000x his lifetime’s worth of computational resources. All the survivor has to do is agree to simulate the 99 losers (and, obviously, run himself) for a cost of 100 units, yielding a net profit of 900 units.
(Substitute units as necessary for each ever more extreme deal Omega offers.)
No version of yourself loses—each lives—and one gains enormously. So isn’t accepting Omega’s offers, as long as each one is a net profit as described, a Pareto-improving situation? Knowing this is true at each step, why would one then act like Eliezer and pay a penny to welsh on the entire thing?
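(A toy tally of that arithmetic, using only the illustrative numbers from the comment above, nothing from Eliezer’s original problem: each accepted deal grants 1000 lifetime-units of computation, of which 100 go to re-running all 100 selves, so every self stays alive and the surviving line banks 900 units per deal.)

```python
# Toy sketch of the proposed bargain; the unit sizes are the comment's
# illustrative numbers, not anything from the original problem.

DEAL_PAYOUT = 1000   # lifetime-units of computation granted to the survivor
SELVES = 100         # branch-selves entering each deal
SIM_COST_EACH = 1    # lifetime-units needed to (re)run one self

def net_profit_per_deal() -> int:
    """Units left over after the survivor simulates all 100 selves."""
    return DEAL_PAYOUT - SELVES * SIM_COST_EACH

resources = 0
for round_number in range(1, 4):   # accept three successive offers
    resources += net_profit_per_deal()
    print(f"after deal {round_number}: {resources} spare lifetime-units")
# Every self ends up simulated (alive), and the surviving line is 900
# units richer per deal: the claimed Pareto improvement.
```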
I was thinking of this the other day...
Suppose that a scientist approached you and wanted to pay you $1000 to play the role of Schrödinger’s cat in an open-mic-night stage performance he’s putting together. Take as given that the trigger for the vial of poison will result in a many-worlds timeline split(1); the poison is painless and instantaneous(2); and there is nobody left in the world who would be hurt by your death (no close friends or family). You can continue performing, for $1000 a night, for as long as you want.
Personally I can’t think of a reason not to do this.
(1) I’m 83% confident that I said something stupid about Many Worlds there.
(2) No drowning or pain for your other self like in The Prestige.
and there is nobody left in the world who would be hurt by your death (no close friends or family)

That’s actually an extremely strong precondition. People in modern society play positive-sum games all the time; most interactions where people exchange one good or service for another (such as selling their time or buying a material object for money) leave both participants better off.
A productive member of society killing themselves—even if they have no friends and are unlikely to make any—leaves the average surviving member of that society worse off. Many unproductive members of society (politicians come to mind) could probably become productive if they really wanted to; throwing your life away in some branches is still a waste.
None of this applies if you’re a perfect egoist, of course.
The opportunity cost of dying is the utility you could be netting by remaining alive. Unless you only value the rest of your life at less than $1000, you should go for life (presuming the decay is 50:50; adjust as required).

The result applies to many-worlds too, I think: taking the bet results in an opportunity cost for all the future yous who die/never exist, reducing your average utility across all future worlds.

It is possible that this sort of gamble on quantum immortality will maximise utility, but it is unlikely for such a small quantity of money.
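(A minimal expected-value sketch of that condition, under assumptions I’m adding myself: a 50:50 quantum split per night, utilities denominated in dollars, and branch outcomes weighted by their measure.)

```python
# Minimal sketch (assumed numbers) of the opportunity-cost argument for
# one night as Schrodinger's cat: weight each branch by its measure.

def should_perform(life_value: float, payout: float = 1000.0,
                   p_survive: float = 0.5) -> bool:
    """Accept iff the measure-weighted expected utility beats declining.

    EV(accept)  = p_survive * (life_value + payout)  # surviving branches
    EV(decline) = life_value                         # keep your life for sure
    With p_survive = 0.5 this reduces to: accept iff payout > life_value.
    """
    return p_survive * (life_value + payout) > life_value

print(should_perform(life_value=500))        # True: life valued below $1000
print(should_perform(life_value=1_000_000))  # False for almost everyone
```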
I’d argue that it’s reasonable to place a $0 utility on my existence in other Everett branches; while theoretically I know they exist, theoretically there is also something beyond the light-barrier at the edge of the visible universe. Its existence is irrelevant, however, since I will never be able to interact with it.
Perhaps a different way of phrasing this—say I had a duplicating machine. I step into Booth B, and then an exact duplicate is created in Booths A and C, while the Booth B body is vapourized. For reasons of technobabble, the booth can only recreate people, not gold bullion or tasty filet mignons. I then program the machine to ‘dissolve’ the Booth C version into three vats of the base chemicals which the human body is made up of, through an instantaneous and harmless process. I then sell these chemicals for $50 on eBay. (Anybody with enough geek-points will know that the Star Trek teleporters work on this principle.)
Keep in mind that the universe wouldn’t have differentiated into two distinct universes, one where I’m alive and one where I’m dead, if I hadn’t performed the experiment (technically it would still have differentiated, but the two results would be anthropically identical). Does my existence in another Everett branch have moral significance? Suffering is one thing, but existence? I’m not sure that it does.
I think this depends on the answers to problems in anthropics and consciousness (the subjects that no one understands). The aptness of your thought experiment depends on Everett branching being like creating a duplicate of yourself, rather than dividing your measure or “degree-of-consciousness” in half. Now, since I only have the semipopular (i.e., still fake) version of QM, there’s a substantial probability that everything I believe is nonsense, but I was given to understand that Everett branching divides up your measure, rather than duplicating you: decoherence is a thermodynamic process occurring in the universal wavefunction; it’s not really about new parallel universes being created. Somewhat disturbingly, if I’m understanding it correctly, this seems to suggest that people in the past have more measure than we do, simply by virtue of being in the past …
But again, I could just be talking nonsense.
this seems to suggest that people in the past have more measure than we do

One Everett branch in the past has more measure than one Everett branch now. But the total measure over all Everett branches containing humans differs only by the probability of an existential disaster in the intervening time. The measure is merely spread across more diversity now, which doesn’t seem all that disturbing to me.
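(To put numbers on “spread across more diversity”: a toy split of my own devising, in which each decoherence event divides a branch’s measure among its children, so any single past branch outweighs any single present one while the total stays fixed, absent extinction.)

```python
# Toy illustration (my numbers): measure is divided, not duplicated.

def branch(measures: list[float], p: float = 0.5) -> list[float]:
    """Split every branch into two sub-branches with weights p and 1 - p."""
    return [m * w for m in measures for w in (p, 1 - p)]

worlds = [1.0]            # one "past" branch with measure 1
for _ in range(3):        # three rounds of branching
    worlds = branch(worlds)

print(len(worlds))        # 8 branches now...
print(sum(worlds))        # ...but the total measure is still 1.0
```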
Hopefully this conversation doesn’t separate into decoherence—though we may well have already jumped the shark. :)
First of all, I want to clarify something: do you agree that duplicating myself with a magical cloning booth for the $50 of mineral extracts is sensible, while disagreeing with the same tactic using Everett branches?
Secondly, could you explain how measure in the mathematical sense relates to moral value in unknowable realities? (I confess, I remember only half of my calculus.)
Thirdly, following up on the second: I was under the impression, from the “semipopular (i.e., still fake) version of QM,” that differing Everett branches were as unreal as something outside of my light cone. (This is a great link regarding relativity—sorry, I don’t know how to html: http://www.theculture.org/rich/sharpblue/ )
For the record, I’m not entirely certain that differing Everett branches of myself have 0 value; I wouldn’t want them to suffer, but if one of the two of us stopped existing, the only concern I could justify to myself would be concern over my long-suffering mother. I can’t prove that they have zero value, but I can’t think of why they wouldn’t.
could you explain how measure in the mathematical sense relates to moral value in unknowable realities

Well, I know that different things are going to happen to different future versions of me across the many worlds. I don’t want to say that I only care about some versions of me, because I anticipate being all of them, so I would seem to need some sort of weighting scheme. You’ve said you don’t want your analogues to suffer, but that you don’t mind them ceasing to exist; I don’t think you can hold that position consistently. The real world is continuous and messy: there’s no single bright line between life and death, between person and not-a-person. If you’re okay with half of your selves across the many worlds suddenly dying, are you okay with them gradually dropping into a coma? &c.
“Well, I know that different things are going to happen to different future versions of me across the many worlds.”
From what I understand, the many-worlds split occurs due to subatomic processes; while we’re certain to find billions of examples along the evolutionary chain that went A or B due to some random-decaying-neutronium-thing (most if not all of which will alter the present day), contemporary history will likely remain unchanged. For there to be multiple future-histories where the Nazis won (not Godwin’s law!), there’d have to be trillions of possible realities, each differentiated by a reaction here on earth; and even if those trillions do exist, it still won’t matter for the small subset in which I exist.

The googolplex of selves which exist down all of these lines will be nearly identical; the largest difference will be that one set had a microwave ‘ping’ a split-second earlier than the other.

I don’t know that two googolplexes of these are inherently better than a single googolplex.

As for coma—is it immediate, spontaneous coma, with no probability of resurrection? If so, then it’s basically equivalent to painless death.
It just seems kind of oddly discontinuous to care about what happens to your analogues in every case except death. I mention comas only in an attempt to construct a least convenient possible world with which to challenge your quantum immortalist position. I mean—are you okay with your scientist-stage-magician wiping out 99.999% of your analogues, as long as one copy of you exists somewhere? But decoherence is continuous: what does it even mean to speak of exactly one copy of you? Cf. Nick Bostrom’s “Quantity of Experience” (PDF).
Evidence to support your idea: whenever I make a choice, in another branch ‘I’ made the other decision, so if I cared equally about all future versions of myself, then I’d have no reason to choose one option over another.
If correct, this shows I don’t care equally about currently parallel worlds, but not that I don’t care equally about future sub-branches from this one.
Whenever I make a choice, there are branches that made another choice. But not all branches are equal. The closer my decision algorithm is to deterministic (on a macroscopic scale), the more asymmetric the distribution of measure among decision outcomes. (And the cases where my decision isn’t close to deterministic are precisely the ones where I could just as easily have chosen the other way—where I don’t have any reason to pick one choice.)
Thus the thought experiment doesn’t show that I don’t care about all my branches, current and future, simply proportional to their measure.
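(A small sketch of that asymmetry, with made-up weights: caring about branches in proportion to their measure still yields an ordinary preference whenever the decision algorithm is near-deterministic, because the outcomes don’t receive equal measure.)

```python
# Illustrative sketch (assumed numbers): measure-proportional caring.

def measure_weighted_utility(outcomes: dict[str, tuple[float, float]]) -> float:
    """Sum over outcomes of (branch measure * utility in that branch)."""
    return sum(measure * utility for measure, utility in outcomes.values())

# Near-deterministic chooser: almost all measure flows to one option.
near_deterministic = {"option A": (0.999, 10.0), "option B": (0.001, 2.0)}
# Genuine toss-up: measure splits evenly, but (per the comment above)
# those are exactly the cases where the options were equally good anyway.
toss_up = {"option A": (0.5, 10.0), "option B": (0.5, 10.0)}

print(measure_weighted_utility(near_deterministic))  # 9.992
print(measure_weighted_utility(toss_up))             # 10.0
```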
Suppose you just took the poison instead? Isn’t that just the same experiment occurring slightly earlier, since those branches would end but others wouldn’t?

Suppose you just took the poison instead? Isn’t that just the same experiment occurring slightly earlier, since those branches would end but others wouldn’t?
Not all probabilities are quantum probabilities.
True, I was assuming a quantum probability.
Whatever Omega is doing that might kill you might not be tied to the mechanism that divides universes. It might be that the choice is between huge chance of all of the yous in every universe where you’re offered this choice dying, vs. tiny chance they’ll all survive.
Also, I’m pretty sure that Eliezer’s argument is intended to test our intuitions in an environment without extraneous factors like MWI. Bringing MWI into the problem is sort of like asking if there’s some sort of way to warn everyone off the tracks so no one dies in the Trolley Problem.