If the worlds in your MWI experiment are considered independent, you might as well do the same in a single deterministic world. Compare the expected utility calculations for one world and many-worlds: they’ll look the same, you just exchange “many-worlds” with “possible worlds” and averaging with expectation. MWI is morally uninteresting, unless you do nontrivial quantum computation. Just flip a logical coin from pi and kill the other guys.
More specifically: when you are saying “everyone survives in one of the worlds”, this statement gets intuitive approval (as opposed to doing the experiment in a deterministic world where all participants but one “die completely”), but there is no term in the expected utility calculation that corresponds to the sentiment.
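To make the claim concrete, here is a minimal sketch of the two calculations side by side, with made-up stakes and participant count (all numbers are illustrative assumptions, not from the original scenario). The branch-weighted average over many worlds and the probability-weighted expectation in a single world are term-by-term identical:

```python
# Hypothetical lottery: N participants, one survives and gets the prize,
# the rest die. All utility numbers are illustrative assumptions.
N = 16
U_WIN = 750_000      # utility of surviving with the prize
U_DIE = -1_000_000   # utility of dying

# Many-worlds: average over branches, weighted by branch measure (1/N each).
eu_many_worlds = (1 / N) * U_WIN + ((N - 1) / N) * U_DIE

# Single deterministic world: expectation over possible outcomes,
# with probability 1/N of winning.
eu_single_world = (1 / N) * U_WIN + ((N - 1) / N) * U_DIE

# The two calculations are the same formula with relabeled weights:
# "branch measure" in one, "probability" in the other.
assert eu_many_worlds == eu_single_world
```

Nothing in either sum corresponds to "everyone survives in one of the worlds"; that sentiment never shows up as a term.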
I think that the intuition at stake here is something about continuity of conscious experience. The intuition that Christian might have, if I may anticipate him, is that everyone in the experiment will actually experience getting $750,000, because somehow the world-line of their conscious experience will continue only in the worlds where they do not die.
I think that, in some sense, this is a mistake, because it fundamentally rests upon a very very strong intuition that there exists a unique person who I will be in the future. This is an intuition that evolution programmed into us for obvious reasons: we are more likely to act like a good Could-Should-Would agent if we think that the benefits and costs associated with our actions will accrue to us rather than to some vague probability distribution over an infinite set of physically realized future continuations-of-me, with the property that whatever I do, some of them will die, and whatever I do, some of them will be rich and happy.
You can assign high negative utility to certain death.
You can, but then you should also do so in the expected utility calculation, which is never actually done in most discussions of MWI in this context, and isn’t done in this post. The problem is using MWI as a rationalization for invalid intuitions.
I did it implicitly in the OP. Assuming that, you get a better expected value in the quantum scenario.
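One way to formalize this move is a sketch like the following, with assumed numbers (the extra certain-death penalty and all utilities are my illustrative assumptions). The logical-coin loser dies in all worlds, so the death outcome carries the extra penalty; under MWI no participant dies in every branch, so only the ordinary death disutility applies to the losing branches:

```python
# Sketch: adding a "certain death" term to the expected utility calculation.
# All numbers are illustrative assumptions.
N = 16
U_WIN = 750_000
U_DIE = -1_000_000
D_CERTAIN = -5_000_000  # extra penalty for dying in *all* worlds (assumed)

# Logical coin in a deterministic world: the loser dies completely,
# so the death outcome includes the certain-death penalty.
eu_logical = (1 / N) * U_WIN + ((N - 1) / N) * (U_DIE + D_CERTAIN)

# Quantum coin under MWI: every participant survives in some branch,
# so no outcome counts as "certain death"; only ordinary death
# disutility applies to the losing branches.
eu_quantum = (1 / N) * U_WIN + ((N - 1) / N) * U_DIE

# With the penalty term actually included, the quantum scenario
# comes out ahead.
assert eu_quantum > eu_logical
```

Whether the extra term is justified is exactly what is in dispute; the sketch only shows that, once granted, the quantum scenario gets the better expected value.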
A logical coin flip would be much more scary (and carry more negative utility), since it means certain death for some of the participants.
(I don’t buy quantum immortality arguments. They resemble the Achilles-and-the-tortoise problem: being rescued at shorter and shorter intervals does not imply being rescued for any fixed length of time.)
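The Achilles-and-the-tortoise point can be checked with a few lines of arithmetic (the halving intervals are an illustrative assumption): if each rescue buys only half the previous survival interval, infinitely many rescues still sum to a bounded total, so "rescued again every time" does not entail surviving past any fixed time.

```python
# Illustrative: survival intervals that shrink geometrically.
# Each "rescue" buys half the previous interval of survival time.
intervals = [1 / 2**k for k in range(50)]
total = sum(intervals)

# 50 rescues (and in the limit, infinitely many), yet the total
# survival time never reaches 2 time units.
assert total < 2
```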