I don’t see the difficulty. No, you don’t win by giving Omega $100. Yes, it would have been a winning bet before the flip if, as you specify, the coin is fair. Your PS, in which you say to “assume that in the overwhelming measure of the MWI worlds it gives the same outcome”, contradicts the assertion that the coin is fair, and so you have asked us for an answer to an incoherent question.
This doesn’t sound right to me. The coin doesn’t need to be quantum mechanical to be fair. Here is a fair but perfectly deterministic coin: the 1098374928th digit of pi, mod 2. I have no idea whether it’s a zero or one. I could figure it out if you gave me enough time, as could Omega. If both of us agree not to take the time to figure it out in advance, we can use it as a fair coin. But in all Everett branches, it comes out the same way.
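The pi-digit coin described above can be made concrete. Here is a minimal Python sketch (function names `arctan_inv`, `machin_pi`, and `pi_coin` are mine, not from the comment): it computes decimal digits of pi with Machin's formula in integer arithmetic and returns digit k mod 2. Computing the 1098374928th digit this way would take far too long, which is exactly the point — for small indices you can check it instantly, but for a large enough index neither party has done the work in advance.

```python
def arctan_inv(x, scale):
    """arctan(1/x) scaled by 10**scale, via its Taylor series,
    using only integer arithmetic so the loop terminates exactly."""
    total = term = 10 ** scale // x
    k = 1
    while term:
        term //= x * x
        total += (-term if k % 2 else term) // (2 * k + 1)
        k += 1
    return total

def machin_pi(ndigits):
    """First ndigits digits of pi ("314159...") via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    scale = ndigits + 10  # guard digits against accumulated rounding
    pi = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    return str(pi)[:ndigits]

def pi_coin(k):
    """The k-th decimal digit of pi, mod 2: a deterministic 'coin'
    that is fair relative to anyone who hasn't done the computation."""
    digits = machin_pi(k + 1)  # digits[0] is '3'; digits[k] is the k-th decimal digit
    return int(digits[k]) % 2

print(pi_coin(5))  # 5th decimal digit of 3.14159... is 9, so this prints 1
```

The outcome is fixed by mathematics, identical in every Everett branch, yet it serves perfectly well as a fair coin between two parties who agree not to run the computation beforehand.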
The difficulty comes from projecting the ideal decision theory onto people. Look at how many people are ready to pay up the $100; it must be a real difficulty.
The fairness of a coin is a property of your mind, not of the coin itself. The coin can be fair in a deterministic world, the same way you can have free will in a deterministic world.
Better to say that your state of knowledge about the coin, prior to Omega appearing, is that it has a probability 1⁄2 of being heads and 1⁄2 of being tails. The MWI clause is supposed to make the problem harder by preventing you from assigning utility (once Omega appears) to your ‘other selves’ in other Everett branches. The problem is then just: “how, knowing that Omega might appear, but not knowing what the coin flip will be, can I maximise my utility?” If Omega appears in front of you right now then that’s a different question.
My state of knowledge about the coin prior to Omega appearing is that I don’t even know that the coin is going to be flipped, actually.