If you do present observations that move the beliefs to represent the thought experiment, it’ll work just as well as the magically contrived thought experiment. But the absence of relevant No-megas is part of the setting, so it too should be a conclusion one draws from those observations.
Yes, but you must make the precommitment to love Omegas and hate No-megas (or vice versa) before you receive those observations, because that precommitment of yours is exactly what they’re judging. (I think you see that point already, and we’re probably arguing about some minor misunderstanding of mine.)
You never have to decide in advance, that is, to precommit. Precommitment is useful as a signal to those who can’t follow your full thought process, and so you replace it with a simple rule from some point on (“you’ve already decided”). For Omegas and No-megas, you don’t have to precommit, because they can follow any thought process.
I thought about it some more, and I think you’re either confused somewhere or misrepresenting your own opinions. To clear things up, let’s convert the whole problem statement into observational evidence.
Scenario 1: Omega appears and gives you convincing proof that Upsilon doesn’t exist (and that Omega is trustworthy, etc.), then presents you with CM.
Scenario 2: Upsilon appears and gives you convincing proof that Omega doesn’t exist, then presents you with anti-CM, taking into account your counterfactual action if you’d seen scenario 1.
You wrote: “If you do present observations that move the beliefs to represent the thought experiment, it’ll work just as well as the magically contrived thought experiment.” Now, I’m not sure what this sentence was supposed to mean, but it seems to imply that you would give up $100 in scenario 1 if faced with it in real life, because receiving the observations would make it “work just as well as the thought experiment”. This means you lose in scenario 2. No?
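To make the stakes concrete, here is a minimal sketch (not from the thread; the $10,000 reward and Upsilon’s matching reward are assumed figures, with only the $100 cost taken from the scenario, and the function name is just for illustration) of how the two fixed policies fare when evaluated updatelessly, before you know which scenario you will observe:

```python
# Minimal sketch, not from the thread. Payoffs are assumptions: the usual
# Counterfactual Mugging figure of $10,000 for Omega's reward, the $100 cost
# from the scenario, and a matching reward from Upsilon. Only the comparison
# between policies matters, not the exact numbers.

OMEGA_REWARD = 10_000    # what Omega pays on heads, if you are an agent that would pay on tails
CM_COST = 100            # what Omega asks for in scenario 1
UPSILON_REWARD = 10_000  # assumed: what Upsilon pays in scenario 2 iff you would refuse Omega

def policy_value(pay_omega: bool, p_omega: float, p_upsilon: float) -> float:
    """Expected value of a fixed (updateless) policy, chosen before knowing
    which scenario obtains, under prior weights for the two kinds of worlds."""
    # Scenario 1: a fair coin; you pay only on tails, and are rewarded on
    # heads only if you are the kind of agent that pays.
    omega_branch = 0.5 * OMEGA_REWARD - 0.5 * CM_COST if pay_omega else 0.0
    # Scenario 2: Upsilon rewards exactly the agents that would refuse Omega.
    upsilon_branch = 0.0 if pay_omega else UPSILON_REWARD
    return p_omega * omega_branch + p_upsilon * upsilon_branch

# Equal prior weight on the two kinds of worlds: refusing Omega wins.
print(policy_value(True, 0.5, 0.5), policy_value(False, 0.5, 0.5))  # 2475.0 5000.0
# Negligible weight on Upsilon-worlds: paying Omega wins.
print(policy_value(True, 1.0, 0.0), policy_value(False, 1.0, 0.0))  # 4950.0 0.0
```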
Omega would need to convince you that Upsilon not just doesn’t exist, but couldn’t exist, and that’s inconsistent with scenario 2. Otherwise, you haven’t moved your beliefs to represent the thought experiment. Upsilon must actually be impossible (or sufficiently improbable) for it to be possible for Omega to correctly convince you of that (without deception).
Being updateless, your decision algorithm is only interested in observations insofar as they resolve logical uncertainty and say which situations you actually control (again, a sort of logical uncertainty), but observations can’t refute what is logically possible, so they can’t make Upsilon impossible if it wasn’t already impossible.
Omega would need to convince you that Upsilon not just doesn’t exist, but couldn’t exist, and that’s inconsistent with scenario 2.
No, it’s not inconsistent. Counterfactual worlds don’t have to be identical to the real world. You might as well say that Omega couldn’t have simulated you in the counterfactual world where the coin came up heads, because that world is inconsistent with the real world. Do you believe that?
By “Upsilon couldn’t exist”, I mean that Upsilon doesn’t live in any of the possible worlds (or only in insignificantly few of them), not that it couldn’t appear in the possible world where you are speaking with Omega.
The convention is that the possible worlds don’t logically contradict each other, so the two different outcomes of a coin toss exist in two slightly different worlds, both of which you care about (this situation is not logically inconsistent). If Upsilon lives in such a different possible world, and not in the world with Omega, that doesn’t make Upsilon impossible, and so you care what it does. In order to replicate Counterfactual Mugging, you need the possible worlds with Upsilons to be irrelevant, and it doesn’t matter that the Upsilons are not in the same world as the Omega you are talking to.
(How to correctly perform counterfactual reasoning under logically inconsistent conditions (such as the possible actions you could take that are not your actual action), or rather how to understand that reasoning mathematically, is the septillion-dollar question.)
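As one way to cash out “the possible worlds with Upsilons are irrelevant” quantitatively (my formalization, not the thread’s, using the same assumed payoffs as the sketch above: Omega’s reward R, the cost c, Upsilon’s reward R′, and prior weights p_Omega and p_Upsilon on the two kinds of worlds), paying Omega is the better policy exactly when:

```latex
% Assumed formalization: "pay Omega" beats "refuse" iff
p_{\Omega} \left( \tfrac{1}{2} R - \tfrac{1}{2} c \right) > p_{\Upsilon} R'
\quad\Longleftrightarrow\quad
\frac{p_{\Upsilon}}{p_{\Omega}} < \frac{R - c}{2 R'} .
```

With the sketch’s numbers (R = R′ = $10,000, c = $100) the threshold is (10,000 − 100) / 20,000 ≈ 0.495, i.e. the Upsilon-worlds must get less than roughly half the prior weight of the Omega-worlds for the Counterfactual Mugging answer to go through.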
Ah, I see. You’re saying Omega must prove to you that your prior made Upsilon less likely than Omega all along. (By the way, this is an interesting way to look at modal logic, I wonder if it’s published anywhere.) This is a very tall order for Omega, but it does make the two scenarios logically inconsistent. Unless they involve “deception”—e.g. Omega tweaking the mind of counterfactual-you to believe a false proof. I wonder if the problem still makes sense if this is allowed.
Sorry, can’t parse that, you’d need to unpack more.