Would you agree that, given that Omega asks you, you are guaranteed by the rules of the problem to not pay him?
If you are inclined to take the (I would say) useless way out and claim it could be a simulation, consider the case where Omega makes sure the Omega in its simulation is also always right—creating an infinite tower of recursion such that the density of Omega being wrong in all simulations is 0.
If you are inclined to take the (I would say) useless way out and claim it could be a simulation,
Leaving open the question of whether Omega must work by simulating the Player, I don’t understand why you say this is a ‘useless way out’. So for now let’s suppose Omega does simulate the Player.
consider the case where Omega makes sure the Omega in its simulation is also always right
Why would Omega choose to, or need to, ensure that in its simulation, the data received by the Player equals Omega’s actual output?
There must be an answer to the question of what the Player would do if asked, by a being that it believes is Omega, to pay $100. Even if (as cousin_it may argue) the answer is “go insane after deducing a contradiction”, and then perhaps fail to halt. To get around the issue of not halting, we can either stipulate that if the Player doesn’t halt after a given length of time then it refuses to pay by default, or else that Omega is an oracle machine which can determine whether the Player halts (and interprets not halting as refusal to pay).
Having done the calculation, Omega acts accordingly. None of this requires Omega to simulate itself.
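The timeout stipulation above can be sketched as a toy program. This is only an illustrative model under stated assumptions (every name here, such as `omega_predicts_pay`, is invented for the sketch, not taken from the discussion): the Player is modeled as a generator that yields once per step of deliberation and finally returns its decision, and Omega runs it under a fixed step budget, treating failure to halt in time as refusal to pay.

```python
# Toy model of "if the Player doesn't halt after a given length of time,
# then it refuses to pay by default". All names are illustrative.

BUDGET = 10_000  # arbitrary step limit standing in for "a given length of time"

def omega_predicts_pay(player, budget=BUDGET):
    """Run `player` (a generator yielding once per deliberation step and
    returning True for 'pay' / False for 'refuse') under a step budget.
    Exhausting the budget counts as refusal by default."""
    gen = player()
    for _ in range(budget):
        try:
            next(gen)  # one step of the Player's deliberation
        except StopIteration as halted:
            return bool(halted.value)  # the Player halted with a decision
    return False  # did not halt in time: treated as refusing to pay

def paying_player():
    """A Player that deliberates briefly, then pays."""
    yield
    return True

def refusing_player():
    """A Player that deliberates briefly, then refuses."""
    yield
    return False

def looping_player():
    """A Player that never halts (deduces a contradiction and loops forever)."""
    while True:
        yield
```

With these toy Players, `omega_predicts_pay(paying_player)` comes out true, while both the refusing and the non-halting Player are classified as refusing, which is exactly the default-to-refusal convention described above. The oracle-machine variant would replace the finite budget with a genuine halting test, but the classification of non-halting as refusal is the same.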
It’s “useless” in part because, as you note, it assumes Omega works by simulating the player. But mostly it’s just that it subverts the whole point of the problem; Omega is supposed to have your complete trust in its infallibility. To say “maybe it’s not real” goes directly against that. The situation in which Omega simulates itself is merely a way of restoring the original intent of infallibility.
This problem is tricky: since the decision-type "pay" is associated with higher rewards, you should pay; but if you are a person Omega actually asks to pay, you will not pay, as a simple matter of fact. So the wording of the question has to be careful, because there is a distinction between counterfactual and reality: some of the people Omega counterfactually asks will pay, while none of the people Omega really asks will successfully pay. What might seem like mere grammatical structure therefore has a huge impact on the answer: "If asked, would you pay?" versus "Given that Omega has asked you, will you pay?"
It’s “useless” in part because, as you note, it assumes Omega works by simulating the player
Or, if you are thinking about it more precisely, it observes that however Omega works, it will be equivalent to Omega simulating the player. It just gives us something our intuitions can grasp a little more easily.
That’s a fairly good argument: simulation, or something equivalent to it, is the most realistic thing to expect. But since Omega is already several kinds of impossible, if Omega didn’t work in a way equivalent to simulating the player, it would add minimally to the required suspension of disbelief. Heck, it might make it easier to believe, depending on the picture—“The impossible often has a kind of integrity to it which the merely improbable lacks.”
Heck, it might make it easier to believe, depending on the picture—“The impossible often has a kind of integrity to it which the merely improbable lacks.”
On the other hand sometimes the impossible is simply incomprehensible and the brain doesn’t even understand what ‘believing’ it would mean. (Which is what my brain is doing here.) Perhaps this is because it is related behind the scenes to certain brands of ‘anthropic’ reasoning that I tend to reject.