I don’t see how this is a decision problem at all.
And that is where most people make their mistake when encountering this kind of decision problem. It varies between people whether the error kicks in at standard Newcomb’s, at Transparent Newcomb’s, or at some even more abstract variant such as this one.
Could you explain the error, rather than just say that it is a common error? How can I agree to pay in a situation which happens only if I was predicted to disagree?
(I don’t object to having precommitted to agree to pay if Omega makes his request; that would indeed be a correct decision. But then, of course, Omega doesn’t appear. The given formulation
Omega asks you to pay him $100. Do you pay?
implies that Omega really appears, which logically excludes the variant of paying. Maybe it is only a matter of formulation.)
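For concreteness, here is a minimal expected-value sketch of why precommitting to pay would be the correct decision. The payoff structure is an assumption on my part: only the $100 request appears in the thread, so the fair coin and the $10,000 reward on heads for agents predicted to pay are the conventional counterfactual-mugging numbers, not anything stated above.

```python
def expected_value(policy_pays: bool) -> float:
    """Expected value of a policy, evaluated before Omega flips the coin.

    Assumed setup (not stated in the thread): Omega flips a fair coin.
    On tails it asks you for $100; on heads it pays you $10,000, but
    only if it predicts you would have paid on tails.
    """
    heads_payoff = 10_000 if policy_pays else 0  # reward for predicted payers
    tails_payoff = -100 if policy_pays else 0    # paying costs $100 on tails
    return 0.5 * heads_payoff + 0.5 * tails_payoff

print(expected_value(policy_pays=True))   # 4950.0
print(expected_value(policy_pays=False))  # 0.0
```

Evaluated before the coin flip, the paying policy comes out ahead; the whole dispute in this thread is whether that evaluation still binds once Omega has already appeared and asked.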
I’ll echo prase’s request. It seems to me that given that he’s made the offer and I am confident of his predictions, I ought not expect to pay him. This is true regardless of what decision I make: if I decide to pay him, I ought to expect to fail.
Perhaps I’m only carrying counterfeit bills, or perhaps a windstorm will come up and blow the money out of my hands, or perhaps my wallet has already been stolen, or perhaps I’m about to have a heart attack, or whatever.
Implausible as these things are, they are far more plausible than Omega being wrong. The last thing I should consider likely is that, having decided to pay, I actually will pay.
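That comparison can be put in rough numbers. The specific probabilities below are illustrative assumptions, not figures from the thread; the point is only the ratio between them.

```python
# Illustrative probabilities (assumptions, not from the thread):
omega_error_rate = 1e-9   # chance Omega's prediction is wrong
p_wallet_stolen = 1e-4    # chance some mundane mishap blocks the payment

# If Omega makes the request only when it predicted I won't pay,
# then my actually succeeding in paying would mean Omega was wrong:
p_i_actually_pay = omega_error_rate

# Each mundane failure mode is far more plausible than Omega erring:
print(p_wallet_stolen / p_i_actually_pay)  # roughly 1e5
```

Under these numbers a stolen wallet is about a hundred thousand times more plausible than Omega being wrong, which is the sense in which "having decided to pay, I actually will pay" is the last thing one should expect.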
ETA—I am apparently confused on more fundamental levels than I had previously understood, not least of which is what is being presumed about Omega in these cases. Apparently I am not presumed to be as confident of Omega’s predictions as I’d thought, which makes the rest of this comment fairly irrelevant. Oops.
You just described the reasoning you would go through when making a decision. That would seem to be answer enough to demonstrate that this is a decision problem.
I don’t see how this is a decision problem at all.
It seems to me that if my reasoning tells me that no matter what decision I make, the same thing happens, that isn’t evidence that I have a decision problem.
But perhaps I just don’t understand what a decision problem is.