I think you are missing the point. Newcomb’s problem is equivalent to dividing by zero. Decision theories aren’t supposed to behave well when abused in this way. If they behave badly on this problem, maybe it is the fault of the problem rather than the fault of the theory.
If someone can present a more robust decision theory, UDT, TDT, or whatever, which handles all the well-formed problems just as well as standard game theory, and also handles ill-formed problems like Newcomb in accord with EY’s intuitions, then I think that is great. I look forward to reading the papers and textbooks explaining that decision theory. But until it has gone through at least some serious process of peer review, please forgive me if I dismiss it as just so much woo and/or vaporware.
Incidentally, I specified “EY’s intuitions” rather than “correctness” as the criterion of success, because unless Omega actually appears and submits to a series of empirical tests, I can’t imagine a more respectable empirical criterion.
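To make concrete why Newcomb’s problem is the stress test here, a minimal sketch of the two standard expected-value calculations, using the usual payoffs ($1,000,000 opaque box, $1,000 transparent box); the 0.99 predictor-accuracy figure is an assumption for illustration, not something from this thread:

```python
# Newcomb's problem: the opaque box holds $1,000,000 iff Omega predicted
# one-boxing; the transparent box always holds $1,000.
# Predictor accuracy p = 0.99 is an assumed figure for illustration.

def edt_values(p=0.99):
    """Evidential reasoning: treat box contents as correlated with your choice."""
    one_box = p * 1_000_000
    two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)
    return one_box, two_box

def cdt_values(q):
    """Causal reasoning: contents already fixed with probability q of being full,
    independent of the choice you make now."""
    one_box = q * 1_000_000
    two_box = q * 1_000_000 + 1_000  # two-boxing dominates for every q
    return one_box, two_box

print(edt_values())      # one-boxing wins by a wide margin
print(cdt_values(0.5))   # two-boxing wins by exactly $1,000, for any q
```

The two calculations give opposite recommendations from the same payoff table, which is exactly the disagreement that makes the problem look either profound or ill-posed, depending on where you stand.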
IMO, you haven’t made a case for that—and few here agree with you.
If you really think randomness is an issue, imagine a deterministic program facing the problem, with no good source of randomness to hand.
No, randomness is kind of a red herring. I shouldn’t have brought it up.
At one point I thought I had a kind of Dutch Book argument against Omega—if he could predict some future “random” event which I intended to use in conjunction with a mixed strategy, then I should be able to profit by making side bets “hedging” my choice with respect to Omega. But when I looked more carefully, it didn’t work.
Yay: honesty points!