Personally, I can think of LOTS of reasons to doubt that Newcomb’s problem is even theoretically possible to set up.
If you allow arbitrarily high but not 100%-accurate predictions (as EY is fond of repeating, 100% is not a probability), the original Newcomb’s problem is defined as the limit as prediction accuracy goes to 100%. As noted in other comments, the “winning” answer is not sensitive to the prediction accuracy once it exceeds a threshold just above 50% ((1,000,000 + 1,000)/2,000,000 = 50.05%, to be precise: one-boxing’s expected value p·$1,000,000 exceeds two-boxing’s $1,000 + (1−p)·$1,000,000 exactly when p > 0.5005), so the limiting case must have the same answer.
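For anyone who wants to check the arithmetic, here is a quick sketch. The $1,000/$1,000,000 payoffs are the standard ones from the problem statement; the accuracy model, where the predictor is correct with probability p regardless of which choice you make, is an assumption:

```python
# Expected-value check for Newcomb's problem, assuming the predictor
# is correct with probability p regardless of which box(es) you take.
SMALL = 1_000        # transparent box (always yours if you two-box)
BIG = 1_000_000      # opaque box (filled iff one-boxing was predicted)

def ev_one_box(p):
    # You get BIG only when the predictor correctly foresaw one-boxing.
    return p * BIG

def ev_two_box(p):
    # You always get SMALL; BIG is there only if the predictor was wrong.
    return SMALL + (1 - p) * BIG

# Break-even point: p*BIG = SMALL + (1-p)*BIG  =>  p = (BIG + SMALL) / (2*BIG)
threshold = (BIG + SMALL) / (2 * BIG)
print(threshold)  # 0.5005

for p in (0.50, 0.5005, 0.51, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
```

Above the 0.5005 threshold, one-boxing dominates in expectation, and the gap only widens as accuracy rises, which is why the limit at 100% can't flip the answer.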
Damn good point, thanks. That certainly answers my concern about Newcomb’s problem.