Of course, when I examined the thing's source code, I knew it would reason this way, and so I did not put the million in the box.
Then you're talking about an evil decision problem. But neither in the original nor in the genetic Newcomb's problem is your source code investigated.
No, it is not an evil decision problem: I withheld the million not because of your particular reasoning, but because of the predicted outcome (taking both boxes).
The original problem does not specify how Omega makes his prediction, so it may well be by investigating your source code.
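To make the distinction concrete, here is a minimal Python sketch (all names hypothetical) of an Omega that predicts by running the agent's decision procedure and fills the opaque box based only on the predicted action, never on which decision theory produced it:

```python
# Minimal sketch (hypothetical names): Omega predicts by simulating the
# agent's decision procedure ("reading its source code") and rewards
# based only on the predicted outcome, not on the reasoning behind it.

def omega_fill_boxes(agent_decision):
    """agent_decision: a function standing in for the agent's source
    code; it returns 'one' or 'two' (boxes taken)."""
    predicted = agent_decision()  # run the agent in simulation
    # The million depends only on the predicted action (one-boxing),
    # not on how the agent arrived at that action.
    opaque_box = 1_000_000 if predicted == "one" else 0
    transparent_box = 1_000
    return opaque_box, transparent_box

def one_boxer():
    return "one"

def two_boxer():
    return "two"

print(omega_fill_boxes(one_boxer))  # (1000000, 1000)
print(omega_fill_boxes(two_boxer))  # (0, 1000)
```

On this reading, Omega's rule conditions only on the output of the simulation, so it is not "evil" in the sense of punishing a specific line of reasoning.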