It’s easy to construct Newcomb-like problems where EDT fails. For example, we could make the two boxes transparent, so you already see their contents and your action gives you no further evidence. One-boxing is still the right decision because that’s what you’d like to be predicted by Omega (alternatively: if you could modify your brain before meeting Omega, that’s what you’d precommit to doing), but both EDT and CDT fail to see that. Another similar example is Parfit’s Hitchhiker.
CDT still works in that case if you’re dealing with Omega and have no reason to believe Omega won’t simulate you. If you are one of the simulations, your decision determines the prediction for the real version.
How about if you’re dealing with me?
Then CDT seems to fail, since this is the low-accuracy case (perhaps 55%, as I used above), and EDT fails because the prize’s contents are already in evidence.
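For concreteness, here’s a minimal sketch of the evidential expected values as a function of predictor accuracy, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box iff one-boxing was predicted, a guaranteed $1,000 in the other box); the payoff figures are assumptions for illustration, and the 55% value echoes the figure used above:

```python
# Evidential expected value of one-boxing vs. two-boxing in Newcomb's
# problem, as a function of predictor accuracy p.
# Payoffs are the (assumed) standard ones: $1,000,000 in the opaque box
# iff one-boxing was predicted, plus a guaranteed $1,000 in the other box.

BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p: float) -> float:
    # With probability p the predictor foresaw one-boxing and filled the box.
    return p * BIG

def ev_two_box(p: float) -> float:
    # With probability 1 - p the predictor wrongly foresaw one-boxing,
    # so the big box is full; the small box is always collected.
    return (1 - p) * BIG + SMALL

for p in (0.55, 0.999):
    print(f"p = {p}: one-box = {ev_one_box(p):,.0f}, two-box = {ev_two_box(p):,.0f}")
```

At p = 0.55 this gives roughly $550,000 for one-boxing vs. $451,000 for two-boxing, so even a weak predictor makes one-boxing evidentially better, while CDT two-boxes regardless because the contents are causally fixed. Note this calculation applies to the opaque-box version; in the transparent variant, seeing the contents screens off the evidence, which is exactly why EDT fails there too.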