Wow, I’ve been cited. Exciting! But I no longer endorse EDT. Here’s why:
Anyway, when I posted that, someone pointed out that EDT fails the transparent-box variant of Newcomb’s problem, where Omega puts $1 million in box A if he would expect you to 1-box upon seeing box A with $1 million in it, and puts nothing in box A otherwise. An EDT agent who sees a full box A has no reason not to take the $1,000 in box B as well: 2-boxing provides no evidence that box A is empty, since he can already see that it is full. But precisely because Omega predicts this, an EDT agent will never see $1 million in box A.
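Here’s a minimal sketch of that argument in code (the dollar amounts are from the problem above; the perfect-predictor assumption and all the names are mine):

```python
# Minimal sketch: why an EDT agent 2-boxes in transparent Newcomb once
# it sees a full box A, and why it therefore never sees one. Assumes
# Omega is a perfect predictor of the agent's policy.

BOX_A = 1_000_000  # contents of box A, if Omega filled it
BOX_B = 1_000      # box B always contains $1,000

def edt_choice(observed_box_a: int) -> str:
    """EDT conditions on the action, but box A's contents are already
    observed, so the action carries no evidence about them."""
    ev_one_box = observed_box_a           # take only box A
    ev_two_box = observed_box_a + BOX_B   # take both boxes
    return "1-box" if ev_one_box > ev_two_box else "2-box"

# Whatever the agent sees, 2-boxing is evidentially better:
assert edt_choice(BOX_A) == "2-box"  # sees $1M, still grabs box B
assert edt_choice(0) == "2-box"      # sees an empty box, grabs box B

# Omega, simulating this policy on a full box A, predicts 2-boxing and
# leaves box A empty, so the EDT agent walks away with only $1,000.
predicted = edt_choice(BOX_A)
box_a = BOX_A if predicted == "1-box" else 0
print(box_a + BOX_B)  # -> 1000
```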
> If all I know about the world I inhabit are the two facts: (1) the probability of rain is higher, given that the ground is wet, and (2) the probability of the ground being wet is higher, given that I turn the sprinklers on, then turning the sprinklers on really is the rational thing to do, if I want it to rain.
That is correct under a straightforward model of complete uncertainty about the causal structure underlying (1) and (2), but it is also irrelevant. CDT handles causal uncertainty correctly too, and EDT is criticized for acting the same way even when it is known that turning the sprinklers on does not increase the probability of rain enough to be worthwhile. You did address this; I’m just saying that pointing out that turning the sprinklers on could be the correct action under some state of partial information doesn’t really add anything.
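To make that concrete, here is a toy joint distribution over (rain, sprinkler, wet ground), with all numbers invented for illustration, in which facts (1) and (2) both hold, and yet the sprinkler carries no evidence about rain once the full distribution is known:

```python
from itertools import product

# Toy model: rain and the sprinkler are independent causes of wet
# ground. All probabilities are made up for illustration.
P_RAIN = 0.3
P_SPRINKLER = 0.5
P_WET = {  # P(wet | rain, sprinkler)
    (True, True): 0.99,
    (True, False): 0.90,
    (False, True): 0.80,
    (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """Probability of one full assignment under the toy model."""
    p = (P_RAIN if rain else 1 - P_RAIN) \
        * (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER)
    p_wet = P_WET[(rain, sprinkler)]
    return p * (p_wet if wet else 1 - p_wet)

def prob(event):
    return sum(joint(r, s, w)
               for r, s, w in product([True, False], repeat=3)
               if event(r, s, w))

def cond(event, given):
    return prob(lambda r, s, w: event(r, s, w) and given(r, s, w)) / prob(given)

# Fact (1): rain is more likely given that the ground is wet.
print(cond(lambda r, s, w: r, lambda r, s, w: w))  # ~0.49 vs P(rain) = 0.30
# Fact (2): wet ground is more likely given that the sprinkler is on.
print(cond(lambda r, s, w: w, lambda r, s, w: s))  # ~0.86 vs P(wet) ~ 0.58
# But the sprinkler is no evidence of rain at all:
print(cond(lambda r, s, w: r, lambda r, s, w: s))  # 0.30 = P(rain)
```

In this model even EDT declines to turn the sprinklers on to make it rain; the cases EDT is actually criticized for are those where the action–outcome correlation survives knowing the full distribution.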