Great post overall; you’re making interesting points!
A couple of comments:
There are 8 possible worlds here, with different utilities and probabilities. Two corrections to your table:

- Your utility for “To smoke” and “No lesion, no cancer” should be 1,000,000 instead of 0.
- Your utility for “Not to smoke” and “No lesion, no cancer” should be 1,000,000 instead of 0.
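To make the correction concrete, here is a quick enumeration of the 8 possible worlds (lesion × smoke × cancer). The only utility numbers taken from the discussion above are the 1,000,000 for cancer-free worlds and 0 for cancer worlds; any separate smoking-enjoyment term from the original table is omitted here.

```python
from itertools import product

# Each world is a (lesion, smoke, cancer) combination: 2^3 = 8 worlds.
# Utility per the corrections above: any world without cancer is worth
# 1,000,000; worlds with cancer are worth 0.
worlds = [
    (lesion, smoke, cancer, 1_000_000 if not cancer else 0)
    for lesion, smoke, cancer in product([True, False], repeat=3)
]

for lesion, smoke, cancer, utility in worlds:
    print(f"lesion={lesion!s:5} smoke={smoke!s:5} cancer={cancer!s:5} -> {utility:,}")
```

In particular, both “To smoke, no lesion, no cancer” and “Not to smoke, no lesion, no cancer” come out at 1,000,000 rather than 0.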
> Some decision theorists tend to get confused over this because they think of this magical thing they call “causality,” the qualia of your decisions being yours and free, causing the world to change upon your metaphysical command. They draw fancy causal graphs like this one:
That seems like an unfair criticism of the FDT paper. Drawing such a diagram doesn’t imply one believes causality to be magic any more than making your table of possible worlds does.
Specifically, the diagrams in the FDT paper don’t say decisions are “yours and free”, at least if I understand you correctly. Your decisions are caused by your decision algorithm, which in some situations is implemented in other agents as well.
XOR Blackmail is (in my view) perhaps the clearest counterexample to EDT:
(Styling mine, not original.) EDT pays the $1,000 for nothing: the payment has absolutely no influence on whether or not the agent’s house is infested with termites.