A problem with Timeless Decision Theory (TDT)

According to Ingredients of Timeless Decision Theory, when you set up a factored causal graph for TDT, “You treat your choice as determining the result of the logical computation, and hence all instantiations of that computation, and all instantiations of other computations dependent on that logical computation.” Here “the logical computation” refers to the TDT-prescribed argmax computation (call it C) that takes all your observations of the world (from which you can construct the factored causal graph) as input and outputs an action in the present situation.

I asked Eliezer to clarify what it means for another logical computation D to be either the same as C, or “dependent on” C, for purposes of the TDT algorithm. Eliezer answered:

For D to depend on C means that if C has various logical outputs, we can infer new logical facts about D’s logical output in at least some cases, relative to our current state of non-omniscient logical knowledge. A nice form of this is when supposing that C has a given exact logical output (not yet known to be impossible) enables us to infer D’s exact logical output, and this is true for every possible logical output of C. Non-nice forms would be harder to handle in the decision theory but we might perhaps fall back on probability distributions over D.

I replied as follows (which Eliezer suggested I post here).

If that’s what TDT means by the logical dependency between Platonic computations, then TDT may have a serious flaw.

Consider the following version of the transparent-boxes scenario. The predictor has an infallible simulator D that predicts whether I one-box here [EDIT: if I see $1M]. The predictor also has a module E that computes whether the ith digit of pi is zero, for some ridiculously large value of i that the predictor randomly selects. I’ll be told the value of i, but the best I can do is assign an a priori probability of .1 that the specified digit is zero.

The predictor puts $1M in the large box iff (D xor E) is true. (And that’s explained to me, of course.)
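
To make the variant concrete, here is a minimal Python sketch of the predictor's rule as described above; the names simulate_agent, digit_of_pi_is_zero, and i are purely illustrative, not part of the original setup.

```python
def predictor_fills_big_box(simulate_agent, digit_of_pi_is_zero, i):
    """Variant transparent-boxes rule: the big box gets $1M iff (D xor E)."""
    # D: the infallible simulator's prediction of whether the agent
    #    one-boxes upon seeing $1M in the big box.
    d = simulate_agent(sees_million=True)
    # E: whether the i-th digit of pi is zero (i chosen by the predictor;
    #    the agent can only assign this a prior of 0.1).
    e = digit_of_pi_is_zero(i)
    return d != e  # boolean xor
```
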
So let’s say I’m confronted with this scenario, and I see $1M in the large box.

The flaw then is that E (as well as D) meets your criterion for “depending on” my decision computation C. I’m initially unsure what C and E output. But if C in fact one-boxes here, then I can infer that E outputs False (or else the large box has to be empty, which it isn’t). Similarly, if C in fact two-boxes here, then I can infer that E outputs True. (Or equivalently, a third-party observer could soundly draw either of those inferences.)

So E does indeed “depend on” C, in the particular sense you’ve specified. Thus, if I happen to have a strong enough preference that E output True, then TDT (as currently formulated) will tell me to two-box for the sake of that goal. But that’s the wrong decision, of course. In reality, I have no choice about the specified digit of pi.
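
To spell out that inference as a sketch (assuming, as above, that the observed $1M fixes D xor E = True, and that D matches C's actual output because the simulation is infallible):

```python
def inferred_e(c_one_boxes, big_box_has_million=True):
    """Infer E's output from a hypothesized output of C, given the observation.

    The big box contains $1M iff (D xor E), and D equals C's output,
    so E == (big_box_has_million xor D).
    """
    d = c_one_boxes
    return big_box_has_million != d

# Every possible output of C pins down E's output exactly -- the "nice form"
# of dependency described in the quoted criterion:
assert inferred_e(c_one_boxes=True) is False   # if C one-boxes, E is False
assert inferred_e(c_one_boxes=False) is True   # if C two-boxes, E is True
```
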
What’s going on, it seems to me, is that the kind of logical/Platonic “dependency” that TDT would need to invoke here is this: that E’s output be counterfactually entailed by C’s output (which it isn’t, in this case [see footnote]), rather than (as you’ve specified) merely inferable from C’s output (which indeed it is, in this case). That’s bad news, because distinguishing what my action does or does not counterfactually entail (as opposed to what it implies, causes, gives evidence for, etc.) is the original full-blown problem that TDT’s prescribed decision-computation is meant to solve. So it may turn out that in order to proceed with that very computation (specifically, in order to ascertain which other Platonic computations “depend on” the decision computation C), you already need to (somehow) know the answer that the computation is trying to provide.

--Gary

[footnote] Because if-counterfactually C were to two-box, then (contrary to fact) the large box would (probably) be empty, circumventing the inference about E.
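
A sketch of the counterfactual reading the footnote appeals to, using the 0.1 prior from the setup (the function name is illustrative):

```python
def p_big_box_full_if(c_one_boxes, p_digit_is_zero=0.1):
    """Probability that the big box contains $1M under the counterfactual
    supposition that C outputs `c_one_boxes`.

    D would track that output, the box is full iff (D xor E), and E --
    the digit of pi -- just keeps its prior, whatever C outputs.
    """
    d = c_one_boxes
    return (1.0 - p_digit_is_zero) if d else p_digit_is_zero

# If-counterfactually C were to two-box, the big box would probably be empty
# (P(full) = 0.1), so the observed $1M no longer licenses any inference about E.
print(p_big_box_full_if(c_one_boxes=False))  # 0.1
print(p_big_box_full_if(c_one_boxes=True))   # 0.9
```
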
[appendix] In this post, you write:

...reasoning under logical uncertainty using limited computing power… is another huge unsolved open problem of AI. Human mathematicians had this whole elaborate way of believing that the Taniyama Conjecture implied Fermat’s Last Theorem at a time when they didn’t know whether the Taniyama Conjecture was true or false; and we seem to treat this sort of implication in a rather different way than ‘2=1 implies FLT’, even though the material implication is equally valid.

I don’t follow that. The sense of implication in which mathematicians established that TC implies FLT (before knowing if TC was true) is precisely material/logical implication: they showed ~(TC & ~FLT). And similarly, we can prove ~(3SAT-in-P & ~(P=NP)), etc. There’s no need here to construct (or magically conjure) a whole alternative inference system for reasoning under logical uncertainty.
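
(For concreteness, the equivalence relied on here is just the ordinary one for material implication; a trivial check in Python:)

```python
# Material implication: (TC => FLT) is the same as ~(TC & ~FLT).
for tc in (True, False):
    for flt in (True, False):
        assert ((not tc) or flt) == (not (tc and not flt))
```
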
So if the inference you speak of (when specifying what it means for D to “depend on” C) is the same kind as was used in establishing TC=>FLT, then it’s just material implication, which (as argued above) leads TDT to give wrong answers. Or if we substitute counterfactual entailment for material implication, then TDT becomes circular (question-begging). Or if you have in mind some third alternative, I’m afraid I don’t understand what it might be.

EDIT: The rules of the original transparent-boxes problem (as specified in Good and Real) are: the predictor conducts a simulation that tentatively presumes there will be $1M in the large box, and then puts $1M in the box (for real) iff that simulation showed one-boxing. Thus, if the large box turns out to be empty, there is no requirement for that to be predictive of the agent’s choice under those circumstances. The present variant is the same, except that (D xor E) determines the $1M, instead of just D. (Sorry, I should have said this to begin with, instead of assuming it as background knowledge.)
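
For concreteness, here is a sketch contrasting the two rules, with the same illustrative names as in the earlier sketches:

```python
def fill_big_box_original(simulate_agent):
    # Original Good and Real rule: run a simulation that tentatively presumes
    # $1M is in the large box, and put the $1M there (for real) iff the
    # simulated agent one-boxes.
    return simulate_agent(sees_million=True)

def fill_big_box_variant(simulate_agent, digit_of_pi_is_zero, i):
    # Present variant: (D xor E) determines the $1M, instead of just D.
    d = simulate_agent(sees_million=True)
    e = digit_of_pi_is_zero(i)
    return d != e
```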