Or would you try to build one big graph that encompasses physical and logical facts alike, and then use Pearl’s decision procedure without further modification?
I definitely want one big graph if I can get it.
Wait, isn’t it decision-computation C—rather than simulation D—whose “effect” (in the sense of logical consequence) on E we’re concerned about here?
Sorry, yes, C.
Even with the node structure you suggest, we can still infer E from C and from the physical node that matches (D xor E)—unless the new rule prohibits relying on that physical node, which I guess is the idea. But what exactly is the prohibition? Are we forbidden to infer any mathematical fact from any physical indicator of that fact?
No, but whenever we see a physical fact F that depends on a decision C/D we’re still in the process of making plus Something Else (E), then we express our uncertainty in the form of a causal graph with directed arrows from C to D, D to F, and E to F. Thus when we compute a counterfactual on C, we find that F changes, but E does not.
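The graph structure proposed here can be sketched concretely. Below is a minimal toy (the node names C, D, E, F follow the discussion, but every mechanism, e.g. D simply copying C's value, is my own assumption for illustration): computing the counterfactual on C means overriding C and recomputing its descendants in topological order, while the non-descendant E keeps its observed value.

```python
# Toy causal graph: C -> D, D -> F, E -> F, with F = D xor E.
# Mechanisms are invented for illustration; the thread doesn't specify them.

def solve(graph, values):
    """Evaluate any node not already fixed in `values`.

    Relies on `graph` being listed in topological order (dicts preserve
    insertion order in Python 3.7+).
    """
    out = dict(values)
    for node, (parents, mech) in graph.items():
        if node not in out:
            out[node] = mech(*(out[p] for p in parents))
    return out

graph = {
    "D": (("C",), lambda c: c),            # D just copies the decision C
    "F": (("D", "E"), lambda d, e: d ^ e), # F = D xor E
}

observed = solve(graph, {"C": 1, "E": 0})         # the actual world
cf = solve(graph, {"C": 0, "E": observed["E"]})   # do(C = 0), E held fixed

assert observed["F"] == 1
assert cf["F"] == 0                   # F changes under the counterfactual...
assert cf["E"] == observed["E"]       # ...but E does not
```

The point of the sketch is just the last two assertions: intervening on C propagates forward along the arrows, and never sideways into E.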
> No, but whenever we see a physical fact F that depends on a decision C/D we’re still in the process of making plus Something Else (E),
Wait, F depends on decision computation C in what sense of “depends on”? It can’t quite be the originally defined sense (quoted from your email near the top of the OP), since that defines dependency between Platonic computations, not between a Platonic computation and a physical fact. Do you mean that D depends on C in the original sense, and F in turn depends on D (and on E) in a different sense?
> then we express our uncertainty in the form of a causal graph with directed arrows from C to D, D to F, and E to F.
Ok, but these arrows can’t be used to define the relevant sense of dependency above, since the relevant sense of dependency is what tells us we need to draw the arrows that way, if I understand correctly.
Sorry to keep being pedantic about the meaning of “depends”; I know you’re in thinking-out-loud mode here. But the theory gives wildly different answers depending (heh) on how that gets pinned down.
In my view, the chief distinction that needs to be drawn is between inferential dependence and causal dependence. If earthquakes cause burglar alarms to go off, then we can infer an earthquake from a burglar alarm or infer a burglar alarm from an earthquake. Logical reasoning doesn’t have the kind of directionality that causation does—or at least, classical logical reasoning does not—there’s no preferred form among ~A -> B, ~B -> A, and A \/ B.
The link between the Platonic decision C and the physical decision D might be different from the link between the physical decision D and the physical observation F, but I don’t know of anything in the current theory that calls for treating them differently. They’re just directional causal links. On the other hand, if C mathematically implies a decision C-2 somewhere else, that’s a logical implication that ought to symmetrically run backward from ~C-2 to ~C, except of course that we’re presumably controlling/evaluating C rather than C-2.
Thinking out loud here, the view is that your mathematical uncertainty ought to be in one place, and your physical uncertainty should be built on top of your mathematical uncertainty. The mathematical uncertainty is a logical graph with symmetric inferences; the physical uncertainty is a directed acyclic graph. To form controlling counterfactuals, you update the mathematical uncertainty, including any logical inferences that take place in mathland, and watch the update propagate downward into the physical uncertainty. When you’ve already observed facts that physically depend on mathematical decisions which you control but haven’t yet made, and hence whose values you don’t yet know, those observations stay in the causal, directed, acyclic world; when the counterfactual is evaluated, they get updated in the directional, Pearlian way, not the symmetric, logical-inferential way.
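One way to render this contrast concretely (a toy of my own construction, not anything specified in the thread): if the observation F = D xor E is kept in the directed physical layer, counterfactualizing on the decision recomputes F and leaves E alone; if F were instead treated as a symmetric logical fact, E could be inferred backward from F and the decision, and would wrongly co-vary with it.

```python
# Correct handling: F lives in the physical DAG and is recomputed top-down.
def counterfactual_physical(c, e_prior):
    """do(C = c): the math layer settles D, then propagates down the DAG."""
    d = c              # logical layer: the decision C settles D
    e = e_prior        # E-physical is not a descendant of C, so it is kept
    f = d ^ e          # the observed fact is recomputed, Pearl-style
    return {"D": d, "E": e, "F": f}

# Forbidden handling: treating the observation F as a symmetric logical
# fact lets us "infer" E = F xor D, so E tracks the decision, which is wrong.
def bad_logical_inference(c, observed_f):
    return observed_f ^ c

# E is stable under the causal treatment but flips under the logical one:
assert counterfactual_physical(0, e_prior=1)["E"] == \
       counterfactual_physical(1, e_prior=1)["E"]
assert bad_logical_inference(0, 1) != bad_logical_inference(1, 1)
```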
No, D was the Platonic simulator. That’s why the nature of the C->D dependency is crucial here.
Okay, then we have a logical link from C-platonic to D-platonic, and causal links descending from C-platonic to C-physical, E-platonic to E-physical, and D-platonic to D-physical to F-physical = D-physical xor E-physical. The idea being that when we counterfactualize on C-platonic, we update D-platonic and its descendants, but not E-platonic or its descendants.
I suppose that as written, this requires a rule: “for purposes of computing counterfactuals, keep in the causal graph, rather than the logical knowledge base, any mathematical knowledge gained by observing a fact descended from your decision-output or from any logical implications of your decision-output”. I could hope that this is a special case of something more elegant, but it would only be hope.
Ok. I think it would be very helpful to sketch, all in one place, what TDT2 (i.e., the envisioned avenue-2 version of TDT) looks like, taking care to pin down any needed sense of “dependency”. And similarly for TDT1, the avenue-1 version. (These suggestions may be premature, I realize.)
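The two-level node structure described above (Platonic nodes with causal links descending to their physical instances) can be put in one small sketch. This is a toy of my own, with every mechanism invented for illustration: counterfactualizing on C-platonic updates D-platonic and everything downstream of it, while E-platonic and E-physical stay fixed, per the rule of keeping decision-descended observations in the causal graph.

```python
# Logical link: C-platonic -> D-platonic.  Causal links: each Platonic node
# descends to its physical instance, and F-physical = D-physical xor E-physical.

def counterfactualize(c_platonic, e_platonic):
    d_platonic = c_platonic       # logical link: C settles D's output
    c_physical = c_platonic       # Platonic -> physical instantiation
    d_physical = d_platonic
    e_physical = e_platonic       # E is untouched by the intervention on C
    f_physical = d_physical ^ e_physical
    return {"C": c_physical, "D": d_physical,
            "E": e_physical, "F": f_physical}

a = counterfactualize(1, e_platonic=1)
b = counterfactualize(0, e_platonic=1)
assert a["F"] == 0 and b["F"] == 1    # F co-varies with the decision
assert a["E"] == b["E"] == 1          # E never does
```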