One problem I’ve been chewing on is how to think about causality and abstraction in the presence of a feedback controller. An un-blackboxed current supply is one example—my understanding is that they’re typically implemented as a voltage supply with a feedback controller. Diving down into the low-level implementation details (charge, fields, etc) is certainly one way to get a valid causal picture. But I also think that abstract causal models can be “correct” in some substantive but not-as-yet-well-understood sense, even when they differ from the underlying physical causality.
An example with the same issues as a current supply, but which is (hopefully) conceptually a bit simpler, is a thermostat. At the physical level, there’s a feedback loop: the thermostat measures the temperature, compares it to the target temperature, and adjusts the fuel burn rate up/down accordingly. But at the abstract level, I turn a knob on the thermostat, and that causes the temperature to change. I think there is a meaningful sense in which that abstract model is correct. By contrast, an abstract model which says “the change in room temperature a few minutes from now causes me to turn the knob on the thermostat” would be incorrect, as would a causal model in which the two are unconnected.
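To make the two levels concrete, here’s a minimal sketch of the feedback loop (a toy proportional controller; all constants and dynamics are made up for illustration). At the low level, the burn rate is caused by the measured temperature, in a loop; yet intervening on the setpoint (turning the knob) reliably moves the steady-state temperature, which is exactly what the abstract model claims.

```python
def simulate(setpoint, steps=200, outside=15.0, gain=0.5, loss=0.1):
    """Toy thermostat: proportional feedback on the temperature error."""
    temp = outside
    for _ in range(steps):
        error = setpoint - temp                      # compare measurement to target
        burn_rate = max(0.0, gain * error)           # adjust fuel burn up/down
        temp += burn_rate - loss * (temp - outside)  # heating minus heat loss
    return temp
```

At the physical level the arrows run temperature → burn rate → temperature, round and round; at the abstract level, `setpoint → temperature` is the only arrow that survives, and it points the right way: changing `setpoint` changes the final temperature, while clamping the final temperature would do nothing to the knob.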
So… yes, the example given clearly does not match the underlying physical causality for a current supply. On the other hand, the same can be said of the voltage supply; the macroscopic measured behavior results from back-and-forth causal arrows between the EM fields and the charges. And that’s all before we get down to quantum mechanics, at which point physical causality gets even more complicated. Point is: all of these models are operating at a pretty high level of abstraction, compared to the underlying physical reality. But it still seems like some abstract causal models are “right” and others are “wrong”.
The OP is about what might underlie that intuition—what “right” and “wrong” mean for abstract causal models.
Yeah, that all seems fair/right/good, and I see what you’re getting at. I got nerdsniped by the current source example because it was familiar, and I felt that, as phrased, it got in the way of the core idea you were going for.
The person who properly introduced me to Pearl’s causality stuff had an example which seems good here and definitely erodes the notion of causality being uni-directional in time. It seems equivalent to the thermostat one, I think.
Suppose I’m a politician seeking election:
At time t0, I campaign on a platform which causes people to vote for me at time t1.
On one hand, my choice of campaign is seemingly the cause of people voting for me afterwards.
On the other hand, I chose the platform I did because of an event which would occur afterwards, i.e. the voting. If I didn’t have a model that people would vote for a given platform, I wouldn’t have chosen that platform. My model/prediction is of a real-world thing. So it kinda seems like the causality flows backwards in time: the voting causes the campaign choice, just as (in the thermostat example) the temperature change that follows the knob-turning would be said to cause the knob-turning.
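One way to see what’s going on: the “backwards” arrow is really a forward arrow routed through the politician’s predictive model. A hypothetical sketch (the platforms, vote shares, and function names are all made up for illustration):

```python
def voters_response(platform):
    """What the electorate actually does at t1."""
    return {"pro_infrastructure": 0.6, "pro_austerity": 0.4}[platform]

def politicians_model(platform):
    """The politician's internal prediction at t0 (here: perfectly accurate)."""
    return voters_response(platform)  # an accurate model of a real-world thing

def choose_platform(options):
    # The choice at t0 depends on the *predicted* votes, not on the future itself.
    return max(options, key=politicians_model)

platform = choose_platform(["pro_infrastructure", "pro_austerity"])
votes = voters_response(platform)  # t1: the thing that was predicted occurs
```

Every arrow here runs forward in time (model at t0 → choice at t0 → votes at t1), but because the model is accurate, the abstract summary “the voting caused the platform choice” makes correct predictions too.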
I like the framing that the questions can be posed both for voltage supply and current supply, that seems more on track to me.
This and the parent comment were quite helpful for getting a more nuanced sense of what you’re up to.
Positive reinforcement for noticing getting nerdsniped and mentioning it!
Good summary.