While PID controllers are applicable to many control problems and often perform satisfactorily without any improvements or only coarse tuning, they can perform poorly in some applications and do not in general provide optimal control.
How do you figure a thermostat directly measures what it’s controlling? It controls the heat added or removed per unit time (typically just more/less/no change), and it measures the resulting temperature at a single point, typically with a minute or more of delay due to the dynamics of the system (air and heat take time to diffuse, even with a blower). Any time step sufficiently shorter than that delay is going to work the same. The current measurement depends on what the thermostat did tens of seconds, if not minutes, previously.
There are times the continuous/discrete distinction is very important, but this example isn’t one of them. As soon as you introduce a significant delay between cause and effect, the time-step model works (it may well be a dependence on multiple previous time steps, but not the current one).
I don’t think this is an unusual example: we have a small number of sensors, we get data on a delay, and we’re actually trying to control e.g. the temperature in the whole house, holding a set point, minimizing variation between rooms, minimizing variation across time, and doing it all with the smallest amount of control authority over the system (typically just on/off).
I believe “sufficiently shorter than the delay” is just going to be the Nyquist–Shannon sampling theorem: once you’re sampling at twice the frequency of the highest-frequency dynamic in the system, your control system has all the information from the sensor, and sampling faster will not tell you anything else.
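Since the claim leans on the sampling theorem, here is a quick numerical sketch (pure Python, illustrative frequencies, nothing from the thread): a signal whose highest component is 3 Hz, sampled at 8 Hz, can be recovered between the samples by Whittaker–Shannon (sinc) interpolation.

```python
import math

def sinc(x):
    # Normalized sinc, the interpolation kernel of the sampling theorem.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def signal(t):
    # Band-limited test signal: highest frequency component is 3 Hz.
    return math.sin(2 * math.pi * 1.0 * t) + 0.5 * math.sin(2 * math.pi * 3.0 * t)

fs = 8.0           # sample rate, comfortably above the Nyquist rate of 6 Hz
T = 1.0 / fs
N = 400            # samples on each side of t=0 (wide window to tame truncation)
samples = [signal(n * T) for n in range(-N, N + 1)]

def reconstruct(t):
    # Whittaker-Shannon interpolation from the discrete samples alone.
    return sum(s * sinc((t - n * T) / T) for n, s in zip(range(-N, N + 1), samples))

# The continuous signal is recovered at points between the samples.
for t in [0.1234, 0.4567, 0.8101]:
    assert abs(reconstruct(t) - signal(t)) < 1e-2
```

Sampling below 6 Hz here would alias the 3 Hz component and the check would fail; above it, extra samples add nothing, which is the point being made.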
This is partly a terminology issue. By “controlling a variable” I mean “taking actions as necessary to keep that variable at some reference level.” So I say that the thermostat is controlling the temperature of the room (or if you want to split hairs, the temperature of the temperature sensor—suitably siting that sensor is an important part of a practical system). In the same sense, the body controls its core temperature, its blood oxygenation level, its posture[1], and many other things, and its actions to obtain those ends include sweating, breathing, changing muscle tensions, etc.
By the “output” or “action” of a control system I mean the actions it takes to keep the controlled variable at the reference. For the thermostat, this is turning the heat source on and off. It is not “controlling” (in the sense I defined) the rate of adding heat. The thermostat does not know how much heat is being delivered, and does not need to.
The resulting behaviour of the system is to keep the temperature of the room between two closely spaced levels: the temperature at which it turns the heat on, and the slightly higher temperature at which it turns the heat off. The rate at which the temperature goes up or down does not matter, provided the heat source is powerful enough to replenish all the energy leaking out of the walls, however cold it gets outside. If the heat source were replaced by one delivering twice as much power, the performance of the thermostat would be unchanged, except for being able to cope with more extreme cold weather.
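A toy simulation of that hysteresis behaviour (made-up constants, pure Python, not a model of any real furnace): after settling, the temperature bounces between the two thresholds, and doubling the heater power leaves the band essentially unchanged.

```python
# Bang-bang thermostat with hysteresis on a crude one-room thermal model.
def simulate(heater_power, steps=20000, dt=1.0):
    T_out, leak = 0.0, 0.001          # outside temp (C), wall leak rate per second
    T, heater_on = 15.0, False
    on_below, off_above = 19.5, 20.5  # hysteresis band
    trace = []
    for _ in range(steps):
        if T < on_below:
            heater_on = True
        elif T > off_above:
            heater_on = False
        dT = -leak * (T - T_out) + (heater_power if heater_on else 0.0)
        T += dT * dt
        trace.append(T)
    return trace

# After settling, the room stays near the hysteresis band for either heater power;
# the stronger heater only changes how fast the temperature climbs.
for power in (0.05, 0.10):
    settled = simulate(power)[5000:]
    assert all(19.0 < T < 21.2 for T in settled)
```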
The only delays in the thermostat itself are the time it takes for a mechanical switch to operate (milliseconds) and the time it takes for heat production to reach the sensor (minutes). These are so much faster than the changes in temperature due to the weather outside that it is most simply treated as operating in continuous time. There would be no practical benefit from sampling the temperature discretely and seeing how slow a sample rate you can get away with.
[1] How do we manage not to fall over? To walk and run? These are deep questions.
It sounded previously like you were making the strong claim that this setup can’t be applied to a closed control loop at all, even in e.g. the common (approximately universal?) case where there is a delay between the regulator’s action and its being able to measure that action’s effect. That’s mostly what I was responding to; the chaining that Alfred suggested in the sibling comment seems sensible enough to me.
It occurs to me that the household thermostat example is so non-demanding as to be a poor intuition pump. I implicitly made the jump to thinking about a more demanding version of it without spelling that out. It’s always going to be a little silly trying to optimize an example that’s already intuitively good enough. Imagine for the sake of argument an apparatus that needs tighter control, such that there’s actually pressure to optimize beyond the simplest control algorithm.
Your examples of control systems all seem fine and accurate. I think we agree the tricky bit is picking the most sensible frame for mapping the real system to the diagram (assuming that’s roughly what you mean by terminology).
It seems like even with the improvements John Wentworth suggests there’s still some ambiguity in how to apply the result to a case where the regulator makes a time series of decisions, and you’re suggesting there’s some reason we can’t, or wouldn’t want to, use discrete time steps and chain/repeat the diagram.
At a little more length, I’m picturing the unrolling such that the current state is the sensor’s measurement time series through the present, of which the regulator is certain. It’s merely uncertain about how its action—what fraction of the next interval to run the heat—will affect the measurement at future times. It’s probably easiest if we draw the diagram such that the time step is the delay between action and measured effect; the regulator then sees at T3 the result of the action it took at T1.
That seems pretty clearly to me to match the pattern this theorem requires, while still having a clear place to plug in whatever predictive model the regulator has. I bring up the sampling theorem as that is the bridge between the discrete samples we have and the continuous functions and differential equations you elsewhere say you want to use. Or stated a little more broadly, that theorem says we can freely move between continuous and discrete representations as needed, provided we sample frequently enough and the functions are well enough behaved to be amenable to calculus in the first place.
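As a sketch of that unrolled picture (hypothetical constants, a deliberately crude proportional rule, not anyone’s proposed algorithm): the regulator acts at every step, but each action only shows up in the measurement a couple of steps later, and the chained loop still settles. Incidentally, a proportional-only rule leaves a steady-state offset; this toy settles at 18.0 rather than the 20.0 setpoint, which doesn’t matter for the unrolling point.

```python
# Unrolled discrete-time loop: the regulator sees measurements up to now,
# but its action only becomes measurable DELAY steps later.
DELAY = 2

def plant(temp, duty):
    # Toy thermal plant: leak toward 0 C plus heat proportional to duty cycle.
    return temp + (-0.1 * temp + 2.0 * duty)

def regulator(measurements, setpoint=20.0):
    # Proportional rule on the latest (delayed-effect) measurement, clamped to [0, 1].
    error = setpoint - measurements[-1]
    return min(1.0, max(0.0, 0.5 + 0.2 * error))

temp, pending = 5.0, [0.0] * DELAY      # actions in flight, not yet measurable
measurements = [temp]
for t in range(200):
    action = regulator(measurements)
    pending.append(action)
    temp = plant(temp, pending.pop(0))  # the action from DELAY steps ago lands now
    measurements.append(temp)

# The chained diagram converges to the P-controller's fixed point (18.0, not 20.0,
# because proportional-only control has a steady-state offset).
assert abs(measurements[-1] - 18.0) < 0.2
```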
(I have been busy, hence the delay.)
I am making that claim. Closed loops have circular causal links between the (time-varying) variables. The SZR diagram that I originally objected to is acyclic, therefore it does not apply to closed loops.
Loop delays are beside the point. Sampling on that time scale is not required and may just degrade performance.
You are assuming that tighter control demands the sort of more complicated algorithms you are imagining: ones that predict how much heat to inject based on a model of the whole environment, and so on.
Let’s look outward at the real world. All you need for precision temperature control is to replace the bang-bang control with a PID controller and a scheme for automatically tuning the PID parameters, and there you are. There is nothing in the manual for the ThermoClamp device to suggest a scheme of your suggested sort. In particular, like the room thermostat, the only thing it senses is the actual temperature. Nothing else. For this it uses a thermocouple, which is a continuous-time device, not sampled. There is also no sign of any model. I don’t know how this particular device tunes its PID parameters (probably a trade secret), but googling for how to auto-tune a PID has not turned up anything suggesting a model, only injecting test signals and adjusting the parameters to optimise observed performance—observed solely through measuring the controlled variable.
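For concreteness, a bare-bones PID loop on a toy first-order plant (hand-picked gains and made-up plant constants; nothing here is the ThermoClamp’s actual scheme, and the auto-tuning is omitted): the integral term is what removes the steady-state offset that a proportional-only controller leaves.

```python
# Minimal discrete PID controller driving a toy first-order thermal plant.
def run_pid(kp, ki, kd, setpoint=20.0, steps=500):
    temp, integral, prev_error = 5.0, 0.0, None
    for _ in range(steps):
        error = setpoint - temp
        integral += error
        derivative = 0.0 if prev_error is None else error - prev_error
        prev_error = error
        u = kp * error + ki * integral + kd * derivative
        u = min(1.0, max(0.0, u))        # actuator limits: duty cycle in [0, 1]
        temp += -0.1 * temp + 4.0 * u    # leak toward 0 C plus injected heat
    return temp

# The integral term drives the steady-state error to zero.
final = run_pid(kp=0.2, ki=0.01, kd=0.1)
assert abs(final - 20.0) < 0.1
```

Note that, as in the thread’s point, the only input to the controller is the measured temperature itself; there is no model of the environment anywhere in the loop.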
The early automatic pilots were analogue devices operating in continuous time.
Everything is digital these days, but a modern automatic pilot is still sampling the sensors many times a second, and I’m sure that’s also true of the digital parts of the ThermoClamp. The time step is well below the characteristic timescales of the system being controlled. It has to be.
People talk about eliminating the cycles by unrolling. I believe this does not work. In causal graphs as generally understood, each of the variables is time-varying. In the unrolled version, each of the nodes represents the value of a single variable at a single point in time, so it’s a different sort of thing. Unrolling makes the number of nodes indefinitely large, so how are you going to mathematically talk about it? Only by taking advantage of its repetitive nature, and then you’re back to dealing with cyclic causal graphs while pretending you aren’t. “Tell me you’re using cyclic causal graphs without telling me you’re using cyclic causal graphs.”
No worries, likewise.
Most centrally, I think we’re seeing fundamentally different things in the causal graph. Or more to the point, I haven’t the slightest idea how one is supposed to do any useful reasoning with time-varying nodes without somehow expanding the graph to consider how one node’s function and/or time series affects its downstream nodes (or, put another way, specifying what temporal relation the arrow represents). It also seems fairly inescapable to me that however you consider that relation, an actual causal cycle where A causes B causes C causes A at the same instant looks very different from one where they indirectly affect each other at some later time, to the point of needing different tools to analyze the two cases. The latter looks very much like the sort of thing solved with recursion or update loops in programs all the time, or with differential equations in the continuous case. The former looks like the sort of thing where you need a solver to search for a valid solution.
It’s fairly obvious why cycles of the first kind would need different treatment: the graph would place constraints on valid solutions but not tell you how to find them. I’m not seeing how the second case is cyclic in the same sense, or why you couldn’t just use induction arguments to extend to infinity.
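A minimal illustration of the two kinds of cycle (made-up equations, pure Python): the delayed cycle is a recurrence you just iterate forward in time, while the instantaneous cycle is a pair of simultaneous constraints that must hold at once. For this contractive toy the same iteration happens to work as a crude solver, but the workflows differ in kind: one unrolls through time, the other searches for a consistent assignment.

```python
# Case 2 (delayed cycle): A and B affect each other one step later; iterate.
def iterate_delayed(steps=100):
    a, b = 0.0, 0.0
    for _ in range(steps):
        a, b = 0.5 * b + 1.0, 0.5 * a + 1.0   # a[t+1] = f(b[t]), b[t+1] = g(a[t])
    return a, b

# Case 1 (instantaneous cycle): a = 0.5*b + 1 AND b = 0.5*a + 1 must hold
# simultaneously. Substituting gives a = 0.5*(0.5*a + 1) + 1, a fixed-point
# problem; here plain fixed-point iteration serves as a crude solver, but in
# general this is root-finding, not time evolution.
def solve_instant():
    a = 0.0
    for _ in range(100):
        a = 0.5 * (0.5 * a + 1.0) + 1.0
    return a, 0.5 * a + 1.0

a1, b1 = iterate_delayed()
a2, b2 = solve_instant()
# Both land on a = b = 2, the unique assignment consistent with the constraints.
assert abs(a1 - 2.0) < 1e-6 and abs(b1 - 2.0) < 1e-6
assert abs(a2 - 2.0) < 1e-6 and abs(b2 - 2.0) < 1e-6
```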
AFAICT you and I aren’t disagreeing on anything about real control systems. It’s difficult to find a non-contrived example because so many control systems either aren’t that demanding or have a human in the loop. But this theorem is about optimal control systems, optimal in the formal computer science sense, so the fact that neither of us can come up with an example that isn’t solved by a PID control loop or similar is somewhat beside the point.