Control systems win while being arational. Either explain this in terms of Bayescraft, or explain why there is no such explanation.
The control system a person uses to steer a car would fail if it were not calibrated by processing evidence in a manner idealized by Bayescraft. Knowing the correct amount to turn the wheel to correct a deviation of the perceived direction from the desired direction depends on one’s previous experience turning the wheel: the evidence of how the car reacts to a given amount of turn.
I often help to teach sailing classes, and I observe that inexperienced students have exactly the steering problems you would expect from someone unfamiliar with their boat. They are either too timid on the helm, allowing the boat to stay off course, or too aggressive, overshooting the desired course and then overcorrecting the other way. As they gain experience, that is, as they process the evidence of how the boat reacts to their use of the tiller, their control improves to the point that they can maintain their desired course. This is one reason we like students to start with smaller, more responsive boats, which give the evidence more quickly and obviously than larger boats that take time to react.
Control systems are useful, but they are useful because we use evidence to select the particular control system that wins.
Knowing the correct amount to turn the wheel to correct a deviation of the perceived direction from the desired direction depends on one’s previous experience turning the wheel: the evidence of how the car reacts to a given amount of turn.
That isn’t the case with the control systems in the OP. A thermostat doesn’t know how long it will need to stay on to reach the desired temperature from the current temperature. Even its designers didn’t necessarily know that. It just
(1) turns on;
(2) checks the temperature;
(3) stays on if it still hasn’t reached the desired temperature; else turns off.
Moreover, it doesn’t even learn from this experience. The next time it finds itself with exactly the same disparity between current and desired temperature, it will go through exactly the same procedure, without benefiting from its previous experience at all.
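The three-step procedure above can be written out as a toy sketch (the numbers and function are invented for illustration, not any real thermostat's firmware). The loop holds no model of the room and keeps nothing between runs:

```python
# Toy sketch of the thermostat's procedure (invented numbers, no real device):
# the loop holds no model of the room and remembers nothing between runs.
def run_thermostat(current_temp, desired_temp, heat_step=0.5):
    """Bang-bang heating: stay on until the setpoint is reached."""
    steps_on = 0
    temp = current_temp
    while temp < desired_temp:  # (3) stay on while below the setpoint
        temp += heat_step       # (1) heater on: the room warms a little
        steps_on += 1           # (2) check the temperature again
    return steps_on             # setpoint reached: turn off, remember nothing

# The same disparity produces exactly the same run every time: no learning.
assert run_thermostat(18.0, 20.0) == run_thermostat(18.0, 20.0)
```

Nothing in the function depends on any previous call, which is the point: the next identical disparity triggers the identical procedure.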
All that matters is that the system responds in a way that (1) approaches the desired state, and (2) won’t overshoot—i.e., won’t reach the desired state so quickly that the system can’t turn off the response in time. These seem to be what was missing with your sailing students.
That isn’t the case with the control systems in the OP.
From the OP
If this was only about cruise controls and room thermostats, it would just be a minor conundrum. But it is also about people, and all living organisms.
My point was that features of the thermostat that the OP attempted to generalize to control systems used by people do not actually generalize. A thermostat is a simple system for solving a simple problem (though even it takes some evidence: that a given device heats or cools the room). A more complex problem requires a more complex solution, and more evidence to calibrate it.
All that matters is that the system responds in a way that (1) approaches the desired state, and (2) won’t overshoot—i.e., won’t reach the desired state so quickly that the system can’t turn off the response in time. These seem to be what was missing with your sailing students.
While technically true at a certain level of abstraction, that is just not helpful. The reason why the students fail to approach the desired state, or overshoot it, is important. If I just told them “approach the desired course, but don’t overshoot”, it would not help. They already know they want to do that, but not how to do it. I need to tell them more precisely how to use the tiller: “pull the tiller towards you, a little more … now back to the center”, and get them to observe the effect this has on the boat. Only after going through this exercise a few times are they able to implement the control system themselves, and process higher-level instructions.
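One way to picture this calibration (a hypothetical toy model with invented numbers, not a claim about how people actually learn): treat the helm as a proportional controller whose gain, how hard to pull the tiller per degree of error, has to be found by trying values and observing the boat's response.

```python
# Hypothetical toy model (invented numbers, not a claim about cognition):
# the helm as a proportional controller whose gain must be calibrated.
def steer(heading, desired, gain, response=1.0, steps=5):
    """Simulate a few corrections; return the remaining heading error."""
    for _ in range(steps):
        error = desired - heading
        heading += response * gain * error  # boat reacts to the tiller pull
    return abs(desired - heading)

timid = steer(0.0, 90.0, 0.1)  # too little helm: still far off course
tuned = steer(0.0, 90.0, 0.9)  # calibrated by experience: holds course
wild = steer(0.0, 90.0, 2.1)   # too much helm: overshoots ever harder
```

Only the evidence of how the boat actually responds (the `response` factor, which the student does not know in advance) tells you which gain wins; “don’t overshoot” alone does not pick it out.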
(2) won’t overshoot—i.e., won’t reach the desired state so quickly that the system can’t turn off the response in time. These seem to be what was missing with your sailing students.
But that’s a result of the high responsiveness of the furnace vs. the low responsiveness of the boat. You couldn’t blindly let a thermostat control a boat or a missile; you would have to tune it. In some situations it might need to turn itself back off before its input (heading) has noticeably changed.
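A toy simulation of that tuning point (all the numbers here are invented for illustration): give the plant momentum, as a boat has, and the naive thermostat rule overshoots badly unless its threshold is tuned so it switches off well before the heading has fully changed.

```python
# Toy simulation (invented numbers): the thermostat rule on a plant with
# momentum overshoots unless the cutoff threshold is tuned for that plant.
def bang_bang(target, threshold, inertia=0.9, push=1.0, steps=200):
    """Push while below (target - threshold); return the peak overshoot."""
    heading, rate, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        on = heading < target - threshold              # thermostat-style rule
        rate = inertia * rate + (push if on else 0.0)  # momentum persists
        heading += rate
        peak = max(peak, heading - target)
    return peak

naive = bang_bang(90.0, threshold=0.0)   # switch off only at the target
tuned = bang_bang(90.0, threshold=55.0)  # switch off long before it
```

With these made-up constants the naive rule coasts far past the target on stored momentum, while the tuned threshold cuts the response while the heading has barely changed, which is exactly the “turn itself back off before its input has noticeably changed” situation.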
Consciousness Explained.