Actually, you couldn’t. At least, it wouldn’t work very well, not nearly as well as a system that simply measures the actual temperature and raises or lowers it as necessary.
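That thermostat case fits in a few lines of Python (a toy sketch with made-up numbers, not any real controller): the system never models the heat leak at all; it just measures the temperature and acts on the error.

```python
def thermostat(steps=300, setpoint=20.0, outside=5.0, leak=0.1, power=2.0):
    """Hold the room near `setpoint` purely by measuring its temperature."""
    temp = outside
    for _ in range(steps):
        temp += leak * (outside - temp)  # disturbance: heat leaking outdoors
        if temp < setpoint:              # feedback: compare the measurement
            temp += power                #   to the reference, act on the error
    return temp

print(thermostat())  # settles into a small oscillation around the setpoint
```

Note that the controller contains no term for `leak` or `outside`; it compensates for them anyway, which is exactly what a feedforward scheme cannot do without an accurate model of the disturbance.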
Ah, but if I deliberately created an artificial scenario designed to make FF control work, then FF control would look rockin’.
You know, like the programs you linked do, except that they pimp feedback instead ;-)
Yes, feedback control is usually better; my point was the excessive extrapolation from that program.
“Feedforward control loop” is pretty much a contradiction in terms. Look at anything described as feedforward control, and you’ll find that it’s wrapped inside a feedback loop, even if only a human operator keeping the feedforward system properly tuned.
Yes, very true, which reminds me: I saw a point in the demo1 program (link when I get a chance) on the site pjeby linked where they have you try to control a system using either a) your knowledge of the disturbance (feedforward), or b) your knowledge of the error (feedback), and you inevitably do better with b).
Here’s the thing though: it noted that you can get really good at a) if you practice it and get a good feel for how the disturbance relates to how you should move the mouse. BUT it didn’t use this excellent opportunity to point out that even then, such improvement is itself due to another feedback loop! Specifically, one that takes past performance as the feedback, and desired performance as the reference.
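Both halves of that point can be reproduced in a toy simulation (a sketch with made-up gains, not the actual demo1 code): a cursor is pushed by a random disturbance each step, a controller acting on the error beats one acting on a slightly miscalibrated model of the disturbance, and then an outer loop uses past performance to tune the feedforward gain.

```python
import random

def rms_error(controller, steps=500, seed=1):
    """Cursor c is pushed by disturbance d each step; controller adds output.
    Returns the root-mean-square distance from the target at 0."""
    rng = random.Random(seed)
    c = sq = 0.0
    for _ in range(steps):
        d = rng.uniform(-1, 1)
        c += d + controller(c, d)
        sq += c * c
    return (sq / steps) ** 0.5

feedback    = lambda c, d: -0.8 * c  # sees only the error
feedforward = lambda c, d: -0.7 * d  # sees only the disturbance, but with
                                     #   a slightly wrong gain (0.7 vs 1.0)

assert rms_error(feedback) < rms_error(feedforward)

# "Practice" as an outer feedback loop: the motion the model failed to
# cancel (past performance) is used to push the feedforward gain g toward
# the true disturbance gain of 1.0.
g = 0.7
rng = random.Random(0)
for _ in range(5000):
    d = rng.uniform(-1, 1)
    residual = d - g * d       # what the feedforward output didn't cancel
    g += 0.05 * d * residual   # LMS-style update: correlate residual with d

assert abs(g - 1.0) < 0.01     # practice converged the model to the truth
```

The second loop is the point of the passage: the feedforward gain only becomes accurate because a feedback loop on past performance is tuning it.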
> my point was the excessive extrapolation from that program.
PCT is not derived from the demos; the demos are derived from PCT.
> even then, such improvement is itself due to another feedback loop
So you see, wherever you look in the behaviour of living organisms, you find feedback control!
If that seems trivial to you, then it is probably because you are not an experimental psychologist, which is the area in most need of the insight that living organisms control their perception. You probably also do not work in AI, most of whose practitioners (of strong or weak AI) are using such things as reinforcement learning, planning, modelling, and so on. Robotics engineers—some of them—are about the only exception, and they have a better track record of making things that work.
BTW, I’m not touting PCT or anything else as the secret of real AI. Any better understanding of how real brains operate, whether it comes from PCT or anything else, will presumably facilitate making artificial ones, and I have used it as the basis of a fairly good (but only simulated) walking robot, but strong AI is not my mission.