Without models

Followup to: What is control theory?

I mentioned in my post testing the water on this subject that control systems are not intuitive until one has learnt to understand them. The point I am going to talk about is one of those non-intuitive features of the subject. It is (a) basic to the very idea of a control system, and (b) something that almost everyone gets wrong when they first encounter control systems.

I’m going to address just this one point, not in order to ignore the rest, but because the discussion arising from my last post has shown that this is presently the most important thing.

There is a great temptation to think that to control a variable, that is, to keep it at a desired value in spite of disturbing influences, the controller must contain a model of the process to be controlled and use it to calculate what actions will have the desired effect. In addition, it must measure the disturbances, or better still predict them in advance, work out what effect they will have, and take those into account in deciding its actions.

In terms more familiar here, the temptation is to think that to bring about desired effects in the world, one must have a model of the relevant parts of the world and predict what actions will produce the desired results.

However, this is absolutely wrong. This is not a minor mistake or a small misunderstanding; it is the pons asinorum of the subject.

Note the word “must”. It is not disputed that one can use models and predictions, only that one must, that the task inherently requires it.

A control system can work without having any model of what it is controlling.

The designer will have a model. For the room thermostat, he must know that the heating should turn on when the room is too cold and off when it is too hot, rather than the other way around, and he must arrange that the source of heat is powerful enough. The controller he designs does not know that; it merely does that. (Compare the similar relationship between evolution and evolved organisms. How evolution works is not how the evolved organism works, nor is how a designer works how the designed system works.) For a cruise control, he must choose the parameters of the controller, taking into account the engine’s response to the accelerator pedal. The resulting control system, however, contains no representation of that. According to the HowStuffWorks article, they typically use nothing more complicated than proportional or PID control. The parameters are chosen by the designer according to his knowledge about the system; the parameters themselves are not something the controller knows about the system.
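(For concreteness, here is a sketch, mine rather than HowStuffWorks’, of the discrete PID rule such a device might implement; the gains and time step are made up. Notice that the designer chooses kp, ki, and kd using his knowledge of the plant, while the code itself stores nothing but an error integral and the previous error.)

    def make_pid(kp, ki, kd, dt):
        # The gains are chosen by the designer using his knowledge of the
        # plant; the controller itself stores only an error integral and
        # the previous error, no representation of the engine or the room.
        state = {"integral": 0.0, "prev_error": 0.0}

        def step(reference, perception):
            error = reference - perception
            state["integral"] += error * dt
            derivative = (error - state["prev_error"]) / dt
            state["prev_error"] = error
            return kp * error + ki * state["integral"] + kd * derivative

        return step

    cruise = make_pid(kp=1.5, ki=0.2, kd=0.0, dt=0.1)   # made-up gains
    throttle = cruise(reference=50.0, perception=48.0)  # uses only r and p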

It is possible to design control systems that do contain models, but it is not inherent to the task of control. This is what model-based controllers look like. (Thanks to Tom Talbot for that reference.) Pick up any book on model-based control to see more examples. There are signals within the control system that are designed to relate to each other in the same way as do corresponding properties of the world outside. That is what a model is. There is nothing even slightly resembling that in a thermostat or a cruise control. Nor is there in the knee-jerk tendon reflex. Whether there are models elsewhere in the human body is an empirical matter, to be decided by investigations such as those in the linked paper. Merely being entangled with the outside world is not what it is to be a model.
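(A toy illustration, my own rather than anything taken from that reference, of what “containing a model” amounts to: in the sketch below, p_hat and k_hat are signals inside the controller designed to correspond to the state and gain of the plant outside, and the action is computed from the model’s prediction. All the dynamics and numbers are invented.)

    dt = 0.1
    k_true, k_hat = 2.0, 1.8       # the internal copy of the gain is imperfect
    p, p_hat, r, d = 0.0, 0.0, 10.0, 0.5

    for _ in range(300):
        o = 0.1 * (r - p_hat) / (k_hat * dt)  # act on the model's prediction (damped)
        p_hat += k_hat * o * dt               # run the internal model forward...
        p += (k_true * o + d) * dt            # ...while the world runs itself
        p_hat += 0.5 * (p - p_hat)            # correct the model against perception

    print(round(p, 2), round(p_hat, 2))       # both settle near r (small offset from d)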

Within the Alien Space Bat Prison Cell, the thermostat is flicking a switch one way when the needle is to the left of the mark, and the other when it is to the right. The cruise control is turning a knob by an amount proportional to the distance between the needle and the mark. Neither of them knows why. Neither of them knows what is outside the cell. Neither of them cares whether what they are doing is working. They just do it, and they work.

A control system can work without having any knowledge of the external disturbances.

The thermostat does not know that the sun is shining in through the window. It only knows the current temperature. The cruise control does not sense the gradient of the road, nor the head wind. It senses the speed of the car. It may be tuned for some broad characteristics of the vehicle, but it does not itself know those characteristics, or sense when they change, such as when passengers get in and out.

Again, it is possible to design controllers that do sense at least some of the disturbances, but it is not inherent to the task of control.
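(A small simulation, with a vehicle model and numbers invented purely for illustration, makes this concrete: the controller below senses only the speed, and a gradient it knows nothing about appears partway through.)

    dt, speed, target, c = 0.1, 50.0, 50.0, 1.5   # made-up plant and gain

    for step in range(3000):
        t = step * dt
        hill = -3.0 if t > 100.0 else 0.0     # unsensed gradient from t = 100 s
        output = c * (target - speed)         # the whole controller
        speed += (2.0 * output + hill - 0.05 * speed) * dt  # the world, not the controller

    print(round(speed, 2))  # only about 1 mph below target, though the hill was never sensed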

A control system can work without making any predictions about anything.

The room thermostat does not know that the sun is shining, nor the cruise control the gradient. A fortiori, they do not predict that the sun will come out in a few minutes, nor that there is a hill in the distance.

It is possible to design controllers that make predictions, but it is not an inherent requirement of the task of control. The fact that a controller works does not constitute a prediction, by the controller, that it will work. I am belabouring this point, because the error has already been belaboured.

But (it was maintained) doesn’t the control system have an implicit model, implicit knowledge, and implicitly make predictions?

No. None of these things are true. The very concepts of implicit model, implicit knowledge, and implicit prediction are problematic. The phrases do have sensible meanings in some other contexts, but not here. An implicit model is one in which functional relationships are expressed not as explicit functions y = f(x), but as relations g(x,y) = k. Implicit knowledge is knowledge that one has but cannot express in words. Implicit prediction is an unarticulated belief about the effect of the actions one is taking.

In the present context, “implicit” is indistinguishable from “not”. That a system was made a certain way in order to interact with some other system in a certain way does not make the one a model of the other. As well say that a hammer is a model of a nail. The examples I am using, the thermostat and the cruise control, sense temperature and speed respectively, compare them with their set points, and apply a rule for determining their action. In the rule for a proportional controller:

output = constant × (reference - perception)

there is no model of anything. The gain constant is not a model. The perception, the reference, and the output are not models. The equation relating them is my model of the controller. It is not the controller’s model of anything: it is what the controller is.

The only knowledge these systems have is their perceptions and their references, for temperature or speed. They contain no “implicit knowledge”.

They do not “implicitly” make predictions. The designer can predict that they will work. The controllers themselves predict nothing. They do what they do whether it works or not. Sometimes, in fact, these systems do not work. The thermostat will fail to control if the outside temperature is above the set point. The cruise control will fail to control on a sufficiently steep downhill gradient. They will not notice that they are not working. They will not behave any differently as a result. They will just carry on doing o = c×(r-p), or whatever their output rule is.
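(One can watch this happen in a toy thermostat, again with invented constants: once the outside temperature passes the set point, the room goes wherever the weather takes it, while the rule goes on being applied, unchanged, on every step.)

    dt, room, outside, heating = 1.0, 15.0, 25.0, False  # outside hotter than the set point

    for _ in range(600):
        if room < 20.0:
            heating = True          # the entire controller: on below 20,
        elif room > 21.0:
            heating = False         # off above 21
        room += (0.02 * (outside - room) + (0.5 if heating else 0.0)) * dt

    print(round(room, 1))  # about 25: the room follows the weather; the rule never notices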

I don’t know if anyone tried my robot simulation applet that I linked to, but I’ve noticed that people I show it to readily anthropomorphise it. (BTW, if its interface appears scrambled, resize the browser window a little and it should sort itself out.) They see the robot apparently going around the side of a hill to get to a food particle and think it planned that, when in fact it knows absolutely nothing about the shape of the terrain ahead. They see it go to one food particle rather than another and think it made a decision, when in fact it does not know how many food particles there are or where. There is almost nothing inside the robot, compared to what people imagine: no planning, no adaptation, no prediction, no sensing of disturbances, and no model of anything but its own geometry. The 6-legged version contains 44 proportional controllers. The 44 gain constants are not a model; they merely work.

(A tangent: people look at other people and think they can see those other people’s purposes, thoughts, and feelings. Are their projections any more accurate than they are when they look at that robot? If you think that they are, how do you know?)

Now, I am not explaining control systems merely to explain control systems. The relevance to rationality is that they funnel reality into a narrow path in configuration space by entirely arational means, and thus constitute a proof by example that this is possible. This must raise the question: how much of the neural functioning of a living organism, human or lesser, operates by similar means? And how much of the functioning of an artificial organism must be designed to use these means? It appears inescapable that all of what a brain does consists of control systems. To what extent these may be model-based is an empirical question, not implied merely by the fact of control; likewise the extent to which these methods are useful in the design of artificial systems embodying the Ultimate Art.

Evolution operates statistically; I would be entirely unsurprised by Bayesian analyses of evolution. But how evolution works is not how the evolved organism works. That must be studied separately.

I may post something more on the relationship between Bayesian reasoning and control systems that neither perform it nor are designed by it, when I’ve digested the material that Steve_Rayhawk pointed to. For the moment, though, I’ll just remark that “Bayes!” is merely a mysterious answer, unless backed up by actual mathematical application to the specific case.

Exercises.

1. A room thermostat is set to turn the heating on at 20 degrees and off at 21. The ambient temperature outside is 10 degrees. You place a candle near the thermostat, whose effect is to raise its temperature 5 degrees relative to the body of the room. What will happen to (a) the temperature of the room and (b) the temperature of the thermostat?

2. A cruise control is set to maintain the speed at 50 mph. It is mechanically connected to the accelerator pedal: it moves it up and down, operating the throttle just as you would be doing if you were controlling the speed yourself. It is designed to disengage the moment you depress the brake. Suppose that switch fails: the cruise control continues to operate when you apply the brake. As you gently apply the brake, what will happen to (a) the accelerator pedal, and (b) the speed of the car? What will happen if you attempt to keep the speed down to 40 mph?

3. An employee is paid an hourly rate for however many hours he wishes to work. What will happen to the number of hours per week he works if the rate is increased?

4. A target is imposed on a doctor’s practice, of never having a waiting list for appointments more than four weeks long. What effect will this have on (a) how long a patient must wait to see the doctor, and (b) the length of the appointments book?

5. What relates questions 3 and 4 to the subject of this article?

6. Controller: o = c×(r-p). Environment: dp/dt = k×o + d. o, r, and p as above; c and k are constants; d is an arbitrary function of time (the disturbance). How fast and how accurately does this controller reject the disturbance and track the reference?
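(For those who would rather experiment before integrating, here is a minimal Euler-method sandbox for exercise 6; the constants are arbitrary, and d(t) is yours to change.)

    import math

    c, k, r = 2.0, 1.0, 1.0         # arbitrary constants and reference
    dt, p = 0.001, 0.0

    for step in range(20001):
        t = step * dt
        d = math.sin(t)             # substitute any disturbance you like
        o = c * (r - p)             # the controller
        p += (k * o + d) * dt       # Euler step of the environment: dp/dt = k*o + d
        if step % 5000 == 0:
            print(f"t={t:5.1f}  p={p:+.3f}  error={r - p:+.3f}")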