Gears Level & Policy Level

Inside view vs outside view has been a fairly useful intuition-pump for rationality. However, the dichotomy has a lot of shortcomings. We’ve just gotten a whole sequence about the failures of a cluster of practices called modest epistemology, which largely overlaps with what people call outside view. I’m not ready to stop championing what I think of as the outside view. However, I am ready for a name change. The term outside view doesn’t exactly have a clear definition; or, to the extent that it does have one, it’s “reference class forecasting”, which is not what I want to point at. Reference class forecasting has its uses, but many problems have been noted.

I propose gears level & policy level. But, before I discuss why these are appropriate replacements, let’s look at my motives for finding better terms.

Issues with Inside vs Outside

Problems with the concept of outside view as it currently exists:

  • Reference class forecasting tends to imply stopping at base-rate reasoning, rather than starting at base-rate reasoning. I want a concept of outside view which helps overcome base-rate neglect, but which more obviously connotes combining an outside view with an inside view (by analogy to combining a prior probability with a likelihood function to get a posterior probability; a worked example follows this list).

  • Reference class forecasting lends itself to reference class tennis, IE, a game of choosing the reference class which best makes your point for you. (That’s a link to the same article as the previous bullet point, since it originated the term, but this Stuart Armstrong article also discusses it. Paul Christiano discusses rules and etiquette of reference class tennis, because of course he does.) Reference class tennis is both a pretty bad conversation to have, which makes reference class forecasting a poor choice for productive discussion, and a potentially big source of bias if you do it to yourself. It’s closely related to the worst argument in the world.

  • Reference class forecasting is specified at the object level: you find a class fitting the prediction you want to make, and you check the statistics for things in that class to make your prediction. However, central examples of the usefulness of the outside view occur at the meta level. In examples of planning-fallacy correction, you don’t just note how close you usually get to the deadline before finishing something. You compare it to how close to the deadline you usually expect to get. Why would you do that? To correct your inside view! As I mentioned before, the type of the outside view should be such that it begs combination with the inside view, rather than standing on its own.

  • Outside view has the connotation of stepping back and ignoring some details. However, we’d like to be able to use all the information at our disposal—so long as we can use it in the right way. Taking base rates into account can look like ignoring information: walking by the proverbial hundred-dollar bill on the ground in Times Square, or preparing for a large flood despite there being none in living memory. However, while accounting for base rates does indeed tend to smooth out behavior and make it depend less on evidence, that’s because we’re working with more information, not less. A concept of outside view which connotes bringing in more information, rather than less, would be an improvement.
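As promised in the first bullet point, here is a minimal worked example of combining a base rate with an inside view, in the prior-and-likelihood sense. All the numbers are invented for illustration.

```python
# A minimal sketch of "starting at base-rate reasoning": the base rate is a
# prior, the inside view supplies a likelihood, and Bayes' rule combines them.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Combine a base rate (prior) with inside-view evidence via Bayes' rule."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

# Hypothetical numbers: 20% of projects like mine finish on time (base rate);
# my detailed plan looks this promising for 70% of on-time projects, but also
# for 40% of late ones (the inside view is evidence, just not decisive).
print(posterior(0.2, 0.7, 0.4))  # ~0.30: the base rate is updated, not ignored
```

The inside view moves the answer away from the 20% base rate without replacing it; stopping at either number alone is the failure mode the bullet points describe.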

The existing notion of inside view is also problematic:

  • The inside-view vs outside-view distinction does double duty as a descriptive dichotomy and a prescriptive technique. This is especially harmful in the case of inside view, which gets belittled as the naive thing you do before you learn to move to outside view. (We could similarly malign the outside view as what you have before you have a true inside-view understanding of a thing.) On the contrary, there are significant skills in forming a high-quality inside view. I primarily want to point at those, rather than the descriptive cluster.

The Gears Level and the Policy Level

Gears-level understanding is a term from CFAR, so you can’t blame me for it. Well, I’m endorsing it, so I suppose you can blame me a little. In any case, I like the term, and I think it fits my purposes. Some features of gears-level reasoning: the parts of the model constrain one another, so that you could rederive a missing piece from the rest; its predictions pay rent; and it tells you how things work, rather than merely what tends to happen.

The policy level is not a CFAR concept. It is similar to the CFAR concept of the strategic level, which I suspect is based on Nate Soares’ Staring Into Regrets. In any case, here are some things which point in the right direction:

  • Placing yourself as an instance of a class.

  • Accounting for knock-on effects, including consistency effects. Choosing an action really is a lot like setting your future policy.

  • What game theorists mean by policy: a function from observations to actions, which is (ideally) in equilibrium with the policies of all other agents. A good policy lets you coordinate successfully with yourself and with others. Choosing a policy illustrates the idea of choosing at the meta level: you aren’t selecting an action, but rather, a function from situations to actions. (There is a sketch of this in code after the list.)

  • Timeless decision theory / updateless decision theory / functional decision theory. Roughly, choosing a policy from behind a Rawlsian veil of ignorance. As I mentioned with accounting for base rates, it might seem from one perspective like this kind of reasoning is throwing information away; but actually, it is much more powerful. It allows you to set up arbitrary functions from information states to strategies. You are not actually throwing information away; you always have the option of responding to it as usual. You are gaining the option of ignoring it, or reacting to it in a different way, based on larger considerations.

  • Cognitive reductions, in Jessica Taylor’s sense (points five and six here). Taking the outside view should not entail giving up on having a gears-level model. The virtues of good models at the gears level are still virtues at the policy level. Rather, the policy level asks you to make a gears-level model of your own cognitive process. When you go to the policy level, you take your normal way of thinking and doing as an object. You think about the causes and effects of your normal ways of being.
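Here is the promised sketch of the game theorist’s sense of policy, and of the point that “ignoring information” is itself just one policy among many. The toy game and all names are invented for illustration.

```python
# A policy, in the game theorist's sense: a function from observations to
# actions, chosen as a whole rather than action-by-action.

from typing import Callable

Observation = str
Action = str
Policy = Callable[[Observation], Action]

def reactive(obs: Observation) -> Action:
    # Responds to each observation separately, the way action-level choice does.
    return "defect" if obs == "opponent_defected" else "cooperate"

def updateless(obs: Observation) -> Action:
    # Also a function of the observation -- it just maps every input to the
    # same output. Ignoring the information is one possible policy, not a
    # loss of the information.
    return "cooperate"

def evaluate(policy: Policy, situations: list[Observation]) -> list[Action]:
    # Choosing at the meta level: we assess the whole function, not one action.
    return [policy(obs) for obs in situations]

print(evaluate(reactive, ["opponent_cooperated", "opponent_defected"]))
print(evaluate(updateless, ["opponent_cooperated", "opponent_defected"]))
```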

Most of the existing ideas I can point to are about actions: game theory, decision theory, the planning fallacy. That’s probably the worst problem with the terminology choice. Policy-level thinking has a very instrumental character, because it is about process. However, at its core, it is epistemic. Gears-level thinking is the practice of good map-making. The output is a high-quality map. Policy-level thinking, on the other hand, is the theory of map-making. The output is a refined strategy for making maps.

The standard example with the planning fallacy illustrates this: although the goal is to improve planning, which sounds instrumental, the key is noticing the miscalibration of time estimates. The same trick works for any kind of mental miscalibration: if you know about it, you can adjust for it.

This is not just reference class forecasting, though. You don’t adjust your time estimates for projects upward and stop there. The fact that you normally underestimate how long things will take makes you think about your model. “Hm, that’s interesting. My plans almost never come out as stated, but I always believe in them when I’m making them.” You shouldn’t be satisfied with this state of affairs! You can slap on a correction factor and keep planning like you always have, but this is a sort of paradoxical mental state to maintain. If you do manage to keep the disparity between your past predictions and actual events actively in mind, I think it’s more natural to start considering which parts of your plans are most likely to go wrong.
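For concreteness, here is the correction-factor move from this paragraph as bare arithmetic, with an invented project history:

```python
# The "slap on a correction factor" move, as arithmetic. The historical
# data below is invented for illustration.

# Past projects: (my estimate in days, actual days taken).
history = [(5, 8), (10, 17), (3, 5), (20, 31)]

# Observed miscalibration: on average, how much longer things really took.
ratio = sum(actual / estimate for estimate, actual in history) / len(history)

raw_estimate = 7  # gut-level, inside-view estimate for the next project
corrected = raw_estimate * ratio
print(f"correction factor {ratio:.2f}: estimate {raw_estimate} -> {corrected:.1f} days")
```

A stable ratio like this is evidence about your planning process, and the natural next move is to ask which parts of the plan produce it; that is where the paragraph says the real work starts.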

If I had to spell it out in steps:

  1. Notice that a thing is happening. In particular, notice that a thing is happening to you, or that you’re doing a thing. This step is skipped in experiments on the planning fallacy; experimenters frame the situation. In some respects, though, it’s the most important part; naming the situation as a situation is what lets you jump outside of it. This is what lets you go off-script, or be anti-sphexish.

  2. Make a model of the input-output relations involved. Why did you say what you just said? Why did you think what you just thought? Why did you do what you just did? What are the typical effects of these thoughts, words, actions? This step is most similar to reference class forecasting. Figuring out the input-output relation is a combination of refining the reference class to be the most relevant one, and thinking of the base rates of outcomes in the reference class.

  3. Adjust your policy. Is there a systematic bias in what you’re currently doing? Is there a risk you weren’t accounting for? Is there an extra variable you could use to differentiate between two cases you were treating as the same? Chesterton-fencing your old strategy is important here. Be gentle with policy changes—you don’t want to make a bucket error or fall into a Hufflepuff trap. If you notice resistance in yourself, be sure to leave a line of retreat by visualizing possible worlds. (Yes, I think all those links are actually relevant. No, you don’t have to read them to get the point.) A toy sketch of steps 2 and 3 follows.
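Here is that toy sketch of steps 2 and 3, with every name and number invented: log what your current way of doing things produces, check for a systematic pattern, and only then make a gentle adjustment to the policy.

```python
# Step 2: a record of (situation, what I predicted, what actually happened).
log = [
    ("weekly report", "done by Friday", "done Sunday"),
    ("weekly report", "done by Friday", "done Saturday"),
    ("weekly report", "done by Friday", "done by Friday"),
]

def biased(log):
    # A crude model of the input-output relation: how often did the
    # prediction come out as stated?
    misses = sum(1 for _, predicted, actual in log if predicted != actual)
    return misses / len(log) > 0.5  # systematic, not a one-off

def old_policy(task):
    return "promise Friday"

def adjusted_policy(task):
    # Step 3: a gentle change, keeping the old behavior for everything
    # except the case the log flagged (Chesterton-fencing the rest).
    if task == "weekly report":
        return "promise Monday, aim for Friday"
    return old_policy(task)

policy = adjusted_policy if biased(log) else old_policy
print(policy("weekly report"))
```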

I don’t know quite what I can say here to convey the importance of this. There is a skill here; a very important skill, which can be done in a split second. It is the skill of going meta.

Gears-Level and Policy-Level Are Not Opposites

The second-most confusing thing about my proposed terms is probably that they are not opposites of each other. They’d be snappier if they were; “inside view vs outside view” had a nice sound to it. On the other hand, I don’t want the concepts to be opposed. I don’t want a dichotomy that serves as a descriptive clustering of ways of thinking; I want to point at skills of thinking. As I mentioned, the virtuous features of gears-level thinking are still present when thinking at the policy level; unlike in reference class forecasting, the ideal is still to get a good causal model of what’s going on (IE, a good causal model of what is producing systematic bias in your way of thinking).

The opposite of gears-level thinking is un-gears-like thinking: reasoning by analogy, loose verbal arguments, rules of thumb. Policy-level thinking will often be like this when you seek to make simple corrections for biases. But, remember, these are error models in the errors-vs-bugs dichotomy; real skill improvement relies on bug models (as studies in deliberate practice suggest).

The opposite of policy-level thinking? Stimulus-response; reinforcement learning; habit; scripted, sphexish behavior. This, too, has its place.

Still, like inside and outside view, gears and policy thinking are made to work together. Learning the principles of strong gears-level thinking helps you fill in the intricate structure of the universe. It allows you to get past social reasoning about who said what and what you were taught and what you’re supposed to think and believe, and instead, get at what’s true. Policy-level thinking, on the other hand, helps you to not get lost in the details. It provides the rudder which can keep you moving in the right direction. It’s better at cooperating with others, maintaining sanity before you figure out how it all adds up to normality, and optimizing your daily life.

Gears and policies both constitute moment-to-moment ways of looking at the world which can change the way you think. There’s no simple place to go to learn the skillsets behind each of them, but if you’ve been around LessWrong long enough, I suspect you know what I’m gesturing at.