Gears in understanding

Some (literal, physical) roadmaps are more useful than others. Sometimes this is because of how well the map corresponds to the territory, but sometimes it's because of features of the map itself, irrespective of the territory. E.g., maybe the lines are fat and smudged such that you can't tell how far a road is from a river, or maybe it's unclear which road a name is trying to indicate.

In the same way, I want to point at a property of models that isn't about what they're modeling. It interacts with the clarity of what they're modeling, but only in the same way that smudged lines in a roadmap interact with the clarity of the roadmap.

This property is how deterministically interconnected the variables of the model are. There are a few tests I know of to see to what extent a model has this property, though I don't know if this list is exhaustive and would be a little surprised if it were:

  1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?

  2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?

  3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?

I think this is a really important idea that ties together a lot of different topics that appear here on Less Wrong. It also acts as a prerequisite frame for a bunch of ideas and tools that I'll want to talk about later.

I'll start by giving a bunch of examples. At the end I'll summarize and gesture toward where this is going as I see it.


Example: Gears in a box

Let's look at this collection of gears in an opaque box:

(Drawing courtesy of my colleague, Duncan Sabien.)

If we turn the lefthand gear counterclockwise, our model of the gears on the inside permits the righthand gear to turn either way. The model we're able to build for this system of gears does poorly on all three tests I named earlier:

  • The model barely pays rent. If you speculate that the righthand gear turns one way and you discover it turns the other way, you can't really infer very much. All you can meaningfully infer is that if the system of gears is pretty simple (e.g., nothing that makes the righthand gear alternate as the lefthand gear rotates counterclockwise), then the direction that the righthand gear turns determines whether the total number of gears is even or odd (there's a quick sketch of this parity point after the list).

  • The gear on the righthand side could just as well go either way. Your expectations aren't constrained.

  • Right now you don't know which way the righthand gear turns, and you can't derive it.
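(On that parity point: here's a minimal sketch in Python, assuming the simplest case of a single chain of meshed gears with nothing fancy like vertical tracks. The function name is mine, purely for illustration.)

    def chain_end_direction(first_direction: str, num_gears: int) -> str:
        # Each meshed pair of gears reverses direction, so in a simple chain the
        # last gear matches the first exactly when the number of gears is odd.
        if num_gears % 2 == 1:
            return first_direction
        return "clockwise" if first_direction == "counterclockwise" else "counterclockwise"

    # A lefthand gear turned counterclockwise drives the righthand gear
    # counterclockwise in an odd-length chain and clockwise in an even-length one.
    assert chain_end_direction("counterclockwise", 3) == "counterclockwise"
    assert chain_end_direction("counterclockwise", 4) == "clockwise"

So if the righthand gear turns clockwise, a simple-chain model forces the gear count to be even; anything else means the box isn't a simple chain.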

Suppose that Joe peeks inside the box and tells you "Oh, the righthand gear will rotate clockwise." You imagine that Joe is more likely to say this if the righthand gear turns clockwise than if it doesn't, so this seems like relevant evidence that the righthand gear turns clockwise. This gets stronger the more people like Joe who look in the box and report the same thing.

Now let's peek inside the box:

…and now we have to wonder what's up with Joe.

The second test stands out for me especially strongly. There is no way that the obvious model about what's going on here could be right and Joe also be right. And it doesn't matter how many people agree with Joe in terms of the logic of this statement: Either all of them are wrong, or your model is wrong. This logic is immune to social pressure. It means that there's a chance that you can accumulate evidence about how well your map matches the territory here, and if that converges on your map being basically correct, then you are on firm epistemic footing to disregard the opinion of lots of other people. Gathering evidence about the map/territory correspondence has higher leverage for seeing the truth than does gathering evidence about what others think.

The first test shows something interesting too. Suppose the gear on the right really does move clockwise when you move the left gear counterclockwise. What does that imply? Well, it means your initial model (if it's what I imagine it is) is wrong — but there's a limited space of possibilities about ways in which it can be wrong. For instance, maybe the second gear from the left is on a vertical track and moves upward instead of rotating. By comparison, something like "Gears work in mysterious ways" just won't cut it.

If we combine the two, we end up staring at Joe and noticing that we can be a lot more precise than just "Joe is wrong". We know that either Joe's model of the gears is wrong (e.g., he thinks some gear is on a vertical track), Joe's model of the gears is vague and isn't constrained the way ours is (e.g., he was just counting gears and made a mistake), or Joe is lying. The first two give us testable predictions: If his model is wrong, then it's wrong in some specific way; and if it's vague, then there should be some place where it does poorly on the three tests of model interconnectedness. If we start zooming in on these two possibilities while talking to Joe and it turns out that neither of those is true, then it becomes a lot more obvious that Joe is just bullshitting (or we failed to think of a fourth option).

Because of this example, in CFAR we talk about how "Gears-like" or how "made of Gears" a model is. (I capitalize "Gears" to emphasize that it's an analogy.) When we notice an interconnection, we talk about "finding Gears". I'll use this language going forward.


Example: Arithmetic

If you add 25+18 using the standard addition algorithm, you have to carry a 1, usually by marking that above the 2 in 25.

Fun fact: it's possible to get that right without having any clue what the 1 represents or why you write it there.

This is actually a pretty major issue in math education. There's an in-practice tension between (a) memorizing and drilling algorithms that let you compute answers quickly, and (b) "really understanding" why those algorithms work.

Unfortunately, there's a kind of philosophical debate that often happens in education when people talk about what "understand" means, and I find it pretty annoying. It goes something like this:

  • Person A: "The student said they carry the 1 because that's what their teacher told them to do. So they don't really understand the addition algorithm."

  • Person B: "What do you mean by 'really understand'? What's wrong with the justification of 'A person who knows this subject really well says this works, and I believe them'?"

  • A: "But that reason isn't about the mathematics. Their justification isn't mathematical. It's social."

  • B: "Mathematical justification is social. The style of proof that topologists use wouldn't be accepted by analysts. What constitutes a 'proof' or a 'justification' in math is socially agreed upon."

  • A: "Oh, come on. We can't just agree that e=3 and make that true. Sure, maybe the way we talk about math is socially constructed, but we're talking about something real."

  • B: "I'm not sure that's true. But even if it were, how could you know whether you're talking about that 'something real' as opposed to one of the social constructs we're using to share perspectives about it?"

Et cetera.

(I would love to see debates like this happen in a milieu of mutual truth-seeking. Unfortunately, that's not what academia rewards, so it probably isn't going to happen there.)

I think Person A is trying to gesture at the claim that the student's model of the addition algorithm isn't made of Gears (and implicitly that it'd be better if it were). I think this clarifies both what A is saying and why it matters. In terms of the tests:

  • The addition algorithm totally pays rent. E.g., if you count out 25 tokens and another 18 tokens and you then count the total number of tokens you get, that number should correspond to what the algorithm outputs. If it turned out that the student does the algorithm but the answer doesn't match the token count, then the student can only conclude that the addition algorithm isn't useful for the tokens. There isn't a lot else they can deduce. (By way of contrast, if I noticed this, then I'd conclude that either I'd made a mistake in running the algorithm or I'd made a mistake in counting, and I'd be very confident that at least one of those two things is true.)

  • The student could probably readily imagine a world in which you aren't supposed to carry the 1 but the algorithm still works. This means their model isn't very constrained, at least as we're imagining it. (Whereas trying to imagine that carrying is the wrong thing to do for getting the right answer makes my head explode.)

  • If the student forgot what their teacher said about what to do when a column adds up to more than nine, we imagine they wouldn't spontaneously notice the need to carry the 1. (If I forgot about carrying, though, I'd get confused about what to do with this extra ten and would come up with something mathematically equivalent to "carrying the 1".)
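To make that "extra ten" concrete, here is a minimal sketch in Python of the column-by-column procedure the student is running (the function name and layout are mine, just for illustration). The carried 1 is literally one unit of the next place value:

    def add_by_columns(a: int, b: int) -> int:
        # Standard right-to-left addition: for 25 + 18 the ones column gives
        # 5 + 8 = 13, so you write the 3 and carry the 1, which is one ten.
        total, carry, place = 0, 0, 1
        while a > 0 or b > 0 or carry:
            column_sum = (a % 10) + (b % 10) + carry
            total += (column_sum % 10) * place  # the digit you write in this column
            carry = column_sum // 10            # the "1" you carry: one unit of the next place
            a, b, place = a // 10, b // 10, place * 10
        return total

    assert add_by_columns(25, 18) == 43

If you deleted the carry line and asked where the extra ten has to go, you could rederive it, which is exactly the rederivation test above.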

I find this to be a useful tabooing of the word "understand" in this context.


Example: My mother

My mother really likes learning about history.

Right now, this is probably an unattached random fact in your mind. Maybe a month down the road I could ask you "How does my mother feel about learning history?" and you could try to remember the answer, but you could just as well believe the world works another way.

But for me, that's not true at all. If I forgot her feelings about learning about history, I could make a pretty educated guess based on my overall sense of her. I wouldn't be Earth-shatteringly shocked to learn that she doesn't like reading about history, but I'd be really confused, and it'd throw into question my sense of why she likes working with herbs and why she likes hanging out with her family. It would make me think that I hadn't quite understood what kind of person my mother is.

As you might have noticed, this is an application of tests 1 and 3. In particular, my model of my mom isn't made of Gears to the point that I could tell you what she's feeling right now or whether she defaults to thinking in terms of partitive division or quotative division. But the tests illustrate that my model of my mother is more Gears-like than your model of her.

Part of the point I'm making with this example is that "Gears-ness" isn't a binary property of models. It's more like a spectrum, from "random smattering of unconnected facts" to "clear axiomatic system with well-defined logical deductions". (Or at least that's how I'm currently imagining the spectrum!)

Also, I speculate that this is part of what we mean when we talk about "getting to know" someone: it involves increasing the Gears-ness of our model of them. It's not about just getting some isolated facts about where they work and how many kids they have and what they like doing for hobbies. It's about fleshing out an ability to be surprised if you were to learn some new fact about them that didn't fit your model of them.

(There's also an empirical question in getting to know someone of how well your Gears-ish model actually matches that person, but that's about the map/territory correspondence. I want to be careful to keep talking about properties of maps here.)

This lightly Gears-ish model of people is what I think lets you deduce what Mr. Rogers probably would have thought about, say, people mistreating cats on Halloween even though I don't know if he ever talked about it. As per test #2, you'd probably be pretty shocked and confused if you were given compelling evidence that he had joined in, and I imagine it'd take a lot of evidence. And then you'd have to update a lot about how you view Mr. Rogers (as per test #1). I think a lot of people had this kind of "Who even is this person?" experience when lots of criminal charges came out against Bill Cosby.


Example: Gyroscopes

Most people feel visceral surprise when they watch how gyroscopes behave. Even if they logically know the suspended gyroscope will rotate instead of falling, they usually feel like it's bizarre somehow. Even people who get gyroscopes' behavior into their intuitions probably had to train it for a while first and found them surprising and counterintuitive.

Somehow, for most people, it seems coherent to imagine a world in which physics works exactly the same way except that when you suspend one end of a gyroscope, it falls like a non-spinning object would and just keeps spinning.

If this is true of you, that means your model of the physics around gyroscopes does poorly on test #2 of how Gears-like it is.

The reason gyroscopes do what they do is actually something you can derive from Newton's Laws of Motion. Like the gears example, you can't actually have a coherent model of rotation that allows both (a) Newton's Laws and (b) a gyroscope that falls rather than rotating when suspended on one end in a gravitational field. So if both (a) and (b) seem plausible to you, then your model of rotation isn't coherent. It's missing Gears.
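(For the curious, here is a minimal sketch of that derivation; it's the standard rigid-body argument rather than anything specific to this post, and the symbols are just the usual ones. A fast-spinning gyroscope has a large angular momentum L along its spin axis, and gravity acting a distance r from the support point applies a torque:)

    \[
      \vec{\tau} \;=\; \vec{r} \times m\vec{g} \;=\; \frac{d\vec{L}}{dt},
      \qquad
      \Omega_{\text{precession}} \;\approx\; \frac{m g r}{I \omega}.
    \]

Because that torque stays perpendicular to L, it changes L's direction rather than its magnitude, so the spin axis sweeps around horizontally at roughly that rate instead of tipping over.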

This is one of the beautiful (to me) things about physics: everything is made of Gears. Physics is (I think) the system of Gears you get when you stare at any physical object's behavior and ask "What makes you do that?" in a Gears-seeking kind of way. It's a different level of abstraction than the "Gears of people" thing, but we kind of expect that eventually, at least in theory, a sufficient extension of physics will connect the Gears of mechanics to the Gears of what makes a romantic relationship last while feeling good to be in.

I want to rush to clarify that I'm not saying that the world is made of Gears. That's a type error. I'm suggesting that the property of Gears-ness in models is tracking a true thing about the world, which is why making models more Gears-like can be so powerful.


Gears-ness is not the same as goodness

I want to emphasize that, while I think that more Gears are better all else being equal, there are other properties of models that I think are worthwhile.

The obvious one is accuracy. I've been intentionally sidestepping that property throughout most of this post. This is where the rationalist virtue of empiricism becomes critical, and I've basically ignored (but hopefully never defied!) empiricism here.

Another is generativity. Does the model inspire a way of experiencing things that is useful (whatever "useful" means)? For instance, many beliefs in God or the divine or similar are too abstract to pay rent, but some people still find them helpful for reframing how they emotionally experience beauty, meaning, and other people. I know of a few ex-atheists who say that having become Christian causes them to be nicer people and has made their relationships better. I think there's reason for epistemic fear here to the extent that those religious frameworks sneak in claims about how the world actually works — but if you're epistemically careful, it seems possibly worthwhile to explore how to tap the power of faith without taking epistemic damage.

I also think that even if you're trying to lean on the Gears-like power of a model, lacking Gears doesn't mean that the activity is worthless. In fact, I think this is all we can do most of the time, because most of our models don't connect all the way down to physics. E.g., I'm thinking of getting my mother a particular book as a gift because I think she'll really like it, but I can also come up with a within-my-model-of-her story about why she might not really care about it. I don't think the fact that my model of her is weakly constrained means that (a) I shouldn't use the model or that (b) it's not worthwhile to explore the "why" behind both my being right and my being wrong. (I think of it as a bit of pre-computation: whichever way the world goes, my model becomes a little more "crisp", which is to say, more Gears-like. It just so happens that I know in what way beforehand.)

I mention this because sometimes in rationalist contexts, I've felt a pressure to not talk about models that are missing Gears. I don't like that. I think that Gears-ness is a really super important thing to track, and I think there's something epistemically dangerous about failing to notice a lack of Gears. Clearly noting, at least in your own mind, where there are and aren't Gears seems really good to me. But I think there are other capacities that are also important when we're trying to get epistemology right.

Gears seem valuable to me for a reason. I'd like us to keep that reason in mind rather than getting too fixated on Gears-ness.


Going forward

I think this frame of Gears-ness of models is super powerful for cutting through confusion. It helps our understanding of the world become immune to social foolishness and demands a kind of rigor in our thinking that I see as unifying lots of ideas in the Sequences.

I'll want to build on this frame as I highlight other ideas. In particular, I haven't spoken to how we know Gears are worth looking for. So while I view this as a powerful weapon to use in our war against sanity drought, I think it's also important to examine the smithy in which it was forged. I suspect that won't be my very next post, but it's one I have coming up.