Pattern-botching: when you forget you understand

It’s all too easy to let a false understanding of something replace your actual understanding. Sometimes this is an oversimplification, but it can also take the form of an overcomplication. I have an illuminating story:

Years ago, when I was young and foolish, I found myself in a particular romantic relationship that would later end for epistemic reasons, when I was slightly less young and slightly less foolish. Anyway, this particular girlfriend of mine was very into healthy eating: raw, organic, home-cooked, etc. During her visits my diet would change substantially for a few days. At one point, we got in a tiny fight about something, and in a not-actually-desperate attempt to placate her, I semi-jokingly offered: “I’ll go vegetarian!”

“I don’t care,” she said with a sneer.

…and she didn’t. She wasn’t a vegetarian. Duhhh… I knew that. We’d made some ground beef together the day before.

So what was I thinking? Why did I say “I’ll go vegetarian” as an attempt to appeal to her values?

(I’ll invite you to take a moment to come up with your own model of why that happened. You don’t have to, but it can be helpful for evading the hindsight-bias sense that the answer was obvious.)

(Got one?)

Here’s my take: I pattern-matched a bunch of actual preferences she had with a general “healthy-eating” cluster, and then I went and pulled out something random that felt vaguely associated. It’s telling, I think, that I don’t even explicitly believe that vegetarianism is healthy. But to my pattern-matcher, they go together nicely.

I’m going to call this pattern-botching.† Pattern-botching is when you pattern-match a thing “X” as following a certain model, but implicit queries to that model return properties that aren’t true of X. What makes this different from just having false beliefs is that you know the truth, but you’re forgetting to use it because there’s a botched model that is easier to use.

†Maybe this already has a name, but I’ve read a lot of stuff and it feels like a distinct concept to me.
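
If it helps to see that structure spelled out, here’s a toy sketch in code (the names and values are mine, purely illustrative, not anything from real psychology): the botch is answering a query from a cluster’s default properties even when you already know a specific fact that contradicts them.

    # A toy model of pattern-botching (all names here are illustrative).
    # The "botched model" is a cluster of default properties; the botch is
    # reading a property straight off the cluster even when a specific
    # fact you already know contradicts it.

    CLUSTER_DEFAULTS = {
        "healthy-eater": {"vegetarian": True, "eats_organic": True},
    }

    # Specific things I actually know about this specific person.
    KNOWN_FACTS = {
        ("girlfriend", "vegetarian"): False,  # we made ground beef yesterday
    }

    def botched_query(person, cluster, prop):
        """Answer from the cluster's defaults, ignoring what I know."""
        return CLUSTER_DEFAULTS[cluster][prop]

    def careful_query(person, cluster, prop):
        """Check specific knowledge first; fall back to the cluster only
        when there's no specific information."""
        return KNOWN_FACTS.get((person, prop), CLUSTER_DEFAULTS[cluster][prop])

    print(botched_query("girlfriend", "healthy-eater", "vegetarian"))  # True: the botch
    print(careful_query("girlfriend", "healthy-eater", "vegetarian"))  # False: what I knew

The point isn’t the code, it’s the shape: the truth is sitting right there in memory, but the cluster’s defaults are cheaper to query, so they win unless you deliberately check.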

Examples of pattern-botching

So, that’s pattern-botching, in a nutshell. Now, examples! We’ll start with some simple ones.

Calmness and pretending to be a zen master

In my Againstness Training video, past!me tries a bunch of things to calm down. In the pursuit of “calm”, I tried things like...

  • dissociating

  • trying to imitate a zen master

  • speaking really quietly and timidly

None of these are the desired state. The desired state is present and authentic, and from it I can project well while speaking assertively.

But that would require actually being in a different state, which to my brain at the time seemed hard. So my brain constructed a pattern around the target state, said “what’s easy and looks vaguely like this?” and generated the list above. Not as a list, of course! That would be too easy. It generated each one individually as a plausible course of action, which I then tried, and which Val then called me out on.

Personality Types

I’m quite gregarious, extraverted, and generally unflappable by noise and social situations. Many people I know describe themselves as HSPs (Highly Sensitive Persons), or as very introverted, or as “not having a lot of spoons”. These concepts are related (or perhaps not related, but at least correlated), but they’re not the same. And even if these three terms did all mean the same thing, individual people would still vary in their needs and preferences.

Just this past week, I found myself talking with an HSP friend L, and noting that I didn’t really know what her needs were. Like I knew that she was easily startled by loud noises and often found them painful, and that she found motion in her periphery distracting. But beyond that… yeah. So I told her this, in the context of a more general conversation about her HSPness, and I said that I’d like to learn more about her needs.

L responded positively, and suggested we talk about it at some point. I said, “Sure,” then added, “though it would be helpful for me to know just this one thing: how would you feel about me asking you about a specific need in the middle of an interaction we’re having?”

“I would love that!” she said.

“Great! Then I suspect our future interactions will go more smoothly,” I responded. I realized that what had happened was that I had conflated L’s HSPness with… something else. I’m not exactly sure what, but a preference for indirect communication, perhaps? I have another friend, who is also sometimes short on spoons, whom I model as finding that kind of question stressful because it would kind of put them on the spot.

I’ve only just recently been realizing this, so I suspect that I’m still doing a ton of this pattern-botching with people in ways I haven’t specifically noticed.

Of course, having clusters makes it easier to have heuristics about what people will do without knowing them very well. A loose cluster is better than nothing. I think the issue is when we do know the person well, but we’re still relying on this cluster-based model of them. It’s telling that I was not actually surprised when L said that she would like it if I asked about her needs. On some level I kind of already knew it. But my botched pattern was making me doubt what I knew.

False aversions

CFAR teaches a technique called “Aversion Factoring”, in which you try to break down the reasons why you don’t do something, and then consider each reason. In some cases, the reasons are sound, so you decide not to try to force yourself to do the thing. If not, then you want to make the reasons go away. There are three types of reasons, each with a different approach.

One is when you have a legitimate issue, and you have to redesign your plan to avert that issue. The second is when the thing you’re averse to is real but isn’t actually bad, and you can kind of ignore it, or maybe use exposure therapy to get yourself more comfortable with it. The third is… when the outcome would be an issue, but it’s not actually a necessary outcome of the thing. As in, it’s a fear that’s vaguely associated with the thing at hand, but the thing you’re afraid of isn’t real.

All of these share a structural similarity with pattern-botching, but the third one in particular is a great example. The aversion is generated from a property that the thing you’re averse to doesn’t actually have. Unlike a miscalibrated aversion (#2 above), it’s usually pretty obvious under careful inspection that the fear itself is based on a botched model of the thing you’re averse to.

Taking the training wheels off of your model

One other place this structure shows up is in the difference between what something looks like when you’re learning it versus what it looks like once you’ve learned it. Many people learn to ride a bike while actually riding a four-wheeled vehicle: training wheels. I don’t think anyone makes the mistake of thinking that the ultimate bike will have training wheels, but in other contexts it’s much less obvious.

The remaining three examples look at how pattern-botching shows up in learning contexts, where people implicitly forget that they’re only partway there.

Rationality as a way of thinking

CFAR runs 4-day rationality workshops, which are currently split evenly between specific techniques and how to approach things in general. Let’s consider what kinds of behaviours spring to mind when someone encounters a problem and asks themselves: “What would be a rational approach to this problem?”

  • someone with a really naïve model, who hasn’t actually learned much about applied rationality, might pattern-match “rational” to “hyper-logical”, and think “What Would Spock Do?”

  • someone who is somewhat familiar with CFAR and its instructors, but who still doesn’t know any rationality techniques, might complete the pattern with something that they think of as being archetypal of CFAR-folk: “What Would Anna Salamon Do?”

  • CFAR alumni, especially new ones, might pattern-match “rational” as “using these rationality techniques” and conclude that they need to “goal factor” or “use trigger-action plans”

  • someone who gets rationality would simply apply that particular structure of thinking to their problem

In the case of a bike, we see hundreds of people biking around without training wheels, and so that becomes the obvious example from which we generalize the pattern of “bike”. In other learning contexts, though, most people (including, sometimes, the people at the leading edge) are still in the early learning phases, so the training wheels are the rule, not the exception.

So people start thinking that the figurative bikes are supposed to have training wheels.

Incidentally, this can also be the grounds for strawman arguments, where detractors of the thing say, “Look at these bikes [with training wheels]! How are you supposed to get anywhere on them?!”

Effective Altruism

We potentially see a similar effect with topics like Effective Altruism. It’s a movement that is still in its infancy, which means that nobody has it all figured out. So when trying to answer “How do I be an effective altruist?” our pattern-matchers might pull up a bunch of examples of things that EA-identified people have been commonly observed to do.

  • donating 10% of one’s income to a strategically selected charity

  • going to a coding bootcamp and switching careers, in order to Earn to Give

  • starting a new organization to serve an unmet need, or to serve a need more efficiently

  • supporting the Against Malaria Foundation

...and this generated list might be helpful for various things, but be wary of thinking that it represents what Effective Altruism is. It’s possible, indeed almost inevitable, that we don’t actually know what the most effective interventions are yet. We may never actually know, but we can expect that in the future we will generally know more than at present. Which means that the current sampling of good EA behaviours likely doesn’t even cluster around the ultimate set of behaviours we might expect.

Creating a new (platform for) culture

At my intentional community in Waterloo, we’re building a new culture. But that’s actually a by-product: our goal isn’t to build this particular culture but to build a platform on which many cultures can be built. It’s like how, as a company, you don’t just want to be building the product but rather building the company itself, or “the machine that builds the product,” as Foursquare founder Dennis Crowley puts it.

What I started to notice, though, is that we had begun to confuse the particular, transitional culture we have at our house with either (a) the particular target culture that we’re aiming for, or (b) the more abstract range of cultures that will be constructible on our platform.

So from a training-wheels perspective, we might totally eradicate words like “should”. I did this! It was really helpful. But once I had removed the word from my idiolect, it became unhelpful to keep treating it as a touchy word. Then I heard my mentor use it, and I remembered that the point of removing the word wasn’t to never use it again, but to train my brain to think without the particular structure that “should” represented.

This shows up on much larger scales too. Val from CFAR was talking about a particular kind of fierceness, “hellfire”, that he sees as fundamental and important, and he noted that it seemed to be incompatible with the kind of culture my group is building. I initially agreed with him, which was kind of dissonant for my brain, but then I realized that hellfire was only incompatible with our training culture, not the entire set of cultures that could ultimately be built on our platform. That is, engaging with hellfire would potentially interfere with the learning process, but it’s not ultimately proscribed by our culture platform.

Conscious cargo-culting

I think it might be helpful to repeat the definition:

Pattern-botching is when you pattern-match a thing “X” as following a certain model, but implicit queries to that model return properties that aren’t true of X. What makes this different from just having false beliefs is that you know the truth, but you’re forgetting to use it because there’s a botched model that is easier to use.

It’s kind of like if you were running a cargo cult, except you knew how airplanes worked.

(Cross-posted from malcolmocean.com)