Levers, Emotions, and Lazy Evaluators: Post-CFAR 2

[This is a trio of topics following from the first post, all of which use the idea of ontologies in the mental sense as a jumping-off point. I examine why naming concepts can be helpful, how to listen to your emotions, and humans as lazy evaluators. I think this post may also be of interest to people here. Posts 3 and 4 are less so, so I’ll probably skip those, unless someone expresses interest. Lastly, the below expressed views are my own and don’t reflect CFAR’s in any way.]

Levers:

When I was at the CFAR workshop, someone mentioned that something like 90% of the curriculum was just making up fancy new names for things they already sort of did. This got some laughs, but I think it’s worth exploring why even just naming things can be powerful.

Our minds do lots of things; they carry many thoughts, and we can recall many memories. Some of these phenomena may be more helpful for our goals, and we may want to name them.

When we name a phenomenon, like Focusing, we’re essentially drawing a boundary around the thing, calling attention to it. We’ve made it conceptually discrete. This transformation, in turn, allows us to more concretely identify which things among the sea of our mental activity correspond to Focusing.

Focusing can then become a concept that floats in our understanding of things our minds can do. We’ve taken a mental action and packaged it into a “thing”. This can be especially helpful if we’ve identified a phenomenon that consists of several steps which usually aren’t found together.

By drawing a name around certain patterns, we can hopefully help others recognize them and perhaps do the same for other mental motions, which seems to be one more way that we find new rationality techniques.

This then means that we’ve created a new action that is explicitly available to our ontology. This notion of “actions I can take” is what I think forms the idea of levers in our mind. When CFAR teaches a rationality technique, the technique itself seems to be pointing at a sequence of things that happen in our brain. Last post, I mentioned that I think CFAR techniques upgrade people’s mindsets by changing their sense of what is possible.

I think that levers are a core part of this because they give us the feeling of, “Oh wow! That thing I sometimes do has a name! Now I can refer to it and think about it in a much nicer way. I can call it ‘Focusing’, rather than ‘that thing I sometimes do when I try to figure out why I’m feeling sad that involves looking into myself’.”

For example, once you understand that a large part of habituation is simply “if-then” loops (a la TAPs, or Trigger Action Plans), you’ve now not only understood what it means to learn something as a habit, but you’ve internalized the very concept of habituation itself. You’ve gone one meta-level up, and you can now reason about this abstract mental process in a far more explicit way.
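A TAP really is as literal as an “if-then” rule. Here is a minimal sketch in Python; the specific triggers and actions are invented for illustration, not CFAR material:

```python
# A habit as a literal if-then loop: when the trigger fires, the trained
# action runs with no deliberation in between.
taps = {
    "sit down at my desk": "write one sentence",
    "phone buzzes": "take one breath before checking it",
}

def react(situation):
    # Habituation as lookup: trigger in, action out.
    return taps.get(situation, "no trained response")

print(react("sit down at my desk"))  # -> write one sentence
print(react("see a doughnut"))       # -> no trained response
```

The point of the sketch is the shape of the thing: once you see habits as a trigger-to-action table, “installing a habit” just means adding an entry.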

Names have power in the same way that abstraction barriers have power in a programming language: they change how you think about the phenomenon itself, and this in turn can affect your behavior.
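As a rough analogy (the function and data here are invented), an abstraction barrier in code lets callers reason in terms of a named operation instead of re-deriving its internals every time, which is exactly what naming a mental motion buys you:

```python
# Once the messy steps are wrapped behind a name, callers think in terms
# of the name. The internals can change without touching the callers.
def introspect(feelings):
    """Named barrier around 'that thing I sometimes do when I try to
    figure out why I'm feeling sad that involves looking into myself'."""
    return [f for f in feelings if f.startswith("sad")]

# The caller manipulates the concept, not the steps behind it.
print(introspect(["sad about deadline", "excited", "sad about weather"]))
# -> ['sad about deadline', 'sad about weather']
```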

Emotions:

CFAR teaches a class called “Understanding Shoulds”, which is about seeing your “shoulds”, the parts of yourself that feel like obligations, as data about things you might care about. This is a little different from Nate Soares’s Replacing Guilt series, which tries to move past guilt-based motivation.

In further conversations with staff, I’ve seen the even deeper view that all emotions should be considered information.

The basic premise seems to be based on the understanding that different parts of us may need different things to function. Our conscious understanding of our own needs may sometimes be limited. Thus, our implicit emotions (and other S1 processes) can serve as a way to inform ourselves about what we’re missing.

In this way, all emotions seem to be channels where information can be passed on from implicit parts of you to the forefront of “meta-you”. This idea of “emotions as a data trove” is yet another ontology that produces different rationality techniques, as it’s operating on, once again, a mental model that is built out of a different type of abstraction.

Many of the skills based on this ontology focus on communication between different pieces of the self.

I’m very sympathetic to this viewpoint, as it forms the basis of the Internal Double Crux (IDC) technique, one of my favorite CFAR skills. In short, IDC assumes that akrasia-esque problems are caused by a disagreement between different parts of you, some of which might be in the implicit parts of your brain.

By “disagreement”, I mean that some part of you endorses an action for some well-meaning reasons, but some other part of you is against the action and also has justifications. To resolve the problem, IDC has us “dialogue” between the conflicting parts of ourselves, treating both sides as valid. If done right, without “rigging” the dialogue to bias one side, IDC can be a powerful way to source internal motivation for our tasks.

While I do seem to do some communication with my emotions, I haven’t fully integrated them as internal advisors in the IFS sense. I’m not ready to adopt a worldview that might potentially hand over executive control to all the parts of me. Meta-me still deems some of my implicit desires “foolish”, like the part of me that craves video games, for example. In order to avoid slippery slopes, I have a blanket precommitment on certain things in life.

For the meantime, I’m fine sticking with these precommitments. The modern world is filled with superstimuli, from milkshakes to insight porn (and the normal kind) to mobile games, that can hijack our well-meaning reward systems.

Lastly, I believe that without certain mental prerequisites, some ontologies can be actively harmful. Nate’s Replacing Guilt series can leave people without additional motivation for their actions; guilt can be a useful motivator. Similarly, nihilism is another example of an ontology that can be crippling unless paired with ideas like humanism.

Lazy Evaluators:

In In Defense of the Obvious, I gave a practical argument as to why obvious advice was very good. I brought this point up several times during the workshop, and people seemed to like it.

While that essay focused on listening to obvious advice, there appears to be a similar thing where merely asking someone, “Did you do all the obvious things?” will often uncover helpful solutions they have yet to try.

My current hypothesis for this (apart from “humans are programs that wrote themselves on computers made of meat”, which is a great workshop quote) is that people tend to be lazy evaluators. In programming, lazy evaluation is a strategy that delays computing the value of an expression until the answer is actually needed.
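A minimal sketch of lazy evaluation in Python, using a “thunk” (a zero-argument function that wraps a delayed computation); the names here are illustrative, not from any library:

```python
# Lazy evaluation: wrap the computation so it runs only when forced,
# and cache the result so it runs at most once.
def make_thunk(computation):
    """Delay `computation` until its value is actually demanded."""
    cache = {}

    def force():
        if "value" not in cache:
            cache["value"] = computation()  # evaluated at the last minute
        return cache["value"]

    return force

calls = []

def expensive_question():
    calls.append(1)  # record that the work actually happened
    return "multiple ways I could accomplish this"

answer = make_thunk(expensive_question)
assert calls == []   # nothing computed yet -- the question went unasked
print(answer())      # forcing it finally evaluates the expression
answer()
assert calls == [1]  # evaluated exactly once, even when forced again
```

The analogy to the essay’s point: if `force()` is never called, the question is simply never evaluated, and life putters on regardless.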

It seems like something similar happens in people’s heads, where we simply don’t ask ourselves questions like “What are multiple ways I could accomplish this?” or “Do I actually want to do this thing?” until we need to… except that most of the time, we never need to. Life putters on, whether or not we’re winning at it.

I think this is part of what makes “pair debugging”, a CFAR activity where a group of people try to help one person with their “bugs”, effective. When we have someone else taking an outside view and asking us these questions, it may even be the first time we see these questions ourselves.

Therefore, it looks like a helpful skill is to constantly ask ourselves questions and cultivate a sense of curiosity about how things are. Anna Salamon refers to this skill as “boggling”. I think boggling can help with both counteracting lazy evaluation and actually doing obvious actions.

Looking at why obvious advice is obvious, asking “What the heck does ‘obvious’ even mean?”, can help break the immediate dismissive veneer our brain puts on obvious information.

EX: “If I want to learn more about coding, it probably makes sense to ask some coder friends what good resources are.”

“Nah, that’s so obvious; I should instead just stick to this abstruse book that basically no one’s heard of—wait, I just rejected something that felt obvious.”

“Huh… I wonder why that thought felt obvious… what does it even mean for something to be dubbed ‘obvious’?”

“Well… obvious thoughts seem to have a generally ‘self-evident’ tag on them. If they aren’t outright tautological or circularly defined, then there’s a sense where the obvious things seem to be the shortest paths to the goal. Like, I could fold my clothes or I could build a Rube Goldberg machine to fold my clothes. But the first option seems so much more ‘obvious’…”

“Aside from that, there also seems to be a sense where if I search my brain for ‘obvious’ things, I’m using a ‘faster’ mode of thinking (a la System 1). Aside from favoring simpler solutions, it also seems to be influenced by social norms (what people ‘typically’ do). And my ‘obvious action generator’ seems to be built off my understanding of the world; like, I’m thinking about things in terms of causal chains that actually exist in the world. As in, when I’m thinking about ‘obvious’ ways to get a job, for instance, I’m thinking about actions I could take in the real world that might plausibly actually get me there…”

“Whoa… that means that obvious advice is so much more than some sort of self-evident tag. There’s a huge amount of information that’s being compressed when I look at it from the surface… ‘Obvious’ really means something like ‘that which my brain quickly dismisses because it is simple, complies with social norms, and/or runs off my internal model of how the universe works.’”

The goal is to reduce the sort of “acclimation” that happens with obvious advice by peering deeper into it. Ideally, if you’re boggling at your own actions, you can force yourself to evaluate earlier. Otherwise, boggling can hopefully at least make obvious advice more appealing.

I’ll end with a quote of mine from the workshop:

“You still yet fail to grasp the weight of the Obvious.”