Thoughts on the January CFAR workshop

So, the Center for Applied Rationality just ran another workshop, which Anna kindly invited me to. Below I’ve written down some thoughts on it, both to organize those thoughts and because it seems other LWers might want to read them. I’ll also invite other participants to write down their thoughts in the comments. Apologies if what follows isn’t particularly well-organized.

Feelings and other squishy things

The workshop was totally awesome. This is admittedly not strong evidence that it accomplished its goals (cf. Yvain’s comment here), but being around people motivated to improve themselves and the world was totally awesome, and learning with and from them was also totally awesome, and that seems like a good thing.

Also, the venue was fantastic. CFAR instructors reported that this workshop was more awesome than most, and while I don’t want to discount improvements in CFAR’s curriculum and its selection process for participants, I think the venue counted for a lot. It was uniformly beautiful and there were a lot of soft things to sit or nap on, and I think that helped everybody be more comfortable with and relaxed around each other.

Main takeaways

Here are some general insights I took away from the workshop. Some of them I had already been aware of on some abstract intellectual level but hadn’t fully processed and/or gotten drilled into my head and/or seen the implications of.

  1. Epistemic rationality doesn’t have to be about big things like scientific facts or the existence of God, but can be about much smaller things like the details of how your particular mind works. For example, it’s quite valuable to understand what your actual motivations for doing things are.

  2. Introspection is unreliable. Consequently, you don’t have direct access to information like your actual motivations for doing things. However, it’s possible to access this information through less direct means. For example, if you believe that your primary motivation for doing X is that it brings about Y, you can perform a thought experiment: imagine a world in which Y has already been brought about. In that world, would you still feel motivated to do X? If so, then there may be reasons other than Y that you do X.

  3. The mind is embodied. If you consistently model your mind as separate from your body (I have in retrospect been doing this for a long time without explicitly realizing it), you’re probably underestimating the powerful influence of your mind on your body and vice versa. For example, dominance of the sympathetic nervous system (which governs the fight-or-flight response) over the parasympathetic nervous system is unpleasant, unhealthy, and can prevent you from explicitly modeling other people. If you can notice and control it, you’ll probably be happier, and if you get really good, you can develop aikido-related superpowers.

  4. You are a social animal. Just as your mind should be modeled as a part of your body, you should be modeled as a part of human society. For example, if you don’t think you care about social approval, you are probably wrong, and thinking that will cause you to have incorrect beliefs about things like your actual motivations for doing things.

  5. Emotions are data. Your emotional responses to stimuli give you information about what’s going on in your mind that you can use. For example, if you learn that a certain stimulus reliably makes you angry and you don’t want to be angry, you can remove that stimulus from your environment. (This point should be understood in combination with point 2 so that it doesn’t sound trivial: you don’t have direct access to information like what stimuli make you angry.)

  6. Emotions are tools. You can trick your mind into having specific emotions, and you can trick your mind into having specific emotions in response to specific stimuli. This can be very useful; for example, tricking your mind into being more curious is a great way to motivate yourself to find stuff out, and tricking your mind into being happy in response to doing certain things is a great way to condition yourself to do those things. Reward your inner pigeon.

Here are some specific actions I am going to take / have already taken because of what I learned at the workshop.

  1. Write a lot more stuff down. What I can think about in my head is limited by the size of my working memory, but a piece of paper or a WorkFlowy document doesn’t have this limitation.

  2. Start using a better GTD system. I was previously using RTM, but badly. I was using it exclusively from my iPhone, and when you add something to RTM from an iPhone the due date defaults to “today,” whereas when you add something from a browser it defaults to “never.” Since I had never added anything from a browser, I didn’t even realize that “never” was an option. As a result, RTM items that didn’t actually have due dates ended up with due dates attached, and I became reluctant to add items that really didn’t have due dates (e.g. “look at this interesting thing sometime”). That was bad because it meant RTM wasn’t collecting a lot of things, and I stopped trusting my own due dates.

  3. Start using Boomerang to send timed email reminders to future versions of myself. I think this might work better than using, say, calendar alerts because it should help me conceptualize past versions of myself as people I don’t want to break commitments to.

I’m also planning to take various actions that I’m not writing above but instead putting into my GTD system, such as practicing specific rationality techniques (the workshop included many useful worksheets for doing this) and investigating specific topics like speed-reading and meditation.

The arc word (TVTropes warning) of this workshop was “agentiness.” (“Agentiness” is more funtacular than “agency.”) The CFAR curriculum as a whole could be summarized as teaching a collection of techniques to be more agenty.


A distinguishing feature of the people I met at the workshop was the ability to go meta. This is not a skill that was explicitly mentioned or taught (although it was frequently implicit in the kind of jokes people told), but it strikes me as an important foundation for rationality: it seems hard to progress with rationality unless the thought of using your brain to improve how you use your brain, and also to improve how you improve how you use your brain, is both understandable and appealing to you. This probably eliminates most people as candidates for rationality training unless it’s paired with or maybe preceded by meta training, whatever that looks like.

One problem with the workshop was lack of sleep, which seemed to wear out both participants and instructors by the last day (classes started early in the day, and conversations often continued late into the night because they were unusually fun / high-value). Offering everyone modafinil or something at the beginning of future workshops might help with this.


Overall, while it’s too soon to tell how big an impact the workshop will have on my life, I anticipate a big impact, and I strongly recommend that aspiring rationalists attend future workshops.