Thoughts on the January CFAR workshop

So, the Center for Applied Rationality just ran another workshop, which Anna kindly invited me to. Below I’ve written down some thoughts on it, both to organize those thoughts and because it seems other LWers might want to read them. I’ll also invite other participants to write down their thoughts in the comments. Apologies if what follows isn’t particularly well-organized.

Feelings and other squishy things

The workshop was totally awesome. This is admittedly not strong evidence that it accomplished its goals (cf. Yvain’s comment here), but being around people motivated to improve themselves and the world was totally awesome, and learning with and from them was also totally awesome, and that seems like a good thing.

Also, the venue was fantastic. CFAR instructors reported that this workshop was more awesome than most, and while I don’t want to discount improvements in CFAR’s curriculum and its selection process for participants, I think the venue counted for a lot. It was uniformly beautiful and there were a lot of soft things to sit on or take naps on, and I think that helped everybody be more comfortable with and relaxed around each other.

Main takeaways

Here are some general insights I took away from the workshop. Some of them I had already been aware of on some abstract intellectual level but hadn’t fully processed and/or gotten drilled into my head and/or seen the implications of.

  1. Epistemic rationality doesn’t have to be about big things like scientific facts or the existence of God, but can be about much smaller things like the details of how your particular mind works. For example, it’s quite valuable to understand what your actual motivations for doing things are.

  2. Introspection is unreliable. Consequently, you don’t have direct access to information like your actual motivations for doing things. However, it’s possible to access this information through less direct means. For example, if you believe that your primary motivation for doing X is that it brings about Y, you can perform a thought experiment: imagine a world in which Y has already been brought about. In that world, would you still feel motivated to do X? If so, then there may be reasons other than Y that you do X.

  3. The mind is embodied. If you consistently model your mind as separate from your body (I have in retrospect been doing this for a long time without explicitly realizing it), you’re probably underestimating the powerful influence of your mind on your body and vice versa. For example, dominance of the sympathetic nervous system (which governs the fight-or-flight response) over the parasympathetic nervous system is unpleasant, unhealthy, and can prevent you from explicitly modeling other people. If you can notice and control it, you’ll probably be happier, and if you get really good, you can develop aikido-related superpowers.

  4. You are a social animal. Just as your mind should be modeled as a part of your body, you should be modeled as a part of human society. For example, if you don’t think you care about social approval, you are probably wrong, and thinking that will cause you to have incorrect beliefs about things like your actual motivations for doing things.

  5. Emotions are data. Your emotional responses to stimuli give you information about what’s going on in your mind that you can use. For example, if you learn that a certain stimulus reliably makes you angry and you don’t want to be angry, you can remove that stimulus from your environment. (This point should be understood in combination with point 2 so that it doesn’t sound trivial: you don’t have direct access to information like what stimuli make you angry.)

  6. Emotions are tools. You can trick your mind into having specific emotions, and you can trick your mind into having specific emotions in response to specific stimuli. This can be very useful; for example, tricking your mind into being more curious is a great way to motivate yourself to find stuff out, and tricking your mind into being happy in response to doing certain things is a great way to condition yourself to do certain things. Reward your inner pigeon.

Here are some specific actions I am going to take / have already taken because of what I learned at the workshop.

  1. Write a lot more stuff down. What I can think about in my head is limited by the size of my working memory, but a piece of paper or a WorkFlowy document doesn’t have this limitation.

  2. Start using a better GTD system. I was previously using RTM, but badly: I was using it exclusively from my iPhone, and when adding something to RTM from an iPhone the due date defaults to “today.” When adding something from a browser, the due date defaults to “never.” I had never added anything from a browser, so I didn’t even realize that “never” was an option. As a result, RTM items that didn’t actually have due dates ended up with due dates attached, and I was reluctant to add items that genuinely had no due date (e.g. “look at this interesting thing sometime”). That was bad both because it meant RTM wasn’t collecting a lot of things and because I stopped trusting my own due dates.

  3. Start using Boomerang to send timed email reminders to future versions of myself. I think this might work better than using, say, calendar alerts because it should help me conceptualize past versions of myself as people I don’t want to break commitments to.

I’m also planning to take various actions that I’m not writing above but instead putting into my GTD system, such as practicing specific rationality techniques (the workshop included many useful worksheets for doing this) and investigating specific topics like speed-reading and meditation.

The arc word (TVTropes warning) of this workshop was “agentiness.” (“Agentiness” is more funtacular than “agency.”) The CFAR curriculum as a whole could be summarized as teaching a collection of techniques to be more agenty.

Miscellaneous

A distinguishing feature that the people I met at the workshop seemed to have in common was the ability to go meta. This is not a skill that was explicitly mentioned or taught (although it was frequently implicit in the kind of jokes people told), but it strikes me as an important foundation for rationality: it seems hard to make progress with rationality unless the thought of using your brain to improve how you use your brain, and also to improve how you improve how you use your brain, is both understandable and appealing to you. This probably eliminates most people as candidates for rationality training unless it’s paired with, or maybe preceded by, meta training, whatever that looks like.

One problem with the workshop was lack of sleep, which seemed to wear out both participants and instructors by the last day (classes started early in the day and conversations often continued late into the night because they were unusually fun / high-value). Offering everyone modafinil or something at the beginning of future workshops might help with this.

Overall

Overall, while it’s too soon to tell how big an impact the workshop will have on my life, I anticipate a big impact, and I strongly recommend that aspiring rationalists attend future workshops.