TSR #3 Entrainment: Discussion
Epistemic effort: 3 hours of writing, 15 minutes thinking about the outline at the end of the post.
Sebastian Marshall (who used to post to LessWrong as lionhearted) writes The Strategic Review (TSR), which has recently migrated from a newsletter to a Medium blog. I don’t know how many LW people already read it, but I find it to be consistently high quality. Most of TSR is focused on how to be more effective, though I feel it grossly undersells Sebastian to just say he writes about productivity. One thing I enjoy about his writing is that it’s very much anti-insight-porn. It’s hard to read TSR without thinking about how I can better implement the ideas into how I operate.
Anywho, I’m going to be writing a weekly post intended to prompt thinking around whatever the topic of the latest edition of the TSR happens to be. The format will be something like:
Super brief summary of the article.
Draw some connections between the content of the article and ideas floating around in the rational-sphere.
Pose some questions that I think would greatly benefit one to have an answer to.
My attempt at answering some of those questions.
I’m going to test run this for at least the rest of the series (7 weeks?), and then reassess if I want to continue. Comment or otherwise let me know if you find these useful.
*also, yes, this post is about the 3rd article in this TSR series. I only had this idea partway through last week*
Summary of Background Ops #3: Entrainment
Training is freedom
Key parts of any training regime:
Tips and ideas
Build the general skill of taking an unexamined behaviour and entraining the best practices.
Begin with easy targets, so your energy and focus can go into properly following a training regime.
A key tipping point is when you become excited about and look forward to opportunities to train.
It seems useful and true to think of rationality as a collection of habits of thought. Conor Moreton described it as 1000 heads in a row. If that’s how things are, then habit cultivation is at the core of becoming a stronger rationalist.
I really like the graph that Raemon used in a recent post, describing two axes along which to think about what it takes to acquire skills.
As Raemon mentioned, LW has historically been most concerned with epiphanies, though it seems like over the past few years Tortoise skills have been on the rise, as reflected by the formalization of tools like TAPs.
Wizardry is the most eye-catching to me, mostly because I’ve always wanted to be a wizard. Specifically, I’m really interested in a particular subset of wizardry, which I’ll try to point out by using TAPs as a lens.
If we think of a skill as a TAP, then a Tortoise skill is one whose Action is easy and simple to perform. The key difficulty lies in the noticing, and in having a system in place to tell how often you have or haven’t been noticing. Wizardry, on the other hand, is when the Action itself is difficult and complex to perform.
The subset of wizardry I’d like to call out is the wizardry whose underlying action is one that can be practiced in isolation, outside of the performance context it is normally meant to apply to.
This sort of wizardry is easy to spot in the martial arts. You take a particular movement, break it down, go through it slowly, and concentrate. This is where the principles of entrainment come in: you entrain the movement in a calm environment so that you will later be able to execute it in the middle of a fight.
I get that for a lot of people, just gaining Tortoise skills would be a huge win and more than enough to fix a lot of things in their lives. But let’s say you magically had an extra 30 min a day to work on entraining your rationalist wizardry. What might that look like? What would be the particular skill you would practice in isolation? How would you use the general principles of entrainment that Sebastian invokes?
I’ve got a skeleton structure worked out for one particular skill, though it’s far from complete. I’ll hopefully be testing it out over winter break.
Skill: Double cruxing yourself (being able to accurately answer “Why do you believe what you believe?”)
Identify the correct fundamentals
I think I’d want to arrive at these iteratively. At first I would “just freestyle” and explore a belief while recording my thought process. After I felt like I was done, I would look over the thought process I just went through and try to make explicit the implicit process I was using. Next time, I would explore a belief by following the explicit process. After that session, I’d examine what seemed to be the parts of that process that were the most useful, and try to streamline it.
Iterate until satisfied?
What a given review session would look like, once I had the correct fundamentals outlined.
Pick a belief, then go through the steps slowly and intentionally. My focus would be on making sure that my mind goes from A to B to C with as few interruptions in between as possible. I’d probably write out my stream of consciousness in a Word doc to have a record.
Any time I notice that my mind either got stuck, or started going down an unhelpful tangent, backtrack to the last step I was on and focus my attention on how I went wrong. Then I’d keep on mindfully going through the steps.
Every so often (set a timer? have a trigger?) I’d ask myself, “Is that it? Are you really done?” If 5 seconds of introspection raised no red flags, I’d put my work aside for 5 minutes, or until the next session.
Later, I’d come back to my written out thought process, and more thoroughly question, “Is this belief really fully explored?” Here I’d go through a checklist:
Can you clearly identify what the key supporting evidence for this belief is?
Can you imagine what it would take to change your mind about this?
Can you imagine seeing the above evidence, and doing so doesn’t produce any feelings of fear?
Are there any small voices trying to whisper to you that something isn’t quite right?
This would hopefully train my gut sense of whether or not the job is complete.
Undecided issues and doubts
How to pick the beliefs to explore? Random selection from a “jar of beliefs”?
Is typing too slow to track a thought process like this? (Oh no, I guess someone will have to test it.)
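The “jar of beliefs” idea is trivial to mechanize. Here’s a minimal sketch in Python; the example beliefs and the function name are my own hypothetical placeholders, and you’d swap in your actual list (or load one belief per line from a text file):

```python
import random

# Hypothetical jar of beliefs -- replace these with your own.
beliefs = [
    "Spaced repetition is the best way for me to retain facts",
    "I do my best deep work in the morning",
    "Typing is fast enough to track a stream of consciousness",
]

def draw_belief(jar, rng=random):
    """Pick one belief at random to explore in this session."""
    return rng.choice(jar)

if __name__ == "__main__":
    print("Belief to double-crux today:", draw_belief(beliefs))
```

Random selection avoids the temptation to always pick comfortable, low-stakes beliefs, at the cost of sometimes drawing one that isn’t ripe for examination; a weighted or curated jar would be an easy variation.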
Discussion: Does the above training plan seem to have any merit? What rationalist skills do you think are conducive to this sort of entrainment? Do you even think that it makes sense to apply principles of entrainment to rationalist skills?