This reply is extremely late, but I’m annoyed at myself for not having responded at the time, so I’ll do it now in case anyone runs across this at some point in the future:
I guess I feel a little trepidation or edge-of-my-seat feeling when I first run a test (I have surprisingly often ended up crossing my fingers), but I try to write tests in a nice modular way, so that I’m never writing more than ~5-10 lines of code before I can test again. I feel a lot more trepidation when I break this pattern, and have a big chunk of new code that hasn’t been tested at all yet.
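To illustrate the habit I mean, here’s a minimal sketch (the function and values are hypothetical, just for illustration): write a small, self-contained unit, then immediately check it before building anything on top of it.

```python
def parse_version(s):
    """Parse a dotted version string like "1.2.3" into a tuple of ints."""
    return tuple(int(part) for part in s.split("."))

# Test it right away, before writing the next 5-10 lines that depend on it.
assert parse_version("1.2.3") == (1, 2, 3)
assert parse_version("10.0") == (10, 0)
```

The point isn’t the specific function; it’s that each small unit gets checked while it’s still fresh, so there’s never a big untested chunk to feel trepidation about.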
(This is an entirely meta post, which feels like it might not be helpful, but I’ll post it anyway because I’m trying to have weaker babble filters. Feel free to ignore if it’s useless.
I generally enjoy your writing style, and think it’s evocative and clear-in-aggregate. But I find this comment entirely inscrutable. I think there’s something about the interaction between your “gesturing” style and a short comment, that doesn’t work as well for me as a reader compared to that style in a longer piece where I can get into the flow of what you’re saying and figure out your referents inductively.
Either that or you’re referencing things I haven’t read or don’t remember.)
Agreed. I strongly identify with the description of the Red Knight (and somewhat the description of both the other two knights as well), and was therefore Not Interested in Dragon Army. To the point that I posted some strong critiques of the idea, though hopefully in a constructive manner.
I would be interested in a retrospective of how the people who inhabited that role ended up joining Dragon Army. What was the bug there? I thought Duncan was admirably clear about how much it was a non-Red Knight-friendly zone.
I’ve had a similar experience. IDC was by far my favorite technique at CFAR, and I’ve maybe done it twice since then? I think some of it is that the formal technique fell away pretty quickly for me: once I learned to pay attention to other internal voices, I found it pretty natural to do that all the time in the flow of my normal thinking, and setting aside structured time for it felt less necessary. (And when I do set aside larger chunks of time, I usually end up just inhabiting the part that gets less “airtime” for a while, rather than having an explicit dialogue between it and another part.)
As a separate comment since it feels like a pretty different thread:
I do have a vague hypothesis that the very first part of the Looking skill might be a prerequisite for IDC and frankly a lot of CFAR techniques. I don’t think you need a lot of it, but there feels like there’s a first insight that makes further conversations about things downstream from it a million times easier. (For programmers: it feels similar to whatever the insight is that separates people who just can’t get the concept of a function from people who can.) It annoys me a lot that I don’t yet have a consistent tool for helping people quickly get the first skillpoint in Looking, and fixing that is one of my top pedagogical priorities at the moment.
For me at least, the multiple agents framework isn’t the natural, obvious one, but rather a really useful theoretical frame that helps me solve problems that used to seem insoluble. Something like how it becomes much easier to precisely deal with change over time once you learn calculus. (As I use it more, it becomes more intuitive, again like calculus, but it’s still not my default frame.)
Before I did my first CFAR workshop, I had a lot of issues that felt like, “I’m really confused about this thing” or “I’m overwhelmed when I try to think about this thing” or “I know the right thing to do but I mysteriously don’t actually do it”. The CFAR IDC class recommended I model these situations as “I have precise and detailed beliefs and desires, I just happen to have many of them and they sometimes contradict each other.” When I tried out this framework, I found that a lot of previously unsolvable problems became surprisingly easy to solve. For example, “I’m really torn about my job” became, “I am really excited about precisely this aspect of my job, and really unhappy about precisely this aspect”. Then it’s possible to adjudicate between those two perspectives, find compromises or collaborations, etc.
It would be rude of me to assume that your mind works the same as mine, so take the following strictly as a hypothesis. But I would guess that what’s going on for you is that you identify really strongly with one set of preferences/desires/beliefs in your mind, and experience other preferences/desires/beliefs as “pain, pleasure, stupidity, and ignorance”. The experiment this suggests is to try spending a few minutes pretending those things are the “real you”, and the “agenty” part is the annoying external interloper caused by corrupted hardware. If I’m right, the sign would be that you find there is some detail and coherence to the “identity” of those things that feel like flaws, even if you’re not sure it’s an identity you approve of.
Note that I don’t think the multiple agents thing is the one true ontology. I find that as I learn to integrate the parts better, they start feeling more like a single working system. But it’s a really helpful theoretical tool for me.
It definitely doesn’t take years of practicing meditation. Though I’m hesitant to speculate on how long it would take on average, because how prepared for the idea people are varies a lot. The hardest step is the first one: realizing that people are talking about things you don’t yet understand.
Hmm, maybe this is part of the motivation for test-first programming? Since I was originally trained to do test-first, I don’t have this problem, because there are always already tests before I write any code. And I pretty much always know my code works, because it wouldn’t be done if the tests weren’t passing yet.
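A minimal sketch of what test-first looks like in practice (the function name and cases are made up for illustration): the test is written before the implementation exists, and the code counts as “done” only when the test passes.

```python
# Step 1: write the test first. At this point slugify doesn't exist yet,
# so running this would fail -- that failure is the starting point.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Mixed  CASE  ") == "mixed-case"

# Step 2: write just enough code to make the test pass.
def slugify(title):
    return "-".join(title.lower().split())

# Step 3: run the test; passing is the definition of done.
test_slugify()
```

Because the test predates the code, there’s never a moment of “I wrote this, but does it work?”; the answer is checked continuously.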
I’ve stuck to no fiction. (I unthinkingly read a few paragraphs of a short story that came across my Twitter, but otherwise have been consistent.)
It’s mostly been fairly easy. It’s really obvious now that it’s a social pica. I think some of the time I would have spent on it has been going to increased use of LessWrong and Facebook, which are also social picas, but those are both more genuinely social, and harder to lose 8 hours at a time to.
There was at least one night where I was pretty unhappy, and didn’t have access to any actual friends to spend time with, and really wanted to lose myself in a book. I think that ordinarily it probably would have been an OK thing to do as a coping mechanism, but it was useful to observe how badly I needed the coping mechanism. That makes it obvious how much I need the real thing.
There are also a couple things I’m genuinely looking forward to reading when Lent is over. (Murphy’s Quest, for one.) But I’d say those things are probably ~1/4 or less the amount of fiction I would have read this month without Lent.
This has been an especially exciting/productive/momentum-filled month for me. This probably makes it easier than normal to not read fiction. Though maybe there’s some causality the other direction as well?
I’m still not 100% sure I understand Val’s definition of Looking, so I’m not quite willing to commit to the claim that it’s the same as Kaj’s definition. But I do think it’s not that hard to square Kaj’s definition with those quotes, so I’ll try to do that.
Kaj’s definition is:
being able to develop the necessary mental sharpness to notice slightly lower-level processing stages in your cognitive processes, and study the raw concepts which then get turned into higher-level cognitive content, rather than only seeing the high-level cognitive content.
Everything you experience, no matter the object, is experienced via your own cognitive processes. When you’re doing math, or talking to a friend, or examining the world, that is an experience you are having, which is being filtered by your cognitive processes, and therefore to which the structure of your mind is relevant.
As Kaj describes, the parts of your thought processes you normally have conscious access to are a tiny fragment of what is actually happening. When you practice the skill of making more of it conscious and making finer and finer discriminations in mental experience, you find that there is a lot of information that your conscious mind would normally skip over. This includes plenty of information about “the world”.
So consider the last quote as an example:
A while back I was interacting with a friend of a friend (distant from this community). His demeanor was very forceful as he pushed on wanting feedback about how to make himself more productive. I felt funny about the situation and a little disoriented, so I Looked at him. My sense of him as an experiencing being deepened, and I started noticing sensations in my own body/emotion system that were tagged as “resonant” (which is something I’ve picked up mostly from Circling). I also could clearly see the social dynamics he was playing at. When my mind put the pieces together, I got an impression of a person whose social strategies had his inner emotional world hurting a lot but also suppressed below his own conscious awareness. This gave me some things to test out that panned out pretty on-the-nose.
A fictionalized expansion of that, based on my experiences, might be:
“I was running my usual algorithms for helping someone, but I felt funny about the situation and a little disoriented. In the past I would have just kept trying, or maybe just jumped over to a coping mechanism like trying to get out of the situation. However, I had enough mental sharpness to notice the feeling as it arose, so instead I decided to study my experience of the situation. Specifically, I tried to pay attention to how my mind was constructing the concept of “him”. (Though since my moment-to-moment experience doesn’t distinguish between “him” and “my concept of him”, and since I have no unmediated access to the “him” that is presumably a complex quantum wavefunction, that mental motion might better be described as just “paying attention to my experience of him”, or even “paying attention to him”.) When I did that, I was able to see past the slightly dehumanizing category I was subconsciously putting him in, and was able to pick up on the parts of my mind that were interacting with him on a more human, agent-to-agent level. I was able to notice somatic markers in my body that were part of a process of modeling and empathizing with him, from which I derived both more emotional investment in him and also more information about the social dynamics of the situation, as processed by my system 1, which my conscious mind had been mostly ignoring. I was able to use all of this information to put together an intuitively appealing story about why he was acting this way, and what was going on beneath the surface. This hypothesis immediately suggested some experiments to try, which panned out as the hypothesis predicted.”
I’ve been thinking about this since I posted it, and I came to similar conclusions. There is a cluster of behaviors that seems to signal discomfort and therefore low status: fidgeting, jumpiness, talking too fast, certain eye contact patterns (staring at the person and then looking away fast when they turn to look at you), ums and ers.
Some of them feel hard to disentangle. Whether you hold your head high seems mostly about status, but also a little about size. This seems like it might be inherent in the territory: There’s a fine line between credibly signaling that you’re powerful and implicitly threatening to use that power. (Schelling’s The Strategy of Conflict comes to mind here.)
This is really interesting and helpful, thank you.
My original introduction to status was in Impro, which describes it in the context of an improv scene. This means (as I recall) that it mostly focuses on things that are directly observable in body language, like eye contact and taking up space.
Since you suggest we think of most of those things as being about “size” rather than “status”, I’m curious whether you think there are body language indicators of high/low status, or whether that’s inherently contextual and based on actual power.
(One hypothesis: signs of nervousness like talking too quickly or fidgeting might be markers of low status?)
This is excellent, thank you for writing it.
I’m not as advanced as you, but I’ve gotten many of the earlier benefits you describe and think you’ve described them well. That said, I have some confusion about how stuff like this paragraph works:
And because those emotions no longer felt aversive, I didn’t have a reason to invest in not feeling those things—unless I had some other reason than the intrinsic aversiveness of an emotion to do so.
What does it mean to have another reason beyond the intrinsic aversiveness of an emotion? Who’s the “I” who might have such a reason, and what form does such a reason take?
This is a specific question that comes out of a more general confusion, which is: why do descriptions of enlightenment and other advanced states so often seem to claim that enlightenment is almost epiphenomenal? If it were really the case that it didn’t change anything, how would we know people had experienced it?
I am really happy that this post was written, and mildly annoyed by the same things you’re annoyed by.
To explain rather than excuse: there’s a good reason meditation teachers have historically avoided giving clear answers like this. Their goal is not to help you intellectually understand meditation, but rather to help you do meditation.
It’s very easy to mentally slip from “I intellectually understand what sort of thing this is” to “I understand the thing itself”, and so meditation teachers hit this problem with a hammer by just refusing to explain it, so you’re forced to try it instead. This problem is what the “get out of the car” section is talking about.
I have some worry that this post will make it easier for people to make errors like:
“I’m angry, because X is a jerk. Aha, I should try the thing Kaj was talking about, and notice that feeling angry is not helping me with my goal of utterly destroying X.”
(This is exaggerated, but mistakes of this shape are really, really easy to make.)
I think it’s definitely worth the cost, but it is a cost.
Otherwise, what was the point of writing the thing in the first place? Are you trying to communicate, or aren’t you?
What if they don’t have the skill necessary to explain it more clearly, but suspect that some percentage of the reading audience is willing to do enough interpretive work to understand what they’re communicating anyway? In that case, their options are:
1) Don’t post until they’ve developed enough skill to explain themselves to 100% of the audience. (Which in practice means: don’t post ever, since the way you get the skill is by trying.)
2) Post anyway, and hope that some people get it.
#1 communicates with no one, and #2 communicates with at least some people, so if the goal is communication, #2 is the dominant strategy.
(For the record, I do think it’s possible to explain this stuff better than most people do, and that it’s annoying that this is done fairly rarely. But I also notice that it’s a relatively small subset of readers here who are consistently the ones who don’t understand.)
This model assumes that truth and politeness are in a simple tradeoff relationship, and if that were true I would absolutely agree that truth is more important. But I don’t think the territory is that simple.
Our goal is not just to maximize the truth on the website at this current moment, but to optimize the process of discovering and sharing truth. One effect of a comment is to directly share some truth, and so removing comments or banning people does, in the short term, reduce the amount of truth produced. However, another effect of a comment is to incentivize or disincentivize other posters, by creating a welcoming or hostile environment. Since those posters may also produce comments that contain truth, a comment can in this way indirectly encourage or discourage the later production of truth.
The downstream effects of the incentivization/disincentivization of comments containing truth will, I think, often swamp the short-term effect of the specific truth shared in the specific comment. (This has some similarities to the long-termist view in altruism.)
This analysis explains why 4chan is not at the forefront of scientific discovery.
Wordpress seems like a very apt comparison, since LessWrong is also being conceptualized as a bunch of individual blogs with varying moderation policies.
… which once again points to the critical necessity of being able to tell when (and how often, etc.) someone is using such a power; hence the need for a moderation log.
Does Wordpress have such a system?
(To be clear, I support the idea of a moderation log. I’m just curious whether it’s actually as necessary as you claim.)
And that problem is exactly what Scott refers to as Moloch.
This sounds like a really good idea. For my personal tastes, I think this would hit the sweet spot of getting to focus attention on stuff I cared about, without feeling like I was being too mean by deleting well-intentioned comments.
That doesn’t address the fact that Qiaochu has a different instinctive reaction.
The goal of this proposal is to deal with the fact that different people are different.