Eliezer two-boxes on Newcomb’s problem, and both boxes contain money.
It’s not renting a house vs. owning a house, it’s renting a house vs. renting a bunch of money from the bank.
-- Salman Khan, Khan Academy
If I reply with the naive factual response, “Yes I’m stocking up to prep for the virus”, and leave it at that, there’s a palpable awkwardness because all participants and witnesses in the conversation are at some level aware that this carries the subtext, “Yes I’m smartly taking action to protect myself from a big threat while you are ignorantly exposing yourself to danger”, which means a listener has to wonder if they’re stupid or I’m crazy. Even if the listener is curious and doesn’t take any offense to the conversation, they know that I’ve made a social error in steering the conversation to this awkward state, because it’s mutual knowledge that a savvy conversationalist needs to be aware of the first-order subtext of the naive factual response. The objective social tactlessness of my naive response provides valid evidence to update them toward me being the crazy one.
I think a more tactful response is, “Yeah, I know a lot of people say it’s not a big deal and I hope they’re right, but I think there’s enough risk that extra supplies might come in handy”.
If I first acknowledge and validate or “pace” the background beliefs of mainstream society, then it’s socially graceful to segue to answering with my honest beliefs. Now I’ve portrayed myself as an empathetic character, where any listener can follow my reasoning and see that it’s potentially valid, even if it doesn’t identically match theirs.
If I were to ask you on a date, would your answer be the same as your answer to this question?
How about this: The process of conscious thought has no causal relationship with human actions. It is a self-contained, useless process that reflects on memories and plans for the future. The plans bear no relationship to future actions, but we deceive ourselves about this after the fact. Behavior is an emergent property that cannot be consciously understood.
I read this post on my phone in the subway, and as I walked back to my apartment thinking of something to post, it felt different because I was suspicious that every experience was a mass self-deception.
This content is a real gem. It’s easy and fun to read, yet offers a high density of CFAR’s unique insights, which are worth knowing to improve your life. It also manages to speak to all levels of rationality skill and knowledge.
I would love to see this content featured on CFAR’s site, which is currently kind of a black box in terms of what specific “rationality” they teach. There’s this FAQ answer that lists the topics of the workshop, but I suspect it’s better to let prospective workshop attendees dive in more ahead of time if they’re curious.
I was so inspired by how this handbook makes CFAR look good that we’re now working on the same thing at my startup Relationship Hero, a public-facing handbook that will make our coaching less of a black box. Update: It’s live here.
Reminds me of that old LW April Fools’ joke where they ran the whole site as a Reddit fork.
The new design looks good to me.
I just want to say overall the LessWrong team is killing it! You folks successfully revived a mostly-dormant community and now it’s lively and special, the community whose feedback I seek and respect the most on many topics. Thanks for making the software lively and special too.
Talking to people afterwards, I could tell they thought it was a really fun program and a good addition to their event. They seemed to feel that the content was deep.
Unfortunately, many of them seemed to not grasp the central principles. When I asked them what they thought the main idea was, they said something like: “Your experience is what you make of it, like how you feel in social situations is under your control”—apparently rounding to the nearest cached wisdom (although not a bad one).
I consider that a failure on my part to make the concepts clear and accessible enough. It was unreasonable to think that people would remember the definition of “heuristic”, for example, the way I presented it in passing during the original presentation.
After I did the presentation, I spent a couple more hours tweaking and reorganizing the slides before posting to LW. Now that I’ve improved the slides, and now that I’ve had practice with presenting the material, I’m optimistic about being able to achieve more comprehension the next time I find an audience for this.
And even when the ideas are over some people’s heads, I think that as long as they’re entertained, it’s good to expose them to an impressive display of realist philosophy at an early age.
I feel like now I have a really deep understanding. Basically everything is highly interconnected.
So you simply ask them: “What do you want to do?” And maybe you add, “I’m completely fine with anything!” to ensure you’re really introducing no constraints whatsoever, and you two can do exactly what your friend desires.
This error reminds me of people on a dating app who kill the conversation by texting something like “How’s your week going?”
When texting on a dating app, if you want to keep the conversation flowing nicely instead of getting awkward/strained responses or nothing, I believe the key is to send messages that require only a couple of seconds of low-effort processing before the recipient can start typing their response.
“How’s your week going?” is highly cognitively straining. Responding to it requires remembering and selecting information about one’s week (or one’s feelings about it), then filtering or reframing that selection to sound like an interesting conversationalist rather than an undifferentiated bore, all while worrying that the choice of answer might implicitly reveal one as too eager to brag, complain, or obsess over a particular topic.
You can be “conversationally generous” by intentionally pre-computing some of their cognitive work, i.e. narrowing the search space. For instance:
“I’m gonna try cooking myself 3 eggs/day for lunch so I don’t go crazy on DoorDash. How would you cook them if you were me?”
With a text like this (ideally adjusted to your actual life context), they don’t have to start by narrowing down a huge space of possible responses. They can immediately just ask themselves how they’d go about cooking an egg. And they also have some context of “where the conversation is going”: it’s about your own lifestyle. So it’s not just two people interviewing each other, it has this natural motion/momentum.
Using this computational kindness technique is admittedly kind of contrived on your end, but on their end, it just feels effortless and serendipitous. For naturally contrived nerds like myself looking for a way to convert IQ points into social skills, it’s a good trade.
The computational kindness principle in these conversations works much like the rule of improv that says you’re supposed to introduce specific elements to the scene (“My little brown poodle is digging for his bone”) rather than prompting your scene partners to do the cognitive work (“What’s that over there?”).
Oh and all this is not just a random piece of advice, it’s yet another Specificity Power.
I <3 Specificity
For years, I’ve been aware of myself “activating my specificity powers” multiple times per day, but it’s kind of a lonely power to have. “I’m going to swivel my brain around and ride it in the general→specific direction. Care to join me?” is not something you can say in most group settings. It’s hard to explain to people that I’m not just asking them to be specific right now, in this one context. I wish I could make them see that specificity is just this massively under-appreciated cross-domain power. That’s why I wanted this sequence to exist.
I gratuitously violated a bunch of important LW norms
As Kaj insightfully observed last year, choosing Uber as the original post’s object-level subject made it a political mind-killer.
On top of that, the original post’s only role model of a specificity-empowered rationalist was this repulsive “Liron” character who visibly got off on raising his own status by demolishing people’s claims.
Many commenters took me to task on the two issues above, and also raised other valid concerns, like whether the post implies that specificity is always the right power to activate in every situation.
The voting for this post was probably a rare combination: many upvotes, many downvotes, and presumably many conflicted non-voters who liked the core lesson but didn’t want to upvote the norm violations. I’d love to go back in time and launch this again without the double norm violation self-own.
I’m revising it
Today I rewrote a big chunk of my dialogue with Steve, with the goal of making my character a better role model of a LessWrong-style rationalist, and of explaining things more clearly overall. For example, in the revised version I talk about how asking Steve to clarify his specific point isn’t my sneaky fully-general argument trick to prove that Steve’s wrong and I’m right; rather, it’s taking the first step on the road to Double Crux.
I also changed Steve’s claim to be about a fictional company called Acme, instead of talking about the politically-charged Uber.
I think it’s worth sharing
Since writing this last year, I’ve received a dozen or so messages from people thanking me and remarking that they think about it surprisingly often in their daily lives. I’m proud to help teach the world about specificity on behalf of the LW community that taught it to me, and I’m happy to revise this further to make it something we’re proud of.
A particularly troubling quote from the post:
I think the relation between breadth of intelligence and depth of empathy is a subtle issue which none of us fully understands (yet). It’s possible that with sufficient real-world intelligence tends to come a sense of connectedness with the universe that militates against squashing other sentiences. But I’m not terribly certain of this, any more than I’m terribly certain of its opposite.
The obvious truth is that mind-design space contains every combination of intelligence and empathy.
For a minute there I actually thought you were serious about reading a book
I feel like this conversation is getting unstuck because there are fresh angles and analogies. Great balance of meta-commentary too. Please keep at it.
I also think this is creating an important educational and historical document. It’s a time capsule of this community’s beliefs on AGI trajectory, as well as of the current state-of-the-art in rational discourse.
Eliezer’s post focuses on the distinction between two concepts a person can believe (hereby called “narratives”):
1. “God is real.”
2. “I have something that qualifies as a ‘belief in God’.”
Either narrative will be associated with positive things in the person’s mind. And the person, particularly with narrative #2, often forms a meta-narrative:
3. “My belief in God has positive effects in my life.”
But: Unlike the meta-narrative, our analysis should not proceed as if the relationship between narrative and effects is a simple causal link.
The actual cognitive process that determines the narrative might go something like this:
1. Notice that the desirable aspects of life enjoyed by religious people in the community conflict with undesirable properties (e.g. falsehood, silliness, uselessness) of religious beliefs.
2. Trigger a search: “How do I make the undesirable properties go away while keeping benefits?”
3. Settle on a local optimum way of thinking, according to some evaluation algorithm that is attracted by predictions of certain consequences and repulsed by others.
The search can have a very different character from one individual to another. For example, if the idea of not having a defensible narrative isn’t repulsive, then the person says: “I’m happy in my religious community, so I don’t think too hard about my religion.” The kind of thing they are actually repulsed by would be “for me or my peers to believe that I am not a fully committed member of my in-group”.
Or, if the person is given to conscious reasoning, then it would be extremely repulsive not to have a defensible narrative. What their search evaluation algorithm is actually repulsed by might be something like “the self-doubt that I am not a capable reasoner”, or “the loss of respect and status among other intellectuals”. So the quick fix is: add more layers of justification and arguments surrounding religion, so that both you and your peers can plausibly feel that you are a capable reasoner occupying a justifiable stance on a complex issue.
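The search-and-score process described above can be caricatured as a tiny code sketch. Everything in it (the candidate narratives, the consequence predictions, the weights) is my own made-up illustration, not anything from Eliezer’s post; the point is just to make “attracted by predictions of certain consequences and repulsed by others” concrete:

```python
# Toy model: narrative selection as a search over candidate narratives,
# scored by their predicted consequences. All values are made-up assumptions.

def predicted_consequences(narrative):
    # Hypothetical predictions of what believing each narrative would lead to.
    table = {
        "don't think about it": {"community": 2, "defensible_narrative": 0, "effort_cost": 0},
        "add layers of justification": {"community": 2, "defensible_narrative": 2, "effort_cost": 2},
        "abandon belief": {"community": -3, "defensible_narrative": 1, "effort_cost": 1},
    }
    return table[narrative]

def evaluate(narrative, weights):
    # The "evaluation algorithm": attracted to consequences with positive
    # weight, repulsed by consequences with negative weight.
    return sum(weights[k] * v for k, v in predicted_consequences(narrative).items())

def settle(weights):
    # Settle on the locally optimal way of thinking for this person.
    candidates = ["don't think about it", "add layers of justification", "abandon belief"]
    return max(candidates, key=lambda n: evaluate(n, weights))

# Two people, two weightings, hence two different local optima:
community_minded = {"community": 2.0, "defensible_narrative": 0.2, "effort_cost": -1.0}
conscious_reasoner = {"community": 1.0, "defensible_narrative": 3.0, "effort_cost": -0.5}
```

Under these toy weights, `settle(community_minded)` lands on “don’t think about it”, while `settle(conscious_reasoner)` lands on “add layers of justification”: same search procedure, very different character.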
So regarding Eliezer’s post, it’s not surprising that someone with narrative #2 can get a “placebo” version of the positive effects that come with narrative #1. The narrative doesn’t independently cause the positive effects; the narrative is shaped by a cognitive algorithm that predicts the benefits of believing it.
Writing about your personal experience made the post more clear, meaningful and engaging.
I listened to the audiobook and fully endorse this review. It’s much better than what I would have written.
I really love Pinker’s other books, so I was looking forward to this one, but unfortunately the fun had already been spoiled for me by reading the LW Sequences (literally all of it). The Sequences are a superset of this book: longer and quirkier, deeper and more insightful. But I agree that Pinker’s book is a good fit for someone who wants a more compact and mainstream-sounding intro to rationality.
Nice, thanks for sharing.
I was just thinking how “rationalist” has become a label with increasingly positive associations, one we can be proud to self-identify with, and it currently serves as a useful positive signal when others apply it to themselves (for me at least).
The LW community and related rationalist clusters have been doing influential work in various fields: AI and AI safety, the theory and practice of science, journalism, effective altruism, entrepreneurship, investing, pedagogical fanfiction, and more. As a result, the term “rationalist” has been building momentum, and I expect we’ll increasingly see high-performers associating with it in various domains, the way “evidence-based” has gotten to the point of sounding good in every domain.
I’ve also noticed that the longer label “aspiring rationalist” is less common these days, and I’m glad we’re settling on just “rationalist”, since rationalists are many things besides epistemically humble and goal-oriented anyway.
FWIW I’ve never known a character of high integrity who I could imagine writing the phrase “your career in EA would be over with a few DMs”.