Eliezer’s post focuses on the distinction between two concepts a person can believe (hereafter called “narratives”):
“God is real.”
“I have something that qualifies as a ‘belief in God’.”
Either narrative will be associated with positive things in the person’s mind. And the person, particularly with narrative #2, often forms a meta-narrative:
3. “My belief in God has positive effects in my life.”
But: unlike what the meta-narrative suggests, our analysis should not proceed as if the relationship between narrative and effects were a simple causal link.
The actual cognitive process that determines the narrative might go something like this:
Notice that the desirable aspects of life enjoyed by religious people in the community conflict with undesirable properties (e.g. falsehood, silliness, uselessness) of religious beliefs.
Trigger a search: “How do I make the undesirable properties go away while keeping benefits?”
Settle on a local optimum way of thinking, according to some evaluation algorithm that is attracted by predictions of certain consequences and repulsed by others.
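The three steps above can be sketched as a toy local search. This is only an illustration of the structure of the argument, not a claim about actual cognition; every name, candidate narrative, and weight below is hypothetical.

```python
# Toy model (all names and weights hypothetical): narrative selection as a
# search over candidate ways of thinking, scored by predicted consequences.

def evaluation(predicted_consequences, attractions, repulsions):
    """Score a narrative: the evaluator is attracted by some predicted
    consequences and repulsed by others."""
    score = 0.0
    for c in predicted_consequences:
        score += attractions.get(c, 0.0)  # pull toward desired outcomes
        score -= repulsions.get(c, 0.0)   # push away from feared outcomes
    return score

def settle_on_narrative(candidates, predict, attractions, repulsions):
    """Pick the locally optimal narrative under the evaluation function.
    `predict` maps a narrative to the consequences it is expected to have."""
    return max(candidates,
               key=lambda n: evaluation(predict(n), attractions, repulsions))

# Example: the community-oriented believer described in the text.
candidates = ["don't think too hard", "add layers of justification"]
predict = {
    "don't think too hard": ["stay in good standing with in-group"],
    "add layers of justification": ["seen as a capable reasoner"],
}.get
attractions = {"stay in good standing with in-group": 2.0,
               "seen as a capable reasoner": 1.0}
repulsions = {}
chosen = settle_on_narrative(candidates, predict, attractions, repulsions)
print(chosen)  # → "don't think too hard"
```

With different weights (e.g. a heavy repulsion from “not seen as a capable reasoner”), the same search settles on a different local optimum, which is the point of the next two paragraphs.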
The search can have a very different character from one individual to another. For example, if the idea of not having a defensible narrative isn’t repulsive, then the person says: “I’m happy in my religious community, so I don’t think too hard about my religion.” The kind of thing they are actually repulsed by would be “for me or my peers to believe that I am not a fully committed member of my in-group”.
Or, if the person is given to conscious reasoning, then not having a defensible narrative would be extremely repulsive. What their search’s evaluation algorithm is actually repulsed by might be something like “the self-doubt that I am not a capable reasoner”, or “the loss of respect and status among other intellectuals”. So the quick fix is: add more layers of justification and argument surrounding religion, so that both you and your peers can plausibly feel that you are a capable reasoner occupying a justifiable stance on a complex issue.
So regarding Eliezer’s post, it’s not surprising that someone with narrative #2 can get a “placebo” version of the positive effects that come with narrative #1. The narrative doesn’t independently cause the positive effects; the narrative is shaped by a cognitive algorithm that predicts the benefits of believing it.
Also note the historical benefits of religion occupying a ‘separate magisterium’ - scientists could go about the business of science without being hassled by religious conflicts (internal and external), and people in Europe no longer felt so much of a need to kill each other over heresy. (cf. The Baby-Eaters)
The narrative doesn’t independently cause the positive effects; the narrative is shaped by a cognitive algorithm that predicts the benefits of believing it.
Great point! Very insightful of you.
I wonder if there are other examples of this that can be found in human psychology.