First of all, I think that the main mistake the hypothetical apostate is making is a bucket error. In his mind, there is a single variable labeled “Christianity” which contains a boolean value: True or False. This single variable serves as an answer to many distinct questions, such as:
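A toy sketch of that bucket error in Python (the question names below are made up for illustration; they aren't from the original post): one boolean "bucket" forced to answer several distinct questions, versus tracking each question separately.

```python
# The "bucket error": a single boolean answers many distinct questions.
# (Question names are hypothetical, for illustration only.)
christianity = True  # the single bucket

# All of these collapse into the one variable, so they must flip together:
is_there_an_afterlife = christianity
is_my_moral_code_valid = christianity
is_my_community_right_about_facts = christianity

# De-bucketed version: each question gets its own credence, so evidence
# about one belief doesn't silently drag all the others along with it.
beliefs = {
    "afterlife_exists": 0.2,
    "moral_code_valid": 0.9,
    "community_right_about_facts": 0.6,
}
```

The point of the second form is that the answers are free to come apart: updating one credence leaves the others where they were.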
Relevant, from Nate Soares’s Conviction Without Self-deception:

I think it’s important to tease apart feelings from beliefs. If you’re standing on that diving platform, I think it’s important to simultaneously know you have a 17% chance of victory, and fill yourself with the excitement, focus, and confidence of the second swimmer. Become able to tap into conviction, without any need for the self-deception.
Here, Nate is specifically talking about the epistemic/instrumental trade-off presented by “believe in yourself!” and noting that it doesn’t have to be a trade-off. You can have the feeling of “confidence” and have an accurate model.
Likewise with “Should I convert to Christianity?”: there is a difference between using Christianity as a “genie” to make decisions for you, and taking the premises of Christianity to be true. When we look at the problem this way, the next obvious question is, “Why should I use a particular framework, fake or otherwise?”
The biggest worry seems to be that you won’t be able to “take off” a certain framework if you use it too much. That doesn’t seem to be a problem with frameworks in general. I haven’t heard any stories of software/hardware engineers getting “trapped” at their level of abstraction and insisting that their framework is “actually super duper true”.
Though there does seem to be some hazard in situations like converting to a religion. I think that a fruitful area of investigation would be to study what qualities make a framework “sticky”. Here are some conjectures:
Frameworks that come with a social context
Frameworks that insist you profess belief in the framework
Frameworks that are basically disguised fill-in-the-blanks, where your intuitions do all the work
Frameworks that try to answer any possible question (and thus encourage you to use them more and more often)
I have heard plenty of stories (and seen examples) of software engineers who only know how to make software using the particular frameworks and tools they are familiar with, and flounder if e.g. given just a text editor containing an empty document and asked to write code to do some simple task. (Or if asked to do some more complicated task for which the tools they know are ill suited.)
That seems not a million miles from what being unable to “take off” a framework looks like, when translated from “frameworks for thinking” to “frameworks for developing software”.
Your example helped me draw out a useful distinction.
I can imagine the programmers you’re alluding to. In the put-in-front-of-a-blank-doc scenario, I can guess a few thoughts they could be thinking:
1. “I don’t actually have the skills to do task ABC without my pet framework”
2. “I can’t even imagine how one would go about task ABC without my pet framework”
3. “I declare it to be a general impossibility for one to do task ABC without my pet framework”
#1 and #2 seem to be failures of skill or training. #3 is the sneaky one: that is bad epistemic hygiene.
Christians rarely say, “I’m not clever enough to see how morality could work without God”, but instead say, “You can’t have morality without God.”
I’d be very surprised to find examples of software engineers who claimed #3.
I’d guess that the fact that most people know or at least have heard of someone who is way more competent than they are makes it harder for them to claim #3.
In fact, it seems like the Christian lady in EY’s example “got it” by accident:
She doesn’t really believe in God, but says her belief is useful to her.
To me, to be effective and useful, self-deception should occur in System 1 (fast, intuitive), but not in System 2 (slow, analytical). It seems applied rationality helps a lot with questions of motivation, or having useful intuitions to make progress towards a goal. And since System 2 is not affected, “fake beliefs” installed in System 1 are open for re-evaluation.
This seems like the crux of the issue to me.
I agree that this is a useful distinction.
Very much seconded.