You might be offended, angry, hurt, or otherwise emotionally compromised. Similarly, you might be sleepy, inebriated, hungry, or otherwise physically compromised. You might be overconfident in your ability to handle a certain type of problem or situation, and hence not bother to think of other ways that might work better.[1]
This is in principle good advice, but I’d like to add a note of caution here—I feel that most “rationalists” actually follow it too closely, and end up losing (and rationalists should win).
Evolutionary processes have produced a brain with different specialized modules for dealing with different situations, and the “purpose” of these modules is more in line with instrumental rationality than with epistemic rationality. Consequently, a good epistemic rationalist often must suppress the contribution of many of these modules (overconfidence, emotion, etc.).
The instrumental rationalist, on the other hand, had better pay close attention to emotions and overconfidence. Don’t forget Egan’s law: given human cognitive limitations, someone who applies sound epistemic rationality to full effect is not going to behave too differently from the highly successful person next to them who does not care about epistemic rationality at all. In other words, subtracting within reasonable bounds the effects of luck and privilege, anyone you’d gladly trade most aspects of your life with is a superior instrumental rationalist, regardless of intelligence or learning.
Although I do think that, overall, instrumental rationality improves when epistemic rationality improves, some of the tensions between them have the unfortunate result of making strong epistemic rationalists err in systematic ways when it comes to instrumental rationality.
What does this mean practically? It means you have emotions for a reason. The parts of your brain which generate emotion are the ones which are calibrated for social behavior. If you feel yourself getting angry, it is likely that the behaviors which anger produces (confronting the aggressor) will in fact produce a positive result. Similarly, if you are sad, sad behaviors (crying, seeking support or temporarily withdrawing from the social scene, depending on the situation) will likely produce a positive result.
The same goes for cognitive biases. The fundamental attribution error produces positive results because it’s better to assume that actions are innate to people rather than the result of random circumstances, since the latter hold no predictive value. The action resulting from overconfidence bias (risk-taking) produces positive results as well. I can’t even think of any biases that don’t follow this pattern.
Behaviorally speaking, an instrumental rationalist should not correct a bias unless they have understood the reason the bias evolved and have adjusted the other variables accordingly. For example, if you are epistemically well calibrated in confidence, take care not to let that translate into instrumental underconfidence. The notion that the portions of your psyche which handle logic, reason, epistemic rationality, and so on will understand enough, and react quickly enough, to match the performance of systems which are specialized for this purpose is a bit misguided, and it is extremely important to let the appropriate systems guide behavior when it comes to instrumental rationality.
Caveat: Of course, your brain was “designed” to produce viable offspring in the ancestral environment. 1) The environment has changed, and 2) your goal isn’t necessarily to have offspring. But still, there is a good deal of overlap between the two utility functions.
That sounds like an overwhelming exception to me.

Yes, it is an overwhelming exception. In the real world these differences always exist, and you’ll have to use your intuition to correct for them.
I’m trying to construct the least convenient possible world, where two randomly selected people are pulled from a crowd and given the same luckless task, and one does better. Existing differences in brain biology, priors, and previously gained knowledge still apply, while differences in resources and non-brain-related biology should be factored out. In these unnatural conditions, when it comes to that specific task, the one who did better is by definition a superior instrumental rationalist.
Agreed, though a world in which “people who chew gum get more throat abscesses” reliably implied that refraining from chewing gum is the right way to prevent throat abscesses would actually be a more convenient world than ours.
given human cognitive limitations, someone who applies sound epistemic rationality to full effect is not going to behave too differently from the highly successful person next to them who does not care about epistemic rationality at all
If it increases the probability of winning like that highly successful irrational person does, it’s still worth doing. I mean, if an irrational person has a 20% chance of becoming highly successful, and rationality training could increase that to 40%, then I would prefer to take the rationality training, even if the rewards for the “winners” in both categories are the same.
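To put numbers on that, here is a minimal expected-value sketch. The 20%/40% probabilities and the equal-reward assumption are just the illustrative figures from above; the concrete payoff value is arbitrary:

```python
# Minimal expected-value sketch: training changes only the probability
# of success, not the size of the reward.
p_untrained = 0.20  # chance of becoming highly successful without training
p_trained = 0.40    # assumed chance with rationality training
reward = 100.0      # payoff for "winning", identical in both cases (arbitrary units)

ev_untrained = p_untrained * reward  # 20.0
ev_trained = p_trained * reward      # 40.0

# The training is worth taking whenever its cost is below the EV gain.
print(ev_trained - ev_untrained)  # 20.0 -> break-even cost of the training
```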
But yes, we should remember that we run on human hardware, so that we don’t consistently overestimate the benefits of learning some rationality. Ideas which would work great for a self-improving AI may have a less impressive impact on the sapient apes.
If it increases the probability of winning like that highly successful irrational person does, it’s still worth doing. I mean, if an irrational person has a 20% chance of becoming highly successful, and rationality training could increase that to 40%, then I would prefer to take the rationality training, even if the rewards for the “winners” in both categories are the same.
The idea here is that even if “rationality training” (or even general intelligence) gives people an overall advantage, systematic disadvantages may arise in some areas when a person repeatedly uses reason to override emotion and instinct.
Relying on reason and suppressing emotion and instinct is a cultural value, especially for people who call themselves “rationalists”. We need to be aware of the pitfalls of overdoing it, because, instrumentally speaking, instinct and emotion do play a part in “computing” rational behavior.