I’d say the benefits have to outweigh the costs. If you succeed in achieving your goal despite holding a significant number of false beliefs relevant to this goal, it means you got lucky: Your success wasn’t caused by your decisions, but by circumstances that just happened to be right.
That the human brain is wired in such a way that self-deception gives us an advantage in some situations may tip the balance a little bit, but it doesn’t change the fact that luck only favors us a small fraction of the time, by definition.
On the contrary: “luck” is a function of confidence in two ways. First, people volunteer more information and assistance to those who are confident about a goal. And second, the confident are more likely to notice useful events and information relative to their goals.
Those two things are why people think the “law of attraction” has some sort of mystical power. It just means they’re confident and looking for their luck.
As the post hinted, self-deception can give you confidence which is useful in almost all real life situations, from soldier to socialite. Far from “tipping the balance a little bit”, a confidence upgrade is likely to improve your life much more than any amount of rationality training (in the current state of our Art).
Too vague. It’s not clear what your argument’s denotation is, but its connotation (that becoming overconfident is vastly better than trying to be rational) is a strong and dubious assertion that needs more support to move outside the realm of punditry.
IMO John_Maxwell_IV described the benefits of confidence quite well. For the other side see my post where I explicitly asked people what benefit they derive from the OB/LW Art of Rationality in its current state. Sorry to say, there weren’t many concrete answers. Comments went mostly along the lines of “well, no tangible benefits for me, but truth-seeking is so wonderful in itself”. If you can provide a more convincing answer, please do.
People who debate this often seem to argue for an all-or-nothing approach. I suspect the answer lies somewhere in the middle: be confident if you’re a salesperson but not if you’re a general, for instance. I might look like a member of the “always-be-confident” side to all you extreme epistemic rationalists, but I’m not.
I think a better conclusion is: be confident if you’re being evaluated by other people, but cautious if you’re being evaluated by reality.
A lot of the confusion here seems to be people with more epistemic than instrumental rationality having difficulty with the idea of deliberately deceiving other people.
But there is another factor: humans penalize themselves for doubt. If they (correctly) estimate their ability as low, they may decide not to try at all, and therefore fail to improve. It’s the doubt I’m interested in, not tricking others.
A valid point! However, I think it is the decision to not try that should be counteracted, not the levels of doubt/confidence. That is, cultivate a healthy degree of hubris—figure out what you can probably do, then aim higher, preferably with a plan that allows a safe fallback if you don’t quite make it.
If I could just tell myself to do things and then do them exactly how I told myself, my life would be fucking awesome. Planning isn’t hard. It’s the doing that’s hard.
Someone could (correctly) estimate their ability as low and rationally give it a try anyway, but I think their effort would be significantly lower than someone who knew they could do something.
Edit: I just realized that someone reading the first paragraph might get the idea that I’m morbidly obese or something like that. I don’t have any major problems in my life—just big plans that are mostly unrealized.
You may be correct, and as someone with a persistent procrastination problem I’m in no position to argue with your point.
But still, I am hesitant to accept a blatant hack (actual self-deception) over a more elegant solution (finding a way to expend optimal effort while still having a rational evaluation of the likelihood of success).
For instance, I believe that another LW commenter, pjeby, has written about the issues related to planning vs. doing on his blog.
Yeah, I’ve read some of pjeby’s stuff, and I remember being surprised by how non-epistemically rational his tips were, given that he posts here. (If I had remembered any of the specific tips, I probably would have included them.)
If you change your mind and decide to take the self-deception route, I recommend this essay and subsequent essays as steps to indoctrinate yourself.
I’m not an epistemic rationalist, I’m an instrumental one. (At least, if I understand those terms correctly.)
That is, I’m interested in maps that help me get places, whether they “accurately” reflect the territory or not. Sometimes, having a too-accurate map—or spending time worrying about how accurate the map is—is detrimental to actually accomplishing anything.
As is probably clear, I am an epistemic rationalist in essence, attempting to understand and cultivate instrumental rationality, because epistemic rationality itself forces me to acknowledge that it alone is insufficient, or even detrimental, to accomplishing my goals.
Reading Less Wrong, and observing the conflicts between epistemic and instrumental rationality, has ironically driven home the point that one of the keys to success is carefully managing controlled self-deception.
I’m not sure yet what the consequences of this will be.
It’s not really self-deception—it’s selective attention. If you’re committed to a course of action, information about possible failure modes is only relevant to the extent that it helps you avoid them. And for the most useful results in life, most failures don’t happen so rapidly that you don’t get any warning, or so catastrophic as to be uncorrectable afterwards.
Humans are also biased towards being socially underconfident, because in our historical environment, the consequences of a social gaffe could be significant. In the modern era, though, it’s not that common for a minor error to produce severe consequences—you can always start over someplace else with another group of people. So that’s a very good example of an area where more factual information can lead to enhanced confidence.
A major difference between the confident and unconfident is that the unconfident focus on “hard evidence” in the past, while the confident focus on “possibility evidence” in the future. When an optimist says “I can”, it means, “I am able to develop the capability and will eventually succeed if I persist”. Whereas a pessimist may only feel comfortable saying “I can” if they mean, “I have done it before.”
Neither one of them is being “self-deceptive”—they are simply selecting different facts to attend to (or placing them in different contexts), resulting in different emotional and motivational responses. “I haven’t done this before” may well mean excitement and challenge to the optimist, but self-doubt and fear for the pessimist. (See also fixed vs. growth mindsets.)
Humans are also biased towards being socially underconfident, because in our historic environment, the consequences of a social gaffe could be significant.
Yeah, I’ve read some of pjeby’s stuff, and I remember being surprised by how non-epistemically rational his tips were, given that he posts here.
Nowhere is it guaranteed that, given the cognitive architecture humans have to work with, epistemic rationality is the easiest instrumentally rational manner to achieve a given goal.
But, personally, I’m still holding out for a way to get from the former to the latter without irrevocable compromises.
It’s easier than you think, in one sense. The part of you that worries about that stuff is significantly separate from—and to some extent independent of—the part of you that actually makes you do things. It doesn’t matter whether “you” are only 20% certain about the result as long as you convince the doing part that you’re 100% certain you’re going to be doing it.
Doing that merely requires that you 1) actually communicate with the doing part (often a non-trivial learning process for intellectuals such as ourselves), and 2) actually take the time to do the relevant process(es) each time it’s relevant, rather than skipping it because “you already know”.
Number 2, unfortunately, means that akrasia is quasi-recursive. It’s not enough to have a procedure for overcoming it; you must also overcome your inertia against applying that procedure on a regular basis. (Or at least, I have not yet discovered any second-order techniques to get myself or anyone else to consistently apply the first-order techniques… but hmmm… what if I applied a first-order technique to the second-order domain? Hmm… must conduct experiments…)
It depends on the cost of overconfidence. Nothing ventured, nothing gained. But if the expected cost of venturing wrongly is greater than the expected return, it’s better to be careful what you attempt. If the potential loss is great enough, cautiousness is a virtue. If there’s little investment to lose, cautiousness is a vice.
OK, I see you don’t believe me that you should sometimes accept and sometimes reject epistemic rationality for a price. So here’s a simple mathematical model:
Let’s say agent A accepts the offer of increased epistemic rationality for a price, and agent N does not. Let P be the probability that A will decide differently than N, let F(X) be the expected value of N’s original course of action when agent X takes it, and let S(A) be the expected value of the course of action that A might switch to. If there is a cost C associated with becoming agent A, then agent N should become agent A if and only if
(1 - P) F(A) + P S(A) - C >= F(N)
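A quick numerical sketch of this decision rule (the function name and the numbers are mine, purely for illustration):

```python
def should_become_A(F_A, F_N, S_A, P, C):
    """Return True iff (1 - P)*F_A + P*S_A - C >= F_N.

    F_A: expected value of N's original action, if agent A takes it
    F_N: expected value of N's original action, if agent N takes it
    S_A: expected value of the action A might switch to
    P:   probability that A decides differently than N
    C:   cost of becoming agent A
    """
    return (1 - P) * F_A + P * S_A - C >= F_N

# Example: the extra knowledge rarely changes the decision (P = 0.1),
# the alternative is only slightly better (S_A = 11 vs. F = 10), and
# acquiring the knowledge costs 0.5 -- so staying agent N wins:
# 0.9*10 + 0.1*11 - 0.5 = 9.6 < 10.
print(should_become_A(F_A=10, F_N=10, S_A=11, P=0.1, C=0.5))  # False
```

With a higher P or a much better alternative action, the inequality flips and becoming agent A pays off.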
The left side of the inequality is not bigger than the right side “by definition”; it depends on the circumstances. Eliezer’s dessert-ordering example is a situation where the above inequality does not hold.
If you complain that agent N can’t possibly know all the variables in the inequality, then I agree with you. He will be estimating them somewhat poorly. However, that complaint in no way supports the view that the left side is in fact bigger. Someone once said that “Anything you need to quantify can be measured in some way that is superior to not measuring it at all.” Just as the difficulty of measuring utility is not a valid objection to utilitarianism, the difficulty of guessing what a better-informed self would do is not a valid objection to using this inequality.
that luck only favors us a small fraction of the time, by definition.
Yes, the right side can be bigger, and occasionally it will be. If you get lucky.
If the information that N chooses to remain ignorant of happens to be of little relevance to any decision N will take in the future, and if his self-deception allows him to be more confident than he would have been otherwise, and if this increased confidence grants him a significant advantage, then the right side of the equation will be bigger than the left side.
That’s a funny definition of “luck” you’re using.
It is? Why do you think people are pleasantly surprised when they get lucky, if not because it’s a rare occurrence?
If the information that N chooses to remain ignorant of happens to be of little relevance to any decision N will take in the future, and if his self-deception allows him to be more confident than he would have been otherwise, and if this increased confidence grants him a significant advantage, then the right side of the equation will be bigger than the left side.
Not quite.
The information could be of high relevance, but it could so happen that it won’t cause him to change his mind.
He could be choosing among close alternatives, so switching to a slightly better alternative could be of limited value.
Remember also that failure to search for disconfirming evidence doesn’t necessarily constitute self-deception.
It is? Why do you think people are pleasantly surprised when they get lucky, if not because it’s a rare occurrence?
Sorry, I guess your definition of luck was reasonable. But in this case, it’s not necessarily true that the probability of the right side being greater is lower than 50%. In which case you wouldn’t always have to “get lucky”.
I’ve been thinking about this on and off for an hour, and I’ve come to the conclusion that you’re right.
My mistake came from the fact that the examples I was using to think about this were all ones where one has low certainty about whether the information is irrelevant to one’s decision-making. In such cases, the odds are that remaining ignorant will yield a less-than-maximal chance of success. However, there are situations in which it’s possible to know with great certainty that some piece of information is irrelevant to one’s decision-making, even if you don’t know what the information is. These are mostly situations that are limited in scope and involve a short-term goal, like making a favorable first impression or giving a good speech. For instance, you might suspect that your audience hates your guts, and knowing that this is in fact the case would make you less confident during your speech than merely suspecting it, so you’d be better off waiting until after the speech to find out about this particular fact.
Although, if I were in that situation, and they did hate my guts, I’d rather know about it and find a way to remain confident that doesn’t involve willful ignorance. That said, I have no difficulty imagining a person who is simply incapable of finding such a way.
I wonder, do all situations where instrumental rationality conflicts with epistemic rationality have to do with mental states over which we have no conscious control?
I’ve been thinking about this on and off for an hour, and I’ve come to the conclusion that you’re right.
Wow, this must be like the 3rd time that someone on the internet has said that to me! Thanks!
Although, if I were in that situation, and they did hate my guts, I’d rather know about it and find a way to remain confident that doesn’t involve willful ignorance.
If you think of a way, please tell me about it.
I wonder, do all situations where instrumental rationality conflicts with epistemic rationality have to do with mental states over which we have no conscious control?
Information you have to pay money for doesn’t fit into this category.