The key is simple: the downsides of de-compartmentalization stem from allowing a putative fact to overwrite other knowledge (e.g., letting one’s religious beliefs overwrite knowledge of how to reason successfully in biology, or letting a simplified evolutionary psychology overwrite one’s experience of which dating behaviors work). So the solution is to be really damn careful not to let new claims overwrite old data.
This leaves out the danger that realistic assessments of your ability can be hazardous to your actual performance. People who overestimate their ability accomplish more than people who estimate it realistically, and Richard Wiseman’s luck research shows that believing you’re lucky will actually make it so.
I think instrumental rationalists should perhaps follow a modified Tarski litany, “If I live in a universe where believing X gets me Y, and I wish Y, then I wish to believe X”. ;-)
Actually, more precisely: “If I live in a universe where anticipating X gets me Y, and I wish Y, then I wish to anticipate X, even if X will not really occur”. I can far/symbolically “believe” that life is meaningless and I could be killed at any moment, but if I want to function in life, I’d darn well better not be emotionally anticipating that my life is meaningless now or that I’m actually about to be killed by random chance.
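A rough formalization of the two litanies may help; Bel, Ant, and Des are my own informal shorthand for “believes”, “anticipates”, and “desires” (the notation is mine, not part of the original litany):

$$(\mathrm{Bel}(X) \to Y) \land \mathrm{Des}(Y) \;\Rightarrow\; \mathrm{Des}(\mathrm{Bel}(X))$$

$$(\mathrm{Ant}(X) \to Y) \land \mathrm{Des}(Y) \;\Rightarrow\; \mathrm{Des}(\mathrm{Ant}(X)), \quad \text{even when } \lnot X$$

The refinement replaces the belief operator with the anticipation operator, which is what lets the far-mode map stay accurate.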
(Edit to add a practical example: a golfer envisions and attempts to anticipate every shot as if it were going to be a hole-in-one, even though most of them will not be… but in the process, achieves a better result than if s/he anticipated performing an average shot. Here, X is the perfect shot, and Y is the improved shot resulting from the visualization. The compartmentalization that must occur for this to work is that the “far” mind must not be allowed to break the golfer’s concentration by pointing out that the envisioned shot is a lie, and that one should therefore not be feeling the associated feelings.)
I think instrumental rationalists should perhaps follow a modified Tarski litany, “If I live in a universe where believing X gets me Y, and I wish Y, then I wish to believe X”. ;-)
Maybe. The main counter-argument concerns the side-effects of self-deception. Perhaps believing X will locally help me achieve Y, but perhaps the walls I put up in my mind to maintain my belief in X, in the face of all the not-X data that I am also needing to navigate, will weaken my ability to think, care, and act with my whole mind.
You can believe a falsity for the sake of utility while alieving a truth for the sake of sanity. Deep down you know you’re not the best golfer, but there’s no reason to critically analyze your delusion if believing it has been shown time and time again to make you a better golfer. The problems occur when your occupation is ‘FAI programmer’ or ‘neurosurgeon’ instead of ‘golfer’. But most of us aren’t FAI programmers or neurosurgeons; we just want to actually turn in our research papers on time.
It’s not even really that dangerous, as rationalists can reasonably expect their future selves to update on evidence that their past-inherited beliefs aren’t getting them utility (aren’t true): by this theory, passive avoidance of rationality is epistemically safer than active doublethink (which might not even be possible, as Eliezer points out). If something forces you to really pay attention to your false belief, then the active process of introspection will lead to it being destroyed by the truth.
Added: You know, now that I think about it more, the real distinction in question isn’t between aliefs and beliefs but between beliefs and beliefs-in-beliefs; at least that’s how it works when I introspect. I’m not sure whether studies show that performance is increased by belief-in-belief or whether the effect is limited to ‘real’ belief. Therefore my whole first paragraph above might be off-base; anyone know the literature? I just have the secondhand CliffsNotes pop-psych version. At any rate the second paragraph still seems reasonably clever… which is a bad sign.
Double added: Mike Blume’s post indicates my first paragraph may not have been off the mark. Belief in belief seems sufficient for performance enhancement. Actually, as far as I can tell, Blume’s post really just kinda wins the debate. Also see JamesAndrix’s comment.
Maybe. The main counter-argument concerns the side-effects of self-deception. Perhaps believing X will locally help me achieve Y, but perhaps the walls I put up in my mind to maintain my belief in X, in the face of all the not-X data that I am also needing to navigate, will weaken my ability to think, care, and act with my whole mind.
Honestly, this sounds to me like compartmentalization to protect the belief that decompartmentalization is useful, especially since the empirical evidence (both scientific experimentation and simple observation) is overwhelmingly in favor of instrumental advantages for the over-optimistic.
In any case, anticipating an experience has no truth value. I can anticipate having lunch now, for example; is that true or untrue? What if I have something different for lunch than I currently anticipate? Have I weakened my ability to think/care/act with my whole mind?
Also, if we are really talking about the whole mind, then one must consider the “near” mind as well as the “far” one… and they tend to be in resource competition for instrumental goals. To the extent that you think in a purely symbolic way about your goals, you weaken your motivation to actually do anything about them.
What I’m saying is, decompartmentalization of the “far” mind is all well and good, as is having consistency within the “near” mind, and in general, correlation of the near and far minds’ contents. But there are types of epistemic beliefs that we have scads of scientific evidence to show are empirically dangerous to one’s instrumental output, and should therefore be kept out of “near” anticipation.
The level of mental unity (I prefer this term to “decompartmentalization”) that makes it impossible to focus productively on a learnable physical/computational performance task is, fortunately, impossible to achieve, or at least easy to drop temporarily.
(Edit to add a practical example: a golfer envisions and attempts to anticipate every shot as if it were going to be a hole-in-one, even though most of them will not be… but in the process, achieves a better result than if s/he anticipated performing an average shot. Here, X is the perfect shot, and Y is the improved shot resulting from the visualization. The compartmentalization that must occur for this to work is that the “far” mind must not be allowed to break the golfer’s concentration by pointing out that the envisioned shot is a lie, and that one should therefore not be feeling the associated feelings.)
It seems to me there are two categories of mental events that you are calling anticipations. One category is predictions (which can be true or false, and honest or self-deceptive); the other is declarations, or goals (which have no truth-values). To have a near-mode declaration that you will hit a hole-in-one, and to visualize it and aim toward it with every fiber of your being, is not at all the same thing as near-mode predicting that you will hit a hole-in-one (and so being shocked if you don’t, betting piles of money on the outcome, etc.). But you’ve done more experiments here than I have; do you think the distinction between “prediction” and “declaration/aim” exists only in far mode?
not at all the same thing as near-mode predicting that you will hit a hole-in-one (and so being shocked if you don’t, betting piles of money on the outcome, etc.).
To be clear, one is compartmentalizing: deliberately separating the anticipation of “this is what I’m going to feel in a moment when I hit that hole-in-one” from the kind of anticipation that would let you place a bet on it.
This example is only one of many where compartmentalizing your epistemic knowledge from your instrumental experience is a damn good idea, because otherwise it would interfere with your ability to perform.
do you think the distinction between “prediction” and “declaration/aim” exists only in far mode?
What I’m saying is that decompartmentalization is dangerous to many instrumental goals, since epistemic knowledge of uncertainty can rob you of necessary clarity during the preparation and execution of your actual action and performance.
To perform confidently and with motivation, it is often necessary to think and feel “as if” certain things were true, which may in fact not be true.
Note, though, that with respect to the declaration/prediction divide you propose, Wiseman’s luck research doesn’t say anything about people declaring intentions to be lucky, AFAICT, only about people anticipating being lucky. This expectation seems to prime unconscious perceptual filters, as well as automatic motivations, that do not occur when people do not expect to be lucky.
I suspect that one reason this works well for vague expectations such as “luck” is that the expectation can be confirmed by many possible outcomes, and so is more self-sustaining than more-specific beliefs would be.
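A toy Bayesian sketch of why vagueness helps, with invented numbers (not anything from Wiseman’s data): a belief whose predictions are nearly as probable under the alternative generates likelihood ratios close to 1, so ordinary days can only nudge it, and almost never crash it.

```python
# Toy illustration with invented numbers: vague beliefs update slowly.
def odds_update(prior_odds, p_obs_if_belief, p_obs_if_not):
    """One Bayesian update on the odds of the belief, given an observation."""
    return prior_odds * (p_obs_if_belief / p_obs_if_not)

odds = 1.0  # start indifferent about "I'm lucky"
for _ in range(30):  # thirty ordinary days, each containing *something* nice
    # "I'm lucky" predicts "something good happens today" at 0.90;
    # the base rate without the belief is 0.85, so each day barely moves the odds.
    odds = odds_update(odds, p_obs_if_belief=0.90, p_obs_if_not=0.85)

print(round(odds, 2))  # ~5.56: a slow upward drift that no single day can refute
```

A specific belief like “I will win today’s raffle” would instead be falsified outright on most days.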
We can also consider Dweck and Seligman’s mindset and optimism research under the same umbrella: the “growth” mindset anticipates only that the learner will improve with effort over time, and the optimist merely anticipates that setbacks are not permanent, personal, or pervasive.
In all cases, AFAICT, these are actual beliefs held by the parties under study, not “declarations”. (I would guess the same also applies to the medical benefits of believing in a personally-caring deity.)
What I’m saying is that decompartmentalization is dangerous to many instrumental goals, since epistemic knowledge of uncertainty can rob you of necessary clarity during the preparation and execution of your actual action and performance.
Compartmentalization only seems necessary when actually doing things; actually hitting golf balls or acting in a play or whatever. But during down time, epistemic rationality does not seem to be harmed. Saying ‘optimists’ indicates that optimism is a near-constantly activated trait, which does sound like it would harm epistemic rationality. Perhaps realists could do as well as or better than optimists if they learned to emulate optimists only when actually doing things like golfing or acting, switching back to ‘realist’ mode as much as possible to ensure that the decompartmentalization algorithms are running at max capacity. This seems like plausible human behavior; at any rate, if realism as a trait doesn’t allow one to periodically be optimistic when necessary, then I worry that optimism as a trait wouldn’t allow one to periodically be realistic when necessary. The latter sounds more harmful, but I optimistically expect that such tradeoffs aren’t necessary.
Saying ‘optimists’ indicates that optimism is a near-constantly activated trait, which does sound like it would harm epistemic rationality. Perhaps realists could do as well as or better than optimists if they learned to emulate optimists only when actually doing things like golfing or acting,
I rather doubt that, since one of the big differences between the optimists and pessimists is the motivation to practice and improve, which needs to be active a lot more of the time than just while “doing something”.
If the choice is between, say, reading LessWrong and doing something difficult, my guess is the optimist will be more likely to work on the difficult thing, while the purely epistemic rationalist will get busy finding a way to justify reading LessWrong as being on task. ;-)
Don’t get me wrong, I never said I liked this characteristic of evolved brains. But it’s better not to fool ourselves about whether it’s better not to fool ourselves. ;-)
a golfer envisions and attempts to anticipate every shot as if it were going to be a hole-in-one, even though most of them will not be… but in the process, achieves a better result than if s/he anticipated performing an average shot.
Suppose there is a lake between the tee and the hole, too big for the golfer to hit the ball all the way across. Should he envision/anticipate a hole in one, and waste his first stroke hitting the ball into the water, or should he acknowledge that this hole will take multiple strokes, and hit the ball around the lake?
Suppose there is a lake between the tee and the hole, too big for the golfer to hit the ball all the way across. Should he envision/anticipate a hole in one, and waste his first stroke hitting the ball into the water, or should he acknowledge that this hole will take multiple strokes, and hit the ball around the lake?
Whatever will produce the better result.
Remember that the instrumental litany I proposed is, “If believing X will get me Y and I wish Y, then I wish to believe X.” If believing I’ll get a hole in one won’t get me a good golf score, and I want to get a good score, then I wouldn’t want to believe it.
Suppose there is a lake between the tee and the hole, too big for the golfer to hit the ball all the way across. Should he envision/anticipate a hole in one, and waste his first stroke hitting the ball into the water, or should he acknowledge that this hole will take multiple strokes, and hit the ball around the lake?
Depends. Do you want to win or do you want to get the girl?
Edit to add a practical example: a golfer envisions and attempts to anticipate every shot as if it were going to be a hole-in-one, even though most of them will not be… but in the process, achieves a better result than if s/he anticipated performing an average shot.
Really? That is, is that what the top golfers report doing, that the mediocre ones don’t?
If so, I am surprised. Aiming at a target does not mean believing I’m going to hit it. Aiming at a target means aiming at a target.
Really? That is, is that what the top golfers report doing, that the mediocre ones don’t?
My understanding is that top golfers do indeed pre-visualize every shot, though I doubt they visualize or expect holes-in-one. AFAIK, however, they do visualize something better than what they can reasonably expect to get, and performance always lags the visualization to some degree.
Aiming at a target does not mean believing I’m going to hit it.
What I’m saying is that if you really aim at it, this is functionally equivalent to believing, in that you are performing the same mental prerequisites: i.e., forming a mental image which you are not designating false, and acting as if it is true. That is more or less what “belief” is, at the “near” level of thinking.
To try to be more precise: the “acting as if” here is not acting in anticipation of hitting the target, but acting so as to bring it about—the purpose of envisioning the result (not just the action) is to call on the near system’s memory of previous successful shots in order to bring about the physical states (reference levels) that brought about the previous successes.
IOW, the belief anticipation here isn’t “I’m going to make this shot, so I should bet a lot of money”, it’s, “I’m going to have made this shot, therefore I need to stand in thus-and-such way and use these muscles like so while breathing like this” and “I’m going to make this shot, therefore I can be relaxed and not tense up and ruin it by being uncertain”.
It looks like a stretch to me, to call this a belief.
I’ve no experience of high-level golf, but I did at one time shoot on the county small-bore pistol team (before the law changed and the guns went away, but that’s even more of a mind-killing topic than politics in general). When I aim at a target with the intention of hitting it, belief that I will or won’t doesn’t come into the picture. Thinking about what is going to happen is just a distraction.
A month ago I made the longest cycle ride I have ever done. I didn’t visualise myself as having completed the ride or anything of that sort. I simply did the work.
Whatever wins, wins, of course, but I find either of the following more likely accounts of what this exercise of “belief” really is:
(1) What it feels like to single-mindedly pursue a goal.
(2) A technique to keep the mind harmlessly occupied and out of the way while the real work happens—what a coach might tell people to do, to produce that result.
In terms of control theory, a reference signal—a goal—is not an imagined perception. It is simply a reference signal.
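A minimal sketch may make that concrete (my own illustrative Python, not from the comment): the reference signal is just a setpoint the loop acts to approach; the controller stores no prediction that the setpoint will be reached.

```python
# Toy proportional controller: the reference signal is a goal, not a forecast.
def control_step(perception, reference, gain=0.3):
    error = reference - perception  # discrepancy between goal and current state
    return gain * error             # act so as to shrink the discrepancy

perception = 0.0
reference = 10.0  # the "aim": it has no truth value and encodes no expectation
for _ in range(50):
    perception += control_step(perception, reference)  # environment responds to action

print(round(perception, 2))  # ~10.0: the target is approached without ever being "believed"
```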
It looks like a stretch to me, to call this a belief.
At which point, we’re arguing definitions, because AFAICT the rest of your comment is not arguing that the process consists of something other than “forming a mental image which you are not designating false, and acting as if it is true.” You seem to merely be arguing that this process should not be called “belief”.
What is relevant, however, is that this is a process of compartmentalizing one’s thinking, so as to ignore various facts about the situation. Whether you call this a belief or not isn’t relevant to the main point: decompartmentalization can be hazardous to performance.
As far as I can tell, you are not actually disputing that claim. ;-)
You can’t call black white and then say that to dispute that is to merely talk about definitions. “Acting as if one believes”, if it means anything at all, must mean doing the same acts one would do if one believed. But you explicitly excluded betting on the outcome, a paradigmatic test of belief on LW.
Aiming at a target is not acting as if one were sure to hit the target. Visualising hitting the target is not acting as if one believes one will. These are different things, whatever they are called.
You can’t call black white and then say that to dispute that is to merely talk about definitions.
Even if you call it “froobling”, it doesn’t change my point in any way, so I don’t see the relevance of your reply… which is still not disputing my point about compartmentalization.
a golfer envisions and attempts to anticipate every shot as if it were going to be a hole-in-one, even though most of them will not be… but in the process, achieves a better result than if s/he anticipated performing an average shot… The compartmentalization that must occur for this to work is that the “far” mind must not be allowed to break the golfer’s concentration by pointing out that the envisioned shot is a lie, and that one should therefore not be feeling the associated feelings.
I think maybe the problem is that different neurological processes are being taken as the primary prototype of “compartmentalization” by Anna and yourself.
Performance-enhancing direction of one’s attention, so as not to be distracted in the N minutes prior to a critical performance, seems much different to me than the way the same person might calculatingly speculate about their own performance three days in advance while placing a side bet on themselves.
Volitional control over the contents of one’s working memory, with a thoughtful eye to the harmonization of your performance, your moment-to-moment mind-states, and your long-term mind-structures (like skills and declarative knowledge and such), seems like something that would help the golfer in both cases. In both cases there is some element of explicit calculating prediction (about the value of the bet or the golfing technique) that could be wrong, but whose rightness is likely to correlate with success in either the bet or the technique.
Part of the trick here seems to be that both the pro- and the anti-compartmentalization advice are abstract enough that both describe and might inspire good or bad behavior, and whether you think the advice is good or bad depends on which subsets of vaguely implied behavior are salient to you (based on skill estimates, typical situations, or whatever).
Rationalists, especially early on, still get hurt… they just shouldn’t get hurt twice in the same way if they’re “doing it right”.
Any mistake should make you double-check both the theory and its interpretation. The core claim of advocates of rationality is simply that there is a “there” there, that’s worth pursuing… that seven “rational iterations” into a process, you’ll be in a much better position than if you’d done ten things “at random” (two of which were basically repetitions of an earlier mistake).
In both cases there is some element of explicit calculating prediction (about the value of the bet or the golfing technique) that could be wrong, but whose rightness is likely to correlate with success in either the bet or the technique.
See Seligman’s optimism research. Optimists outperform pessimists and realists in the long run, in any task that requires motivation to develop skill. This strongly implies that an epistemically accurate assessment of your ability is a handicap to actual performance in such areas.
This kind of research can’t just be shrugged off with “seems like something that would help”, unless you want to drop epistemic rationality along with the instrumental. ;-)
I’m a fairly good calligrapher—the sort of good which comes from lots of attentive hours, though not focused experiments.
I’ve considered it a blessing that my ambition was always just a tiny bit ahead of what I was able to do. If I’d been able to see the difference between what I could do when I started and what I’m able to do now (let alone what people who are much better than I am are able to do), I think I would have given up. Admittedly, it’s a mixed blessing—it doesn’t encourage great ambition.
I hear about a lot of people who give up on making music because the difference between the sounds they can hear in their heads and the sounds they can produce at the beginning is simply too large.
In Effortless Mastery, Kenny Werner teaches thinking of every sound you make as the most beautiful sound, since he believes that the effort to sound good is a lot of what screws up musicians. I need to reread to see how he gets from there to directed practice, but he’s an excellent musician.
I’ve also gotten some good results filtering out background noise by telling myself “this is the most beautiful sound I’ve ever heard” rather than by trying to make out particular voices in a noisy bar.
Steve Barnes recommends high goal-setting and a minute of meditation every three hours to lower anxiety enough to pursue the goals. It’s worked well for him and seems to work well for some people. I’ve developed an ugh field about my whole fucking life as a result of paying attention to his stuff, and am currently working on undoing it. Surprisingly, draining the certainty out of self-hatred has worked much better than trying to do anything about the hostility.
I’ve considered it a blessing that my ambition was always just a tiny bit ahead of what I was able to do. If I’d been able to see the difference between what I could do when I started and what I’m able to do now (let alone what people who are much better than I am are able to do), I think I would have given up. Admittedly, it’s a mixed blessing—it doesn’t encourage great ambition.
That reminds me of another way in which more epistemic accuracy isn’t always useful: projects that I never would have started/finished if I had realized in advance how much work they’d end up being. ;-)
(I did something similar with the Litany of Tarski in my post):
If believing something that is false gets me utility, I desire to believe in that falsity;
If believing something that is true gets me utility, I desire to believe in that truth;
Let me not become attached to states of belief that do not get me utility.
I wrote a slightly less general version of the Litany of Tarski along similar lines, based on the one specific case I know of where believing something can produce utility:
If I can X, then I desire to believe I can X.
If believing that I cannot X would make it such that I could not X, and it is plausible that I can X, and there are no dire consequences for failure if I X, then I desire to believe I can X.
It is plausible that I can X.
There are no dire consequences for failure if I X.
The last two lines may be left off for some values of X, but usually shouldn’t be.
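Spelled out as a toy decision rule (my own paraphrase of the litany above; the predicate names are hypothetical):

```python
# Toy paraphrase of the modified litany: when do I *want* to believe "I can X"?
def desire_belief_in_can(actually_can, disbelief_self_fulfilling, plausible, dire_failure):
    if actually_can:
        return True  # "If I can X, then I desire to believe I can X."
    # Otherwise, only when disbelief would itself prevent X,
    # X is plausible, and failure at X would not be disastrous.
    return disbelief_self_fulfilling and plausible and not dire_failure
```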
I’ve been wondering about this lately. I don’t have a crisp answer as yet, though for practical reasons I’m definitely working on it.
That said, I don’t think your golfer example speaks to me about the nature of the potential danger. This looks to me like it’s highlighting the value of concretely visualizing goals in some situations.
Here are a few potential examples of the kind of phenomenon that nags at me:
I’m under the impression that I’m as physically strong as I am because I learned early on how to use the try harder for physical tasks. I noticed when I was a really young kid that if I couldn’t make something physically budge and then doubled my effort, I still had room to ramp up my effort further, and the object often gave way. (I would regularly test this around age 7 by trying to push buildings over.) Today this has cashed out as simple muscular strength, but when I run into something I can barely manage to move (such as a “portable” dance floor), my first instinct is still to use the try harder rather than to find an easier way of moving the thing.
This same instinct does not apply to endurance training, though. I do Tabata intervals and find my mind generating adamant reasons why three cycles is plenty. I attribute this to practicing thinking that I’m “bad at endurance stuff” from a young age.
Possibly relatedly, I don’t encounter injuries from doing weight-lifting at a gym, but every time I start a jogging regimen I get a new injury (iliotibial band syndrome, overstretching a tendon running inside my ankle, etc.). This could be coincidence, but it’s a weird one, and oddly consistent.
My impression is that I am emotionally capable of handling whatever I think I’m emotionally capable of handling, and conversely that I can’t handle what I think I can’t handle. For instance, when I’m in danger of being rejected in a social setting, I seem to have a good sense of whether that’s going to throw me emotionally off-kilter (being upset, feeling really hurt, having a harder time thinking clearly, etc.) and if so by roughly how much. That counts as evidence that I’m just good at knowing the range of emotional impacts I can handle—but the thing is, I seem to be able to game this. If I change how I think about the situation, I’m able to increase or decrease the emotional impact it has on me. Not without bound, but pretty significantly.
Whether I enjoy an outing with some friends seems to depend at least in part on my anticipation of how much fun we’re going to have. If I get excited enough, it takes some pretty major setbacks to keep me from enjoying myself.
I also faintly remember having heard of some research showing that people who are told a puzzle has been solved are better able to solve it than those who are told it’s unsolved. But I could be misremembering this by quite a bit. I do know that some people speculate that the Manhattan Project might owe a lot of its success to rumors that the Nazis already had the bomb and that the Americans were playing catch-up before the Nazis could build one.