not at all the same thing as near-mode predicting that you will hit a hole-in-one (and so being shocked if you don’t, betting piles of money on the outcome, etc.).
To be clear, one is compartmentalizing - deliberately separating the anticipation of “this is what I’m going to feel in a moment when I hit that hole-in-one” from the kind of anticipation that would let you place a bet on it.
This example is only one of many where compartmentalizing your epistemic knowledge from your instrumental experience is a damn good idea, because it would otherwise interfere with your ability to perform.
Do you think the distinction between “prediction” and “declaration/aim” exists only in far mode?
What I’m saying is that decompartmentalization is dangerous to many instrumental goals, since epistemic knowledge of uncertainty can rob you of necessary clarity during the preparation and execution of your actual action and performance.
To perform confidently and with motivation, it is often necessary to think and feel “as if” certain things were true, which may in fact not be true.
Note, though, that with respect to the declaration/prediction divide you propose, Wiseman’s luck research doesn’t say anything about people declaring intentions to be lucky, AFAICT, only anticipating being lucky. This expectation seems to prime unconscious perceptual filters as well as automatic motivations that do not occur when people do not expect to be lucky.
I suspect that one reason this works well for vague expectations such as “luck” is that the expectation can be confirmed by many possible outcomes, and so is more self-sustaining than more-specific beliefs would be.
We can also consider Dweck and Seligman’s mindset and optimism research under the same umbrella: the “growth” mindset anticipates only that the learner will improve with effort over time, and the optimist merely anticipates that setbacks are not permanent, personal, or pervasive.
In all cases, AFAICT, these are actual beliefs held by the parties under study, not “declarations”. (I would guess the same also applies to the medical benefits of believing in a personally-caring deity.)
What I’m saying is that decompartmentalization is dangerous to many instrumental goals, since epistemic knowledge of uncertainty can rob you of necessary clarity during the preparation and execution of your actual action and performance.
Compartmentalization only seems necessary when actually doing things; actually hitting golf balls or acting in a play or whatever. But during down time epistemic rationality does not seem to be harmed. Saying ‘optimists’ indicates that optimism is a near-constantly activated trait, which does sound like it would harm epistemic rationality. Perhaps realists could do as well as or better than optimists if they learned to emulate optimists only when actually doing things like golfing or acting, while switching to ‘realist’ mode as much as possible to ensure that the decompartmentalization algorithms are running at max capacity. This seems like plausible human behavior; at any rate, if realism as a trait doesn’t allow one to periodically be optimistic when necessary, then I worry that optimism as a trait wouldn’t allow one to periodically be realistic when necessary. The latter sounds more harmful, but I optimistically expect that such tradeoffs aren’t necessary.
Saying ‘optimists’ indicates that optimism is a near-constantly activated trait, which does sound like it would harm epistemic rationality. Perhaps realists could do as well as or better than optimists if they learned to emulate optimists only when actually doing things like golfing or acting,
I rather doubt that, since one of the big differences between the optimists and pessimists is the motivation to practice and improve, which needs to be active a lot more of the time than just while “doing something”.
If the choice is between, say, reading LessWrong and doing something difficult, my guess is the optimist will be more likely to work on the difficult thing, while the purely epistemic rationalist will get busy finding a way to justify reading LessWrong as being on task. ;-)
Don’t get me wrong, I never said I liked this characteristic of evolved brains. But it’s better not to fool ourselves about whether it’s better not to fool ourselves. ;-)