Software developer and mindhacking instructor. Interested in intelligent feedback (especially of the empirical testing variety) on my new (temporarily free) ebook, A Minute To Unlimit You.
It’s also easier to teach what the “right thing” is than to catalog all the possible wrong things. People, as it turns out, are good at coming up with NEW ways to be wrong.
“Fear of success” is a null concept; a name for a thing that doesn’t exist in and of itself. The fact that the thing someone’s afraid of is also labeled “success” (by the individual or others) doesn’t make the fear special or different in any way. In essence, there’s only fear of failure. It’s just that sometimes one form of success leads to another form of failure. In other words, it’s not “success” that people fear, just unpleasant futures.
Choking under pressure, meanwhile, is just a normal function of fear-induced shutdown, like a “deer in the headlights” freezing to avoid becoming prey. In humans, this response is counterproductive because human beings need to actively DO things (other than running away) to prevent problems. Animals don’t need to go to work to make money so they won’t go broke and starve a month later; they rely on positive drives for motivation.
Humans, however, freeze up when they get scared… even if it’s fear of something that’s going to happen later if they DON’T perform. This is a major design flaw, from an “idealized human” point of view, and in my work, most of what I teach people is about switching off this response or preventing it from arising in the first place, as it is (in my experience with my clients, anyway) the #1 cause of chronic procrastination.
In other words, I doubt truels had any direct influence on “fear of success” and “choking under pressure”; they are far too easily explained as side-effects of the combination of existing mechanisms (fear of a predicted outcome, and fear-induced shutdowns) and the wider reach of those mechanisms due to our enhanced ability to predict the future.
That is, we more easily imagine bad futures in connection with our outcomes than other animals do, making us more susceptible to creating cached links between our plans… and our fears about the futures that might arise from them.
For example, not too long ago during a workshop, I helped a man debug a procrastination issue with his work, where simply looking at a Camtasia icon on his screen was enough to induce a state of panic.
As it turned out, he’d basically conditioned himself to respond that way by thinking about how he didn’t really know enough to do the project he was responsible for—creating a cached link between the visual stimulus and the emotions associated with his expected futures. (In other words, early on he got as far as starting the program and getting in over his head… then with practice he got better and better at panicking sooner!)
And we do this stuff all the time, mostly without even noticing it.
I used to consider rationality and truthseeking to be terminal (intrinsic) values, but now I consider them secondary to happiness… or Fun Theory as Eliezer might call it.
IOW, “a rationalist should win.” And winning definitely includes Fun.
Rationality is a critical component of positive psychology, because it’s what you use to get rid of irrationally negative predictions, and thus restore your brain to its naturally overconfident positive state. In other words, in positive psychology, you want to pick and choose what biases you’re going to slice apart and which ones you’ll let stand.
More precisely, you want to ensure you’re irrationally positive about rationally-derived predictions. It’s one thing to know the risk of skydiving and be irrationally positive about doing it anyway; it’s another thing altogether to irrationally expect that you can do it without a parachute!
Thus, you want to be rational about your real-world predictions, but not necessarily rational about how much you’ll enjoy (or fail to enjoy) life, whether it has any real meaning, etc., etc. Be rational about the external world, and the effects of your actions on it. And even be rational about the operation of your brain, as a brain. But if you want to have Fun, I suggest remaining irrationally positive about how good things are in general, whether life has meaning, etc.
In these areas, it is rational to be a little irrational, if your intention is to WIN, rather than to feel good about your self-image as a person who’s willing to Sacrifice All for his/her rationality. That, too, is irrational.
I’ll go one step further and defend belief in belief, infinitely regressed. ;-) As you point out, the placebo effect here is simply the expectation of a positive result—and it applies equally at any level of recursion here.
Humans only need a convincing argument for predicting a positive result, not a rational proof of that prediction! Once the positive result is expected, we get positive emotions activated every time we think of anything linked to that result, leading to self-fulfilling prophecies on every level.
This being the case, one might question whether it’s rational to disbelieve in belief, if you have nothing equally beneficial to replace it with.
When it comes to external results, sure, it makes sense to have greater prediction accuracy. But for interior events (like confidence, creativity, self-esteem, etc.), biasing one’s predictions positively is a significant advantage, as it stabilizes what would otherwise be an unstable system of runaway feedback loops.
People whose systems are negatively biased, on the other hand, can get seriously stuck. They basically hit one little setback and become paralyzed because of runaway negative self-fulfilling prophecy.
(I’ve been such a person myself, and I’ve worked with/on many of them. Indeed, it was noticing that other, far less “rational” and “intelligent” individuals were much more confident, calm, and successful than I was, that led me to start seriously investigating the nature of mind and beliefs in the first place, and to begin noting the distinctions between people I dubbed “naturally successful” and those I considered “naturally struggling”.)
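To make that “runaway feedback loop” picture concrete, here is a minimal toy simulation (my own construction, with arbitrary numbers and an invented update rule, not anything taken from the comment above) of how a habitual prediction bias can stabilize or destabilize the confidence → performance → prediction loop:

```python
# Toy model of a confidence/performance feedback loop with a habitual
# prediction bias. All numbers and the update rule are arbitrary.

def run_loop(initial_confidence, prediction_bias, setback_at=3, steps=12):
    """Return the confidence trajectory (0.0 - 1.0) over `steps` rounds."""
    clamp = lambda x: max(0.0, min(1.0, x))
    confidence = initial_confidence
    history = []
    for step in range(steps):
        # Performance simply tracks confidence, minus a one-time external setback.
        performance = clamp(confidence - (0.3 if step == setback_at else 0.0))
        # The next prediction is the last performance plus the habitual bias...
        prediction = clamp(performance + prediction_bias)
        # ...and confidence drifts toward that prediction.
        confidence = clamp(0.5 * confidence + 0.5 * prediction)
        history.append(round(confidence, 2))
    return history

print("positive bias:", run_loop(0.6, +0.1))
print("negative bias:", run_loop(0.6, -0.1))
```

In this toy, the negatively biased run ratchets downward and the setback only speeds it up, while the positively biased run absorbs the same setback and recovers: the same instability (and stabilization) being described above.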
Perhaps I should’ve given an example: you can easily teach that the “right thing” is to run a profitable business, whose income exceeds its expenses, whose customers are fans of it, etc., etc. You can even teach in minute detail how each of these pieces is achieved.
What you can’t do is prevent all the ways that people can go and apply that “right” knowledge in the wrong way. The Dilbert comic strip is full of such examples: people taking good ideas about how to run a business and turning them into voodoo.
It’s “Guessing The Teacher’s Password”—you can’t stop someone with the wrong idea already in their head from taking what you tell them and processing it through that existing wrong idea, thereby making it wrong.
Dishonest or not, convincing yourself that you’re attractive to the opposite sex is more likely to produce a positive result. And a rationalist should win. ;-)
It seems to me that you are confused.
There are two kinds of belief being discussed here: abstract/declarative and concrete/imperative.
We don’t have direct control over our imperative beliefs, but can change them through clever self-manipulation. We DO have direct control over our declarative beliefs, and we can think whatever the heck we want in them. They just won’t necessarily make any difference to how we BEHAVE, since they’re part of the “far” or “social” thinking mechanism.
You seem to be implying that there’s only one kind of belief, and that it should be subject to some sort of consistency checking. However, NEITHER kind of belief has any global or automatic consistency checking. We can stop intellectually believing that we’re dumb or incompetent, for example, and still go on believing it emotionally, because although the abstract memory involved has been updated, the concrete memory hasn’t.
It isn’t even necessary to DO anything in order to have contradictory beliefs; it merely suffices to neglect the cross-checking, and perhaps a bit of effort to avoid thinking about the connection when somebody tries to show it to you.
And that avoidance can take place automatically, if you have a strong enough emotional reason for wanting to maintain the intellectual belief. Even among my clients who WANT to change some belief or fix some problem in their heads, the first step for me is always getting them to stop abstracting themselves away from actually looking at what they believe on the concrete/emotional level, as opposed to what they’d prefer to believe on the abstract/intellectual level.
Imagine how much harder it must be for someone who isn’t TRYING to change their beliefs!
Self-esteem is another one of those null concepts like “fear of success”. In my own work, for example, I’ve identified at least two (and maybe three) distinct mental processes by which behaviors described as “low self-esteem” can be produced.
One of the two could be thought of as “status-based”, but the actual mechanism seems more like comparison of behaviors and traits to valued (or devalued) behavioral examples. (For instance, you get called a crybaby and laughed at—and thus you learn that crying makes you a baby, and to be a “man” you must be “tough”.)
The other mechanism is based on the ability to evoke positive responses from others, and the behaviors one learns in order to evoke those responses. Which I suppose can also be thought of as status-based, but it’s very different in its operation. Response evocation motivates you to try different behaviors and imprint on ones that work, whereas role-judgment makes you try to conceal your less desirable behaviors and the negative identity associated with them. (Or, it motivates you to imitate and display admired traits and behaviors.)
Anyway, my main point was just to support your comments about evidence and falsifiability: rationalists should avoid throwing around high-level psychological terms like “procrastination” and “self-esteem” that don’t define a mechanism—they’re usually far too overloaded and abstract to be useful, à la “phlogiston”. If you want to be able to predict (or engineer!) esteem, you need to know more than that it contains a “status-ative principle”. ;-)
Actually, as you noticed—but didn’t notice that you noticed—no “striving” is actually required. What happened was simply that you had to translate abstract knowledge into concrete knowledge.
In each of the examples you gave, you created a metaphor or context reframe, based on imagining some specific sensory reality.
Because the emotional, “action”, or “near” brain doesn’t directly “get” conceptual abstractions. They have to be translated back into some kind of sensory representation first, and linked to the specific context where you want to change your (emotional/automatic) expectations.
A great example of one of the methods for doing this, is Byron Katie’s book, “Loving What Is”—which is all about getting people to emotionally accept the facts of a situation, and giving up their “shoulds”. Her approach uses four questions that postulate alternative realities, combined with a method of generating counterexamples (“turnarounds”, she calls them), which, if done in the same sort of “what if?” way that you imagined your friendly ghosts and probabilistic knife killers, produce emotional acceptance of a given reality—i.e., “loving what is”.
Hers is far from the only such method, though. There’s another approach, described by Morty Lefkoe in “Recreate Your Life”, which uses a different set of questions designed to elicit and re-interpret your evidence for an existing emotional belief. Robert Fritz’s books on the creative process demonstrate yet another set of questioning patterns, although not a formalized one.
And rational-emotive therapy, “learned optimism”, and cognitive-behavior therapy all have various questions and challenges of their own. (And I freely borrow from all of them in my client work.)
Of course, it’s easy to confuse these questioning patterns with logical arguments, and trying to convince yourself (or others) of something. But that not only misses the point, it doesn’t work. The purpose of questions like these is to get you to imagine other possibilities—other evidential interpretations and predictions—in a sensory way, in a specific sensory context, to update your emotional brain’s sensory prediction database.
In other words, to take abstractions from the “far” brain, and apply them to generate new sensory data for the “near” brain to process.
Viewed in this way, there’s no need to “struggle”—you simply need to know what the hell you’re doing. That is, have an “inside view” of the relationship between the “near” and “far” brains.
In other words, something that every rationalist should have. A rationalism that can’t fix (or at least efficiently work around) the bugs in its own platform isn’t very useful for Winning.
Struggle and striving are signs of confusion, not virtue. We need to understand the human platform, and program it effectively, instead of using up our extremely limited concentration and willpower by constantly fighting with it.
If you insist on believing it’s hard work, you can certainly make it such. But notice that Eliezer’s account indicates that once he chose suitable representations, the changes were immediate, or at least very quick. And that’s my experience with these methods also.
The difficult part of change isn’t changing your beliefs—it’s determining which beliefs you have that aren’t useful to you… and that therefore need changing.
That’s the bit that’s incredibly difficult, unless you have the advantage of a successful model in a given area. (Not unlike the difference between applying a programming pattern, and inventing a programming pattern.)
For example, I’d say that the utility of a belief in struggle being a requirement for rationality is very low. Such a belief only seemed attractive to me in the past, because it was associated with an idea of being noble. Dropping it enabled me to make more useful changes, a lot faster.
On a more general level, when someone is successfully doing something that I consider a struggle, and that person says that doing the thing is easy, the rational response is for me to want to learn more about their mental models and belief structure, in order to update my own.
Not to argue (however indirectly) that struggle—like death—is a good thing because it’s part of the natural order!
(This is also ignoring the part where “struggle” itself is a confusion: in reality, there is never anything to “strive for” OR “struggle against”; these are only emotional labels we attach to the map, that don’t actually exist in the territory. In reality, there are no problems or enemies, only facts. Time-consuming tasks exist, but this does not make them a struggle.)
My personal experience is that self-talk is only useful insofar as you’re using it to lead yourself to a sensory experience of some kind. For example, asking “What if [desired state of affairs] were true?” is far more useful than simply asserting it so. The former at least invites one to imagine something specific.
Repetition also isn’t as useful as most people seem to think. Your brain has little problem updating information immediately, if there’s sufficient emotion involved… and the “aha” of insight (i.e. reducing the modeling complexity required to explain your observations) counts as an emotion. If you have to repeat it over and over again—and it’s not a skill you’re practicing—you’re doing something wrong.
All of these terms—self-talk, visualization, and pretending—are also examples of Unteachable Excellence and Guessing The Teacher’s Password. You can equally use each of these terms to describe something useful (like asking good questions) or something ridiculous (like affirmations). The specific way in which you talk, visualize, or pretend is of critical importance.
For example, if you simply visualize some scripted scenario, rather than engaging in inquiry with yourself, you are wasting your time. The “near” brain needs to generate the details, not the “far” brain, or else you don’t get the right memories in context.
I’ll admit to a bit of hand-waving on that last part—I know that when my clients visualize, self-talk, or pretend in “scripted” ways (driven by conscious, logical, and “far” thinking), my tests show no change in belief or behavior, and that when they simply ask what-if questions and observe their mind’s response, the tests show changes. My guess is that this has something to do with the “reconsolidation” theory of memory: that activating a memory is required in order to change it. But I’m more of a pragmatist than a theorist in this area.
And when you imagine this, what concrete test are you imagining performing on the Jesuits before and after the 30 days’ visualization, in order to confirm that there was in fact behavioral change between the two points? ;-)
To be clear, I use tests of a person’s non-voluntary responses to imagined or recalled stimuli. I also prefer to get changes in test results on the order of 30 minutes (or 3 minutes for certain types of changes), rather than 30 days!
What’s more, I don’t instruct clients to use directed or scripted imagery or self-talk. In fact, I usually have to teach them NOT to do so.
Basically, when they apply a technique and get no change in test response, I go back over it with them, to find out what they did and how they did it. And the most common ways (by far) in which they’ve deviated from my instructions are making statements, directing their visualization, indulging in analytical and abstract thinking, or otherwise failing to engage the “near” system with sensory detail.
And as soon as I get them to correct this, we start getting results immediately.
Now, does that prove that you CAN’T get results through directed, argumentative, or repetitive thinking? No, because you can’t prove a negative.
However, please note that these are not people who disbelieve in self-talk, nor are they attempting to prove or disprove anything. Rather, they are simply not familiar with—or skilled in—a particular way of engaging their minds, and are just doing the same things they always do.
Which is, of course, why they get the same results they always do.
And it’s also why I have such a pet peeve about self-help and psych books that try to teach the Unteachable Excellence, without understanding that by default, people try to Guess The Teacher’s Password—that is, to somehow change things without ever actually doing anything different.
Practical psychology is still far too much alchemy, not enough chemistry. Rationalists must—and CAN—do much better than this.
Sounds a bit like de Bono’s Six Thinking Hats, as well.
I was going to try to list out some of the things I see as rationality, but Eliezer already did it much better, here:
I think, “whatever predictably makes you win, for an empirical, declared-in-advance definition of winning” might be a more accurate synopsis. ;-)
Actually, neither Craig nor David was rational, if it’s defined as “what makes you predictably win, for an empirical, declared-in-advance definition of winning”. Craig did not choose his beliefs in order to achieve some particular definition of winning. And David didn’t win… EVEN IF his declared-in-advance goals placed a higher utility on logical consistency than popularity or success.
Of course, the real flaw in your examples is that popularity isn’t directly created or destroyed through logical consistency or a lack thereof… although that idea seems to be a strangely common bias among people who are interested in logical consistency!
Unless David actually assigned ZERO utility to popularity, then he failed to “win” (in the sense of failing to achieve his optimum utility), by choosing actions that showed other people he valued logical consistency and correctness more than their pleasant company (or whatever else it was he did).
I’m not married to my off-the-cuff definition, and I’m certainly not claiming it’s comprehensive. But I think that a definition of rationality that does NOT include the things that I’m including—i.e. predicting maximal utility for a pre-defined utility function—would be severely lacking.
After all, note that this is dangerously close to Eliezer’s definition of “intelligence”: a process for optimizing the future according to some (implicitly, predefined) utility function.
And that definition is neither circular nor meaningless.
I think you mean: http://www.overcomingbias.com/2006/11/why_truth_and.html
I don’t know what you mean by “utilitarian”, but if you mean, “one who chooses his actions according to their desired results”, then how can you NOT be a utilitarian? That would indicate that either 1) you’re using a different utility function, or 2) you’re very, very confused.
Or to put it another way, if you say “I choose not to be a utilitarian”, you must be doing it because not being a utilitarian has some utility to you.
If you are arguing that truth is more important than utility in general, rather than being simply one component of a utility function, then you are simply describing what you perceive to be your utility function.
For human beings, all utility boils down to emotion of some kind. That is, if you are arguing that truth (or “rationality” or “validity” or “propriety” or whatever other concept) is most important, you can only do this because that idea makes you feel good… or because it makes you feel less bad than whatever you perceive the alternative is!
The problem with humans is that we don’t have a single, globally consistent, absolutely-determined utility function. We have a collection of ad-hoc, context-sensitive, relative utility and disutility functions. Hell, we can’t even make good decisions when looking at pros and cons simultaneously!
So, if intelligence is efficiently optimizing the future according to your utility function, then rationality could perhaps be considered the process of optimizing your local and non-terminal utility functions to better satisfy your more global ones.
(And I’d like to see how that conflicts with Aristotle—or any other “great” philosopher, for that matter—in a way that doesn’t simply amount to word confusion.)
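As a purely illustrative sketch (mine, not the commenter’s; the options, scores, and weights are all made up), here is what “optimizing local utility functions to better satisfy a global one” might look like in toy form: in-the-moment choices are scored by local weights, and we search for weights whose choices agree with the global utility.

```python
# Toy illustration of aligning "near" (local) scoring with a "far" (global)
# utility. All options, scores, and weights below are invented for the example.

from itertools import product

# Each option: (immediate comfort, long-term progress) on arbitrary 0-10 scales.
OPTIONS = {
    "browse the web":    (9, 0),
    "answer easy email": (6, 2),
    "write the report":  (2, 9),
}

def global_utility(comfort, progress):
    # The "far" view: progress matters much more than momentary comfort.
    return 1 * comfort + 5 * progress

def local_choice(weights):
    # The "near" view: pick whatever scores best under the current local weights.
    w_comfort, w_progress = weights
    return max(OPTIONS, key=lambda o: w_comfort * OPTIONS[o][0] + w_progress * OPTIONS[o][1])

def best_global_choice():
    return max(OPTIONS, key=lambda o: global_utility(*OPTIONS[o]))

# Crude "rationality" step: search over local weightings for one whose
# in-the-moment choice matches the globally best choice.
target = best_global_choice()
print("untrained local weights (3, 1) pick:", local_choice((3, 1)))
print("globally best option:", target)
for w in product(range(0, 6), repeat=2):
    if local_choice(w) == target:
        print("adjusted local weights", w, "now pick:", local_choice(w))
        break
```

The point is only structural: the “near” scoring rule stays simple, and rationality shows up as adjusting it until its outputs line up with the “far” evaluation.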
...which means that it’s a rational action to engage in rational reasoning. ;-)
Intelligence as a blind optimization process shaping the future—esp. in comparison with evolution—and how our built-in anthropomorphism makes us see intelligence as something other than a blind process, when in fact ALL intelligence is blind. Some intelligence processes are just a little less blind than others.
(Somewhat offtopic, but related: some studies show that the number of “good” ideas produced by any process is linearly proportional to the TOTAL number of ideas produced by that process… which suggests that even human intelligence searches blindly, once we go past the scope of our existing knowledge and heuristics.)