Cached Selves

by Anna Salamon and Steve Rayhawk (joint authorship)

Related to: Beware identity

A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, “any random thing that happens to you can hijack your judgment and personality for the next few minutes.”

Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do in the absence of obvious outside pressure can hijack your self-concept for the medium- to long-term future.

To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does… whatever it is your brain remembers you saying and doing. So if you say you believe X… especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”… you’re liable to “go on” believing X afterwards. Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.

For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself. If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends. If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.

All familiar phenomena, right? You probably already discount other people’s views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas. But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena. And so you might not realize how much arbitrary influence commitment and consistency effects are having on your own beliefs, or how you can reduce that influence. (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)

Consider the following research.

In the classic 1959 study by Festinger and Carlsmith, test subjects were paid to tell others that a tedious experiment had been interesting. Those who were paid $20 to tell the lie continued to believe the experiment boring; those paid a mere $1 to tell the lie were liable later to report the experiment interesting. The theory is that the test subjects remembered calling the experiment interesting, and either:

  1. Honestly figured they must have found the experiment interesting—why else would they have said so for only $1? (This interpretation is called self-perception theory.), or

  2. Didn’t want to think they were the type to lie for just $1, and so deceived themselves into thinking their lie had been true. (This interpretation is one strand within cognitive dissonance theory.)


In a follow-up, Jonathan Freedman used threats to convince 7- to 9-year-old boys not to play with an attractive, battery-operated robot. He also told each boy that such play was “wrong”. Some boys were given big threats, or were kept carefully supervised while they played—the equivalents of Festinger’s $20 bribe. Others were given mild threats, and left unsupervised—the equivalent of Festinger’s $1 bribe. Later, instead of asking the boys about their verbal beliefs, Freedman arranged to test their actions. He had an apparently unrelated researcher leave the boys alone with the robot, this time giving them explicit permission to play. The results were as predicted. Boys who’d been given big threats or had been supervised, on the first round, mostly played happily away. Boys who’d been given only the mild threat mostly refrained. Apparently, their brains had looked at their earlier restraint, seen no harsh threat and no experimenter supervision, and figured that not playing with the attractive, battery-operated robot was the way they wanted to act.

One interesting take-away from Freedman’s experiment is that consistency effects change what we do—they change the “near thinking” beliefs that drive our decisions—and not just our verbal/propositional claims about our beliefs. A second interesting take-away is that this belief-change happens even if we aren’t thinking much—Freedman’s subjects were children, and a related “forbidden toy” experiment found a similar effect even in pre-schoolers, who just barely have propositional reasoning at all.

Okay, so how large can such “consistency effects” be? And how obvious are these effects—now that you know the concept, are you likely to notice when consistency pressures change your beliefs or actions?

In what is perhaps the most unsettling study I’ve heard along these lines, Freedman and Fraser had an ostensible “volunteer” go door-to-door, asking homeowners to put a big, ugly “Drive Safely” sign in their yard. In the control group, homeowners were just asked, straight-off, to put up the sign. Only 19% said yes. With this baseline established, Freedman and Fraser tested out some commitment and consistency effects. First, they chose a similar group of homeowners, and they got a new “volunteer” to ask these new homeowners to put up a tiny three-inch “Drive Safely” sign; nearly everyone said yes. Two weeks later, the original volunteer came along to ask about the big, badly lettered signs—and 76% of the group said yes, perhaps moved by their new self-image as people who cared about safe driving. Consistency effects were working.

The unsettling part comes next; Freedman and Fraser wanted to know how apparently unrelated the consistency prompt could be. So, with a third group of homeowners, they had a “volunteer” for an ostensibly unrelated non-profit ask the homeowners to sign a petition to “keep America beautiful”. The petition was innocuous enough that nearly everyone signed it. And two weeks later, when the original guy came by with the big, ugly signs, nearly half of the homeowners said yes—a significant boost above the 19% baseline rate. Notice that the “keep America beautiful” petition that prompted these effects was: (a) a tiny and un-memorable choice; (b) on an apparently unrelated issue (“keeping America beautiful” vs. “driving safely”); and (c) made two weeks before the sign request (so we are observing medium-term attitude change from a single, brief interaction).

These consistency effects are reminiscent of Yvain’s large, unnoticed priming effects—except that they’re based on your actions rather than your sense-perceptions, and the influences last over longer periods of time. Consistency effects make us likely to stick to our past ideas, good or bad. They make it easy to freeze ourselves into our initial postures of disagreement, or agreement. They leave us vulnerable to a variety of sales tactics. They mean that if I’m working on a cause, even a “rationalist” cause, and I say things to try to engage new people, befriend potential donors, or get core group members to collaborate with me, my beliefs are liable to move toward whatever my allies want to hear.

What to do?

Some possible strategies (I’m not recommending these, just putting them out there for consideration):

  1. Reduce external pressures on your speech and actions, so that you won’t make so many pressured decisions, and your brain won’t cache those pressure-distorted decisions as indicators of your real beliefs or preferences. For example:

    • 1a. Avoid petitions, and other socially prompted or incentivized speech. Cialdini takes this route, in part. He writes: “[The Freedman and Fraser study] scares me enough that I am rarely willing to sign a petition anymore, even for a position I support. Such an action has the potential to influence not only my future behavior but also my self-image in ways I may not want.”

    • 1b. Tenure, or independent wealth.

    • 1c. Anonymity.

    • 1d. Leave yourself “social lines of retreat”: avoid making definite claims of a sort that would be embarrassing to retract later. Another tactic here is to tell people in advance that you often change your mind, so that you’ll be under less pressure not to.

  2. Only say things you don’t mind being consistent with. For example:

    • 2a. Hyper-vigilant honesty. Take care never to say anything but what is best supported by the evidence, aloud or to yourself, lest you come to believe it.

    • 2b. Positive hypocrisy. Speak and act like the person you wish you were, in hopes that you’ll come to be them. (Apparently this works.)

  3. Change or weaken your brain’s notion of “consistent”. Your brain has to be using prediction and classification methods in order to generate “consistent” behavior, and these can be hacked.

    • 3a. Treat $1 like a gun. Regard the decisions you made under slight monetary or social incentives as like decisions you made at gunpoint—decisions that say more about the external pressures you were under, or about random dice-rolls in your brain, than about the truth. Take great care not to rationalize your past actions.

    • 3b. Build emotional comfort with lying, so you won’t be tempted to rationalize your last week’s false claim, or your next week’s political convenience. Perhaps follow Michael Vassar’s suggestion to lie on purpose in some unimportant contexts.

    • 3c. Reframe your past behavior as having occurred in a different context, and as not bearing on today’s decisions. Or add context cues to trick your brain into regarding today’s decision as belonging to a different category than past decisions. This is, for example, part of how conversion experiences can help people change their behavior. (For a cheap hack, try traveling.)

    • 3d. More specifically, visualize your life as something you just inherited from someone else; ignore sunk words as you would aspire to ignore sunk costs.

    • 3e. Re-conceptualize your actions into schemas you don’t mind propagating. If you’ve just had some conversations and come out believing the Green Sky Platform, don’t say “so, I’m a Green Sky-er”. Say “so, I’m someone who changes my opinions based on conversation and reasoning”. If you’ve incurred repeated library fines, don’t say “I’m so disorganized, always and everywhere”. Say “I have a pattern of forgetting library due dates; still, I’ve been getting more organized in other areas of my life, and I’ve changed harder habits many times before.”

  4. Make a list of the most important consistency pressures on your beliefs, and consciously compensate for them. You might either consciously move in the opposite direction (I know I’ve been hanging out with singularitarians, so I somewhat distrust my singularitarian impressions) or take extra pains to apply rationalist tools to any opinions you’re under consistency pressure to have. Perhaps write public or private critiques of your consistency-reinforced views (though Eliezer notes reasons for caution with this one).

  5. Build more reliably truth-indicative types of thought. Ultimately, both priming and consistency effects suggest that our baseline sanity level is low; if small interactions can have large, arbitrary effects, our thinking is likely pretty arbitrary to begin with. Some avenues of approach:

    • 5a. Improve your general rationality skill, so that your thoughts have something else to be driven by besides your random cached selves. (It wouldn’t surprise me if OB/LW-ers are less vulnerable than average to some kinds of consistency effects. We could test this.)

    • 5b. Take your equals’ opinions as seriously as you take the opinions of your ten-minutes-past self. If you often discuss topics with a comparably rational friend, and you two usually end with the same opinion-difference you began with, ask yourself why. An obvious first hypothesis should be “irrational consistency effects”: maybe you’re holding onto particular conclusions, modes of analysis, etc., just because your self-concept says you believe them.

    • 5c. Work more often from the raw data; explicitly distrust your beliefs about what you previously saw the evidence as implying. Re-derive the wheel, animated by a core distrust in your past self or cached conclusions. Look for new thoughts.