Status Regulation and Anxious Underconfidence

Follow-up to: Against Modest Epistemology


I’ve now given my critique of modesty as a set of explicit doctrines. I’ve tried to give the background theory, which I believe is nothing more than conventional cynical economics, that explains why so many aspects of the world are not optimized to the limits of human intelligence in the manner of financial prices. I have argued that the essence of rationality is to adapt to whatever world you find yourself in, rather than to be “humble” or “arrogant” a priori. I’ve tried to give some preliminary examples of how we really, really don’t live in the Adequate World where constant self-questioning would be appropriate, the way it is appropriate when second-guessing equity prices. I’ve tried to systematize modest epistemology into a semiformal rule, and I’ve argued that the rule yields absurd consequences.

I was careful to say all this first, because there’s a strict order to debate. If you’re going to argue against an idea, it’s bad form to start off by arguing that the idea was generated by a flawed thought process, before you’ve explained why you think the idea itself is wrong. Even if we’re refuting geocentrism, we should first say how we know that the Sun does not orbit the Earth, and only then pontificate about what cognitive biases might have afflicted geocentrists. As a rule, an idea should initially be discussed as though it had descended from the heavens on a USB stick spontaneously generated by an evaporating black hole, before any word is said psychoanalyzing the people who believe it. Otherwise I’d be guilty of poisoning the well, also known as Bulverism.

But I’ve now said quite a few words about modest epistemology as a pure idea. I feel comfortable at this stage saying that I think modest epistemology’s popularity owes something to its emotional appeal, as opposed to being strictly derived from epistemic considerations. In particular: emotions related to social status and self-doubt.

Even if I thought modesty were the correct normative epistemology, I would caution people not to confuse the correct reasoning principle with those particular emotional impulses. You’ll observe that I’ve written one or two things above about how not to analyze inadequacy, and mistakes not to make. If we’re going to take modest epistemology seriously as a basic reasoning mode, technique, or principle, we hear far too little from its advocates about its potential misuses and distortions.

And I’ll now try to describe the kinds of feelings that I think modesty’s appeal rests on. Because I’ve come to appreciate increasingly that human beings are really genuinely different from one another, you shouldn’t be surprised if it seems to you like this is not how you work. I claim nonetheless that many people do work like this.

i.

Let’s start with the emotion—not restricted to cases of modesty, just what I suspect to be a common human emotion—of “anxious underconfidence.”

As I started my current writing session, I had returned just ten minutes earlier from the following conversation with someone looking for a job in the Bay Area that would give them relevant experience for running their own startup later:


eliezer: Are you a programmer?

aspiring founder: That’s what everyone asks. I’ve programmed at all of my previous jobs, but I wouldn’t call myself a programmer.

eliezer: I think you should try asking (person) if they know of any startups that could use non-super programmers, and look for a non-doomed startup that’s still early-stage enough that you can be assigned some business jobs and get a chance to try your hand at that without needing to manage it yourself. That might get you the startup experience you want.

aspiring founder: I know how to program, but I don’t know if I can display that well enough. I don’t have a GitHub account. I think I’d have to spend three months boning up on programming problems before I could do anything like the Google interview—or maybe I could do one of the bootcamps for programmers—

eliezer: I’m not sure if they’re aimed at your current skill level. Why don’t you try just one interview and see how that goes before you make any complicated further plans about how to prove your skills?


This fits into a very common pattern of advice I’ve found myself giving, along the lines of, “Don’t assume you can’t do something when it’s very cheap to try testing your ability to do it,” or, “Don’t assume other people will evaluate you poorly when it’s cheap to test that belief.”

I try to be careful to distinguish the virtue of avoiding overconfidence, which I sometimes call “humility,” from the phenomenon I’m calling “modest epistemology.” But even so, when overconfidence is such a terrible scourge according to the cognitive bias literature, can it ever be wise to caution people against underconfidence?

Yes. First of all, overcompensation after being warned about a cognitive bias is also a recognized problem in the literature, and that literature discusses how bad people often are at determining whether they’re undercorrecting or overcorrecting.1 Second, my own experience has been that while, yes, commenters on the Internet are often overconfident, it’s very different when I’m talking to people in person. My more recent experience seems to be about 90% telling people to be less underconfident (to reach higher, to be more ambitious, to test themselves) and maybe 10% cautioning people against overconfidence. And yes, this ratio applies to men as well as women and nonbinary people, and to people considered high-status as well as people considered low-status.

Several people have now told me that the most important thing I have ever said to them is: “If you never fail, you’re only trying things that are too easy and playing far below your level.” Or, phrased as a standard Umeshism: “If you can’t remember any time in the last six months when you failed, you aren’t trying to do difficult enough things.” I first said it to someone who had set themselves on a career track to become a nurse instead of a physicist, even though they liked physics, because they were sure they could succeed at becoming a nurse.

I call this “anxious underconfidence,” and it seems to me to share a common thread with social anxiety. We might define “social anxiety” as “experiencing fear far out of proportion to what a third party would say are the reasonably predictable exterior consequences, with respect to other people possibly thinking poorly of you, or wanting things from you that you can’t provide.” If someone is terrified of being present at a large social event because someone there might talk to them and they might get confused and stutter out an answer—when, realistically, this at worst makes a transient poor impression that is soon forgotten, because you are not at the center of the other person’s life—then this is an excess fear of that event.

Similarly, many people’s emotional makeup is such that they experience what I would consider an excess fear—a fear disproportionate to the non-emotional consequences—of trying something and failing. A fear so strong that you become a nurse instead of a physicist because that is something you are certain you can do. Anything you might not be able to do is crossed off the list instantly. In fact, it was probably never generated as a policy option in the first place. Even when the correct course is obviously to just try the job interview and see what happens, the test will be put off indefinitely if failure feels possible.

If you’ve never wasted an effort, you’re filtering on far too high a required probability of success. Trying to avoid wasting effort—yes, that’s a good idea. Feeling bad when you realize you’ve wasted effort—yes, I do that too. But some people slice off the entire realm of uncertain projects because the prospect of having wasted effort, of having been publicly wrong, seems so horrible that projects in this class are not to be considered.

This is one of the emotions that I think might be at work in recommendations to take an outside view on your chances of success in some endeavor. If you only try the things that are allowed for your “reference class,” you’re supposed to be safe—in a certain social sense. You may fail, but you can justify the attempt to others by noting that many others have succeeded on similar tasks. On the other hand, if you try something more ambitious, you could fail and have everyone think you were stupid to try.

The mark of this vulnerability, and the proof that it is indeed a fallacy, would be not testing the predictions that the modest point of view makes about your inevitable failures—even when they would be cheap to test, and even when failure doesn’t lead to anything that a non-phobic third party would rate as terrible.

ii.

The other emotions I have in mind are perhaps easiest to understand in the context of efficient markets.

In humanity’s environment of evolutionary adaptedness, an offer of fifty carrots for a roasted antelope leg reflects a judgment about roles, relationships, and status. This idea of “price” is easier to grasp than the economist’s notion; and if somebody doesn’t have the economist’s very specific notion in mind when you speak of “efficient markets,” they can end up making what I would consider an extremely understandable mistake.

You tried to explain to them that even if they thought AAPL stock was underpriced, they ought to question themselves. You claimed that they couldn’t manage to be systematically right on the occasions where the market price swung drastically. Not unless they had access to insider information on single stocks—which is to say, they just couldn’t do it.

But “I can’t do that. And you can’t either!” is a suspicious statement in everyday life. Suppose I try to juggle two balls and succeed, and then I try to juggle three balls and drop them. I could conclude that I’m bad at juggling and that other people could do better than me, which comes with a loss of status. Alternatively, I could heave a sad sigh as I come to realize that juggling more than two balls is just not possible. Whereupon my social standing in comparison to others is preserved. I even get to give instruction to others about this hard-won life lesson, and smile with sage superiority at any young fools who are still trying to figure out how to juggle three balls at a time.

I grew up with this fallacy, in the form of my Orthodox Jewish parents smiling at me and explaining how when they were young, they had asked a lot of religious questions too; but then they grew out of it, coming to recognize that some things were just beyond our ken.

At the time, I was flabbergasted at my parents’ arrogance in assuming that because they couldn’t solve a problem as teenagers, nobody else could possibly solve it going forward. Today, I understand this viewpoint not as arrogance, but as a simple flinch away from a painful thought and toward a pleasurable one. You can admit that you failed where success was possible, or you can smile with gently forgiving superiority at the youthful enthusiasm of those who are still naive enough to attempt to do better.

Of course, some things are impossible. But if one’s flinch response to failure is to perform a mental search for reasons one couldn’t have succeeded, it can be tempting to slide into false despair.

In the book Superforecasting, Philip Tetlock describes the number one characteristic of top forecasters, who show the ability to persistently outperform professional analysts and even small prediction markets: they believe that outperformance in forecasting is possible, and work to improve their performance.2

I would expect this to come as a shock to people who grew up steeped in academic studies of overconfidence and took away the lesson that epistemic excellence is mostly about accepting your own limitations.3 But I read that chapter of Superforecasting and laughed, because I was pretty sure from my own experience that I could guess what had happened to Tetlock: he had run into large numbers of respondents who smiled condescendingly at the naive enthusiasm of those who thought that anyone can get good at predicting future events.4

Now, imagine you’re somebody who didn’t read Superforecasting, but did at least grow up with parents telling you that if they’re not smart enough to be a lawyer, then neither are you. (As happened to a certain childhood friend of mine who is now a lawyer.)

And then you run across somebody who tries to tell you, not just that they can’t outguess the stock market, but that you’re not allowed to become good at it either. They claim that nobody is allowed to master the task at which they failed. Your uncle tripled his savings when he bet it all on GOOG, and this person tries to wave it off as luck. Isn’t that like somebody condescendingly explaining why juggling three balls is impossible, after you’ve seen with your own eyes that your uncle can juggle four?

This isn’t a naive question. Somebody who has seen the condescension of despair in action is right to treat this kind of claim as suspicious. It ought to take a massive economics literature examining the idea in theory and in practice, and responding to various apparent counterexamples, before we accept that a new kind of near-impossibility has been established in a case where the laws of physics seem to leave the possibility open.

Perhaps what you said to the efficiency skeptic was something like:

If it’s obvious that AAPL stock should be worth more because iPhones are so great, then a hedge fund manager should be able to see this logic too. This means that this information will already be baked into the market price. If what you’re saying is true, the market already knows it—and what the market knows beyond that, neither you nor I can guess.

But what they heard you saying was:

O thou, who burns with tears for those who burn,
In Hell, whose fires will find thee in thy turn
Hope not the Lord thy God to mercy teach
For who art thou to teach, or He to learn?5

This again is an obvious fallacy for them to suspect you of committing. They’re suggesting that something might be wrong with Y’s judgment of X, and you’re telling them to shut up because Y knows far better than them. Even though you can’t point to any flaws in the skeptic’s suggestion, can’t say anything about the kinds of reasons Y has in mind for believing X, and can’t point them to the information sources Y might be drawing from. And it just so happens that Y is big and powerful and impressive.

If we could look back at the ages before liquid financial markets existed, and record all of the human conversations that went on at the time, then practically every instance in history of anything that sounded like what you said about efficient markets—that some mysterious powerful being is always unquestionably right, though the reason be impossible to understand—would have been a mistake or a lie. So it’s hard to blame the skeptic for being suspicious, if they don’t yet understand how market efficiency works.

What you said to the skeptic about AAPL stock is justified for extremely liquid markets on short-term time horizons, but—at least I would claim—very rarely justified anywhere else. The claim is, “If you think you know the price of AAPL better than the stock market, then no matter how good the evidence you think you’ve found is, your reasoning just has some hidden mistake, or is neglecting some unspecified key consideration.” And no matter how valiantly they argue, no matter how carefully they construct their reasoning, we just smile and say, “Sorry, kid.” It is a final and absolute slapdown that is meant to be inescapable by any mundane means within a common person’s grasp.

Indeed, this supposedly inescapable and crushing rejoinder looks surprisingly similar to a particular social phenomenon I’ll call “status regulation.”

iii.

Status is an extremely valuable resource, and was valuable in the ancestral environment.

Status is also a somewhat conserved quantity. Not everyone can be sole dictator.

Even if a hunter-gatherer tribe or a startup contains more average status per person than a medieval society full of downtrodden peasants, there’s still a sense in which status is a limited resource, and letting someone walk off with lots of status is like letting them walk off with your bag of carrots. So it shouldn’t be surprising if acting like you have more status than I assign to you triggers a negative emotion, a slapdown response.

If slapdowns exist to limit access to an important scarce resource, we should expect them to be cheater-resistant in the face of intense competition for that resource.6 If just anyone could find some easy sentences to say that let them get higher status than God, then your system for allocating status would be too easy to game. Escaping slapdowns should be hard, generally requiring more than mere abstract argumentation.

Except that people are different. So not everyone feels the same way about this, any more than we all feel the same way about sex.

As I’ve increasingly noticed of late, and contrary to beliefs earlier in my career about the psychological unity of humankind, not all human beings have all the human emotions. The logic of sexual reproduction makes it unlikely that anyone will have a new complex piece of mental machinery that nobody else has… but absences of complex machinery aren’t just possible; they’re amazingly common.

And we tend to underestimate how different other people are from ourselves. Once upon a time, there was a great and acrimonious debate in philosophy about whether people have “mental imagery” (whether or not people actually see a little picture of an elephant when they think about an elephant). It later turned out that some people see a little picture of an elephant, some people don’t, and both sides thought that the way they personally worked was so fundamental to cognition that they couldn’t imagine that other people worked differently. So both sides of the philosophical debate thought the other side was just full of crazy philosophers who were willfully denying the obvious. The typical mind fallacy is the bias whereby we assume most other people are much more like us than they actually are.

If you’re fully asexual, then you haven’t felt the emotion others call “sexual desire”… but you can feel friendship, the warmth of cuddling, and in most cases you can experience orgasm. If you’re not around people who talk explicitly about the possibility of asexuality, you might not even realize you’re asexual and that there is a distinct “sexual attraction” emotion you are missing, just like some people with congenital anosmia never realize that they don’t have a sense of smell.

Many people seem to be the equivalent of asexual with respect to the emotion of status regulation—myself among them. If you’re blind to status regulation (or even status itself), then you might still see that people with status get respect, and hunger for that respect. You might see someone with a nice car and envy the car. You might see a horrible person with a big house and think that their behavior ought not to be rewarded with a big house, and feel bitter about the smaller house you earned by being good. I can feel all of those things, but people’s overall place in the pecking order isn’t a fast, perceptual, pre-deliberative thing for me in its own right.

For many people, I gather that the social order is a reified emotional thing separate from respect, separate from the goods that status can obtain, separate from any deliberative reasoning about who ought to have those goods, and separate from any belief about who consented to be part of an implicit community agreement. There’s just a felt sense that some people are lower in various status hierarchies, while others are higher; and overreaching by trying to claim significantly more status than you currently have is an offense against the reified social order, which has an immediate emotional impact, separate from any beliefs about the further consequences that a social order causes. One may also have explicit beliefs about possible benefits or harms that could be caused by disruptions to the status hierarchy, but the status regulation feeling is more basic than that and doesn’t depend on high-level theories or cost-benefit calculations.

Consider, in this context, the efficiency skeptic’s perspective:


skeptic: I have to say, I’m baffled at your insistence that hedge fund managers are the summit of worldly wisdom. Many hedge fund managers—possibly most—are nothing but charlatans who convince pension managers to invest money that ought to have gone into index funds.

cecie: Markets are a mechanism that allows and incentivizes a single smart participant to spot a bit of free energy and eat it, in a way that aggregates to produce a global equilibrium with no free energy. We don’t need to suppose that most hedge fund managers are wise; we only need to suppose that in each case a tiny handful of market actors are smart enough to have already seen what you saw.

skeptic: I’m not sure I understand. It sounds like what you’re saying, though, is that your faith is not in mere humans, but in some mysterious higher force, the “Market.”

You consider this Market incredibly impressive and powerful. You consider it folly for anyone to think that they can know better than the Market. And you just happen to have on hand a fully general method for slapping down anyone who dares challenge the Market, without needing to actually defend this or that particular belief of the Market.

cecie: A market’s efficiency doesn’t derive from its social status. True efficiency is very rare in human experience. There’s a very good reason we had to coin a term for the concept of “efficient markets,” and not “efficient medicine” or “efficient physics”: in those ecologies, not just anyone can come along and consume a morsel of free energy.

If you personally know better than the doctors in a hospital, you can’t walk in off the street tomorrow and make millions of dollars saving more patients’ lives. If you personally know better than an academic field, you can’t walk in off the street tomorrow and make millions of dollars filling the arXiv with more accurate papers.

skeptic: I don’t know. The parallels between efficiency and human status relations seem awfully strong, and this “Market moves in mysterious ways” rejoinder seems like an awfully convenient trick.


Indeed, I would be surprised if there weren’t at least some believers in “efficient markets” who assigned them extremely high status and were tempted to exaggerate their efficiency, perhaps feeling a sense of indignation at those who dared to do better. Perhaps there are people who feel an urge to slap down anyone who starts questioning the efficiency of Boomville’s residential housing market.

So be it; Deepak Chopra can’t falsify quantum mechanics by being enthusiastic about a distorted version of it. The efficiency skeptic should jettison their skepticism, and should take care to avoid the fallacy fallacy—the fallacy of taking for granted that some conclusion is false just because a fallacious argument for that conclusion exists.7

I once summarized my epistemology like so: “Try to make sure you’d arrive at different beliefs in different worlds.” You don’t want to think in such a way that you would fail to believe a conclusion in the world where it was true, just because a fallacious argument could support it. Emotionally appealing mistakes are not invincible cognitive traps that nobody can ever escape from. Sometimes they’re not even that hard to escape.
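
One way to make that slogan concrete (this is just Bayes’s theorem, nothing beyond the standard identity; $H$ and $E$ are generic placeholders for a hypothesis and the verdict your thought process outputs):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}.$$

If $P(E \mid H) = P(E \mid \lnot H)$, so that your reasoning would have produced the same verdict in the world where $H$ is true and in the world where it is false, then the right-hand side collapses to $P(H)$: the “evidence” moves you nowhere.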

The remedy, as usual, is technical understanding. If you know in detail when a phenomenon switches on and off, and when the “inescapable” slapdown is escapable, you probably won’t map it onto God.

iv.

I actually can’t recall seeing anyone make the mistake of treating efficient markets like high-status authorities in a social pecking order.8 The more general phenomenon seems quite common, though: heavily weighting relative status in determining odds of success; responding to overly ambitious plans as though they were not merely imprudent but impudent; and privileging the hypothesis that authoritative individuals and institutions have mysterious unspecified good reasons for their actions, even when these reasons stubbornly resist elicitation and the actions are sufficiently explained by misaligned incentives.

From what I can tell, status regulation is a second factor accounting for modesty’s appeal, distinct from anxious underconfidence. The impulse is to construct “cheater-resistant” slapdowns that can (for example) prevent dilettantes who are low on the relevant status hierarchy from proposing new SAD treatments. Because if dilettantes can exploit an inefficiency in a respected scientific field, then this makes it easier to “steal” status and upset the current order.

In the past, I didn’t understand that an important part of status regulation, as most people experience it, is that one needs to already possess a certain amount of status before it’s seen as acceptable to reach up for a given higher level of status. What could be wrong (I previously thought) with trying to bestow unusually large benefits upon your tribe? I could understand why it would be bad to claim that you had already accomplished more than you had—to claim more respect than was due the good you’d already done. But what could be wrong with trying to do more good for the tribe, in the future, than you already had in the present?

It took me a long time to understand that trying to do interesting things in the future is a status violation, because your current status determines what kinds of images you are allowed to associate with yourself. If your status is low, then many people will intuitively perceive an unpleasant violation of the social order should you associate yourself with an image of possible future success above some level. Only people who already have something like an aura of pre-importance are allowed to try to do important things. Publicly setting out to do valuable and important things eventually is a reach above the status you already have now, and will generate an immediate System 1 slapdown reaction.

I recognize now that this is a common lens through which people see the world, though I still don’t know how it feels to feel that.

Regardless, when I see a supposed piece of epistemology that looks to me an awful lot like my model of status regulation, but which doesn’t seem to cohere with the patterns of correct reasoning described by theorists like E. T. Jaynes, I get suspicious. When people cite the “outside view” to argue that one should stick to projects whose ambition and impressiveness befit one’s “reference class”; when they announce that any effort to significantly outperform the “reference class” is epistemically suspect “overconfidence”; and when they insist that taking into account local extenuating factors, causal accounts, and justifications constitutes an illicit appeal to the “inside view,” so that we should instead rely on more obvious, visible, publicly demonstrable signs of overall auspiciousness or inauspiciousness… you know, I’m not sure this is strictly inspired by the experimental work done on people estimating their Christmas shopping completion times.

I become suspicious as well when this model is deployed in practice by people who talk in the same tone of voice that I’ve come to associate with status regulation, and when an awful lot of what they say sounds to me like an elaborate rationalization of, “Who are you to act like some kind of big shot?”

I observe that many of the same people worry a lot about “What do you say to the Republican?” or the possibility that crackpots might try to cheat—like they’re trying above all to guard some valuable social resource from the possibility of theft. I observe that the notion of somebody being able to steal that resource and get away with it seems to inspire a special degree of horror, rather than just being one more case of somebody making a mistaken probability estimate.

I observe that attempts to do much better than is the norm elicit many heated accusations of overconfidence. I observe that failures to even try to live up to your track record, or to do as well as a typical member of some suggested reference class, mysteriously fail to elicit many heated accusations of underconfidence. Underconfidence and overconfidence are symmetrical mistakes epistemically, and yet somehow I never see generalizations of the outside view even-handedly applied to correct both biases.

And so I’m skeptical that this reflects normative probability theory, pure epistemic rules such as aliens would also invent and use. Sort of like how an asexual decision theorist might be skeptical of an argument saying that the pure structure of decision theory implies that arbitrary decision agents with arbitrary biologies ought to value sex.

This kind of modesty often looks like the condescension of despair, or bears the “God works in mysterious ways” property of attributing vague good reasons to authorities on vague grounds. It’s the kind of reasoning that makes sense in the context of an efficient market, but it doesn’t seem to be coming from a model of the structure or incentives of relevant communities, such as the research community studying mood disorders.

No-free-energy equilibria do generalize beyond asset prices; markets are not the only ecologies full of motivated agents. But sometimes those agents aren’t sufficiently motivated and incentivized to do certain things, or the agents aren’t all individually free to do them. In this case, I think that many people are doing the equivalent of humbly accepting that they can’t possibly know whether a single house in Boomville is overpriced. In fact, I think this form of status-oriented modesty is extremely common, and is having hugely detrimental effects on the epistemic standards and the basic emotional health of the people who fall into it.

v.

Modesty can take the form of an explicit epistemological norm, or it can manifest in quieter and more implicit ways, as small flinches away from painful thoughts and toward more comfortable ones. It’s the latter that I think is causing most of the problem. I’ve spent a significant amount of time critiquing the explicit norms, because I think these serve an important role as canaries piling up in the coal mine, and because they are bad epistemology in their own right. But my chief hope is to illuminate that smaller and quieter problem.

I think that anxious underconfidence and status regulation are the main forces motivating modesty, while concerns about overconfidence, disagreement, and theoreticism serve a secondary role in justifying and propagating these patterns of thought. Nor are anxious underconfidence and status regulation entirely separate problems; bucking the status quo is particularly painful when public failure is a possibility, and shooting low can be particularly attractive when it protects against accusations of hubris.

Consider the outside view as a heuristic for minimizing the risk of social transgression and failure. Relying on an outside view instead of an inside view will generally mean making fewer knowledge claims, and the knowledge claims will generally rest on surface impressions (which are easier to share), rather than on privileged insights and background knowledge (which imply more status).

Or consider the social utility of playing the fox’s part. The fox can say that they rely only on humble data sets, disclaiming the hedgehog’s lofty theories, and disclaiming any special knowledge or special powers of discernment implied thereby. And by sticking to relatively local claims, or only endorsing global theories once they command authorities’ universal assent, the fox can avoid endorsing the kinds of generalizations that might encroach on someone else’s turf or otherwise disrupt a status hierarchy.

Finally, consider appeals to agreement. As a matter of probability theory, perfect rationality plus mutual understanding often entails perfect agreement. Yet it doesn’t follow from this that the way for human beings to become more rational is to try their best to minimize disagreement. An all-knowing agent will assign probabilities approaching 0 and 1 to all or most of its beliefs, but this doesn’t imply that the best way to become more knowledgeable is to manually adjust one’s beliefs to be as extreme as possible.
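
To see why imitating the end product fails, consider a forecaster scored by the Brier rule (squared error on probabilities, lower is better). Here is a minimal sketch in Python; the function name and the specific numbers are my illustrative assumptions, not anything from this essay:

    # Expected Brier score when you report probability q for an event
    # whose true probability (your honest, calibrated estimate) is p.
    def expected_brier(p: float, q: float) -> float:
        return p * (1 - q) ** 2 + (1 - p) * q ** 2

    p = 0.7  # what you actually know
    for q in (0.5, 0.7, 0.9, 0.99):
        print(f"report q={q:.2f}: expected Brier = {expected_brier(p, q):.4f}")

The expectation is minimized at q = p (0.2100 here); mechanically pushing q toward 1 to mimic the extreme probabilities of an all-knowing agent strictly worsens expected accuracy.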

The behavior of ideal Bayesian reasoners is important evidence about how to become more rational. What this usually involves, however, is understanding how Bayesian reasoning works internally and trying to implement a causally similar procedure, not looking at the end product and trying to pantomime particular surface-level indicators or side effects of good Bayesian inference. And a psychological drive toward automatic deference or self-skepticism isn’t the mechanism by which Bayesians end up agreeing to agree.

Bayes-optimal reasoners don’t Aumann-agree because they’re following some exotic meta-level heuristic. I don’t know of any general-purpose rule like that for quickly and cheaply leapfrogging to consensus, except ones that do so by sacrificing some amount of expected belief accuracy. To the best of my knowledge, the outlandish and ingenious trick that really lets flawed reasoners inch nearer to Aumann’s ideal is just the old-fashioned one where you go out and think about yourself and about the world, and do what you can to correct for this or that bias in a case-by-case fashion.

Whether applied selectively or consistently, the temptation of modesty is to “fake” Aumann agreement—to rush the process, rather than waiting until you and others can actually rationally converge upon the same views. The temptation is to call an early halt to risky lines of inquiry, to not claim to know too much, and to not claim to aspire to too much; all while wielding a fully general argument against anyone who doesn’t do the same.

And now that I’ve given my warning about these risks and wrong turns, I hope to return to other matters.

My friend John thought that there were hidden good reasons behind Japan’s decision not to print money. Was this because he thought that the Bank of Japan was big and powerful, and therefore higher status than a non-professional-economist like me?

I literally had a bad taste in my mouth as I wrote that paragraph.9 This kind of psychologizing is not what people epistemically virtuous enough to bet on their beliefs should spend most of their time saying to one another. They should just be winning hundreds of dollars off of me by betting on whether some AI benchmark will be met by a certain time, as my friend later proceeded to do. And then later he and I both lost money to other friends, betting against Trump’s election victory. The journey goes on.

I’m not scheming to taint all humility forever with the mere suspicion of secretly fallacious reasoning. That would convict me of the fallacy fallacy. Yes, subconscious influences and emotional temptations are a problem, but you can often beat those if your explicit verbal reasoning is good.

I’ve critiqued the fruits of modesty, and noted my concerns about the tree on which they grow. I’ve said why, though my understanding of the mental motions behind modesty is very imperfect and incomplete, I do not expect these motions to yield good and true fruits. But cognitive fallacies are not invincible traps; and if I spent most of my time thinking about meta-rationality and cognitive bias, I’d be taking my eye off the ball.10


Inadequate Equilibria is now available in electronic and print form on equilibriabook.com.

Conclusion: Against Shooting Yourself in the Foot.


  1. From Bodenhausen, Macrae, and Hugenberg (2003):

    [I]f correctional mechanisms are to result in a less biased judgment, the perceiver must have a generally accurate lay theory about the direction and extent of the bias. Otherwise, corrections could go in the wrong direction, they could go insufficiently in the right direction, or they could go too far in the right direction, leading to overcorrection. Indeed, many examples of overcorrection have been documented (see Wegener & Petty, 1997, for a review), indicating that even when a bias is detected and capacity and motivation are present, controlled processes are not necessarily effective in accurately counteracting automatic biases.
  2. From Superforecasting: “The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence.”

  3. E.g., Alpert and Raiffa (1982), “A Progress Report on the Training of Probability Assessors.”

  4. Or rather, get better at predicting future events than intelligence agencies, company executives, and the wisdom of crowds.

  5. From Edward FitzGerald’s Rubaiyat of Omar Khayyám.

  6. The existence of specialized cognitive modules for detecting cheating can be seen, e.g., in the Wason selection task. Test subjects perform poorly when asked to perform a version of this task introduced in socially neutral terms (e.g., rules governing numbers and colors), but perform well when given an isomorphic version of the task that is framed in terms of social rules and methods for spotting violators of those rules. See Cosmides and Tooby, “Cognitive Adaptations for Social Exchange.”

  7. Give me any other major and widely discussed belief from any other field of science, and I shall paint a picture of how it resembles some other fallacy—maybe even find somebody who actually misinterpreted it that way. It doesn’t mean much. There’s just such a vast array of mistakes human minds can make that if you rejected every argument that looks like it could maybe be guilty of some fallacy, you’d be left with nothing at all.

    It often just doesn’t mean very much when we find that a line of argument can be made to look “suspiciously like” some fallacious argument. Or rather: being suspicious is one thing, and being so suspicious that relevant evidence cannot realistically overcome a suspicion is another.

  8. It’s a mistake that somebody could make, though, and people promoting ideas that are susceptible to fallacious misinterpretation do have an obligation to post warning signs. Sometimes it feels like I’ve spent my whole life doing nothing else.

  9. Well, my breakfast might also have had something to do with it, but I noticed the bad taste while writing those sentences.

  10. There’s more I can say about how I think modest epistemology and status dynamics work in practice, based on past conversations; but it would require me to digress into talking about my work and fiction-writing. For a supplemental chapter taking a more concrete look at these concepts, see Hero Licensing.