Living in an Inadequate World

Follow-up to: Moloch's Toolbox (pt. 1, pt. 2)


Be warned: Trying to put together a background model like the one I sketched in the previous chapter is a pretty perilous undertaking, especially if you don't have a professional economist checking your work at every stage.

Suppose I offered the following much simpler explanation of how babies are dying inside the US healthcare system:

What if parents don't really care about their babies?

Maybe parents don't bond to their babies so swiftly? Maybe they don't really care that much about those voiceless pink blobs in the early days? Maybe this is one of those things that people think they're supposed to feel very strongly, and yet the emotion isn't actually there. Maybe parents just sort of inwardly shrug when their infants die, and only pretend to be sad about it. If they really cared, wouldn't they demand a system that didn't kill babies?

In our taxonomy, this would be a "decisionmaker is not beneficiary" explanation, with the parents and doctors being the decisionmakers, and the babies being the beneficiaries.

A much simpler hypothesis, isn't it?

When we try to do inadequacy analysis, there are such things as wrong guesses and false cynicism.

I'm sure there are some parents who don't bond to their babies all that intensely. I'm sure some of them lie to themselves about that. But in the early days when Omegaven was just plain illegal to sell across state lines, some parents would drive for hours, every month, to buy Omegaven from the Boston Children's Hospital to take back to their home state. I, for one, would call that an extraordinary effort. Those parents went far outside their routine, beyond what the System would demand of them, beyond what the world was set up to support them doing by default. Most people won't make an effort that far outside their usual habits even if their own personal lives are at stake.

If parents are letting their babies die of liver damage because the parents don't care, we should find few extraordinary efforts in these and other cases of baby-saving. This is an observational consequence we can check, and the observational check fails to support the theory.

For a fixed amount of inadequacy, there is only so much dysfunction that needs to be invoked to explain it. By the nature of inadequacy there will usually be more than one thing going wrong at a time… but even so, there's only a bounded amount of failure to be explained. Every possible dysfunction is competing against every other possible dysfunction to explain the observed data. Sloppy cynicism will usually be wrong, just like your Facebook acquaintances who attribute civilizational dysfunctions to giant malevolent conspiracies.

If you're sloppy, then you're almost always going to find some way to conclude, "Oh, those physicists are just part of the broken academic system, what would they really know about the Higgs boson?" You will detect inadequacy every time you go looking for it, whether or not it's there. If you see the same vision wherever you look, that's the same as being blind.

i.

In most cases, you won't need to resort to complicated background analyses to figure out whether something is broken.

I mean, it's not like the only possible way you might notice that the US health care system is a vast, ill-conceived machine that is broken and also on fire is to understand microeconomics and predict a priori that aspects of this system design might promote inadequate equilibria. In real life, you notice the brokenness by reading economists who blog about the grinding gears and seas of flame, and by listening to your friends sob about the screams coming from the ruins.

Then what good does it do to understand Moloch's toolbox? What's the point of the skill?

I suspect that for many people, the primary benefit of inadequacy analysis will be in undoing a mistake already made, where they disbelieve in inadequacy even when they're looking straight at it.

There are people who would simply never try to put up 130 light bulbs in their house—because if that worked, surely some good and diligent professional researcher would have already tried it. The medical system would have made it a standard treatment, right? The doctor would already know about it, right? And sure, sometimes people are stupid, but we're also people and we're also stupid, so how could we amateurs possibly do better than current researchers on SAD, et cetera.

The most commonly applicable benefit of a fancy rational technique will often be to cancel out fancy irrationality.1 I expect that the most common benefit of inadequacy analysis will be to break a certain kind of blind trust—that is, trust arrived at by mental reasoning processes that are insensitive to whether you actually inhabit a universe that's worthy of that trust—and open people's eyes to the blatant brokenness of things that are easily observed to be broken. Understanding the background theory helps cancel out the elaborate arguments saying that you can't second-guess the European Central Bank even when it's straightforward to show how and why the bank is making a mistake.

Conversely, I've also watched some people plunge straight into problems that I'd guess were inexploitable, without doing the check, and then fail—usually falling prey to the Free Energy Fallacy, supposing that they can win just by doing better on the axis they care about. That subgroup might benefit, not from being told, "Shut up, you'll always fail, the answer is always no," but just from a reminder to check for signs of inexploitability.

It may be that some of those people will end up always saying, "I can think of at least one Moloch's toolbox element in play, therefore this problem will be exploitable!" No humanly possible strictures of rationality can be strict enough to prevent a really determined person from shooting themselves in the foot. But it does help to be aware that the skill exists, before you start refining the skill.

Whether you're trying to move past modesty or overcome the Free Energy Fallacy:

  • Step one is to realize that here is a place to build an explicit domain theory—to want to understand the meta-principles of free energy, the principles of Moloch's toolbox and the converse principles that imply real efficiency, and build up a model of how they apply to various parts of the world.

  • Step two is to adjust your mind's exploitability detectors until they're not always answering, "You couldn't possibly exploit this domain, foolish mortal," or, "Why trust those hedge-fund managers to price stocks correctly when they have such poor incentives?"

And then you can move on to step three: the fine-tuning against reality.

ii.

In my past experience, I've both undershot and overshot the relative competence of doctors in the US medical system:

Anecdote 1: I once became very worried when my then-girlfriend got a headache and started seeing blobs of color, and when she drew the blobs they were left-right asymmetrical. I immediately started worrying about the asymmetry, thinking, "This is the kind of symptom I'd expect if someone had suffered damage to just one side of the brain." Nobody at the emergency room seemed very concerned, and she waited a couple of hours to be seen, even though I remembered reading that strokes had to be treated within the first few hours (better yet, minutes) to save as much brain tissue as possible.

What she was really experiencing, of course, was her first migraine. And I expect that every nurse we talked to knew that, but only a doctor is allowed to make diagnoses, so they couldn't legally tell us. I'd read all sorts of wonderful papers about exotic and illuminating forms of brain damage, but no papers about the much more common ailments that people in emergency rooms actually have. "Think horses, not zebras," as the doctors say.

Anecdote 2: I once saw a dermatologist for a dandruff problem. He diagnosed me with eczema, and gave me some steroid cream to put on my head for when the eczema became especially severe. It didn't cure the dandruff—but I'd seen a doctor, so I shrugged and concluded that there probably wasn't much to be done, since I'd already tried and failed using the big guns of the Medical System.

Eight years later, when I was trying to compound a ketogenic meal replacement fluid I'd formulated in an attempt to lose weight, my dandruff seemed to get worse. So I checked whether online paleo blogs had anything to say about treating dandruff via diet. I learned that a lot of dandruff is caused by the Candida fungus (which I'd never heard of), and that the fungus eats ketones. So if switching to a ketogenic diet (or drinking MCT oil, which gets turned into ketones) makes your dandruff worse, why, your dandruff is probably the Candida fungus. I looked up what kills Candida, found that I should use a shampoo containing ketoconazole, kept Googling, found a paper stating that 2% ketoconazole shampoo is an order of magnitude more effective than 1%, learned that only 1% ketoconazole shampoo was sold in the US, and ordered imported 2% Nizoral from Thailand via Amazon. Shortly thereafter, dandruff was no longer a significant issue for me and I could wear dark shirts without constantly checking my right shoulder for white specks. If my dermatologist knew anything about dandruff commonly being caused by a fungus, he never said a word.

From those two data points and others like them, I infer that medical competence—not medical absolute performance, but medical competence relative to what I can figure out by Googling—is high-variance. I shouldn't trust my doctor on significant questions without checking her diagnosis and treatment plan on the Internet, and I also shouldn't trust myself.

A lot of the time, when we put on our inadequacy-detecting goggles, we're deciding whether to trust some aspect of society to be more competent than ourselves. Part of the point of learning to think in economic terms about this question is to make it more natural to treat it as a technical question, where specific lines of evidence can shift specific conclusions to varying degrees.

In particular, you don't need to be strictly better or worse than some part of society. The question isn't about ranking people, so you can be smarter in some ways and dumber in others. It can vary from minute to minute as the gods roll their dice.

By contrast, the modest viewpoint seems to me to have a very social-status-colored perspective on such things.

In the modest world, either you think you're better than doctors and all the civilization backing them, or you admit you're not as good and that you ought to defer to them.

If you don't defer to doctors, then you'll end up as one of those people who try feeding their children organic herbs to combat cancer; the outside view says that that's what happens to most non-doctors who dare to think they're better than doctors.

On the modest view, it's not that we hold up a thumb and eyeball the local competence level, based mostly on observation and a little on economic thinking; and then update on our observed relative performance; and sometimes say, "This varies a lot. I'll have to check each time."

Instead, every time you decide whether you think you can do better, you are declaring what sort of person you are.

For an example of what I mean here, consider writer Ozy Brennan's taxonomy:

I think a formative moment for any rationalist—our "Uncle Ben shot by the mugger" moment, if you will—is the moment you go "holy shit, everyone in the world is fucking insane." […]

Now, there are basically two ways you can respond to this.

First, you can say "holy shit, everyone in the world is fucking insane. Therefore, if I adopt the radical new policy of not being fucking insane, I can pick up these giant piles of utility everyone is leaving on the ground, and then I win." […]

This is the strategy of discovering a hot new stock tip, investing all your money, winning big, and retiring to Maui.

Second, you can say "holy shit, everyone in the world is fucking insane. However, none of them seem to realize that they're insane. By extension, I am probably insane. I should take careful steps to minimize the damage I do." […]

This is the strategy of discovering a hot new stock tip, realizing that most stock tips are bogus, and not going bankrupt.2

According to this sociological hypothesis, people can react to the discovery that "everyone in the world is insane" by adopting the Maui strategy, or they can react by adopting the not-going-bankrupt strategy.

(Note the inevitable comparison to financial markets—the one part of civilization that worked well enough to prompt an economist, Eugene Fama, to come up with the modern notion of efficiency.)

Brennan goes on to say that these two positions form a "dialectic," but that nonetheless, some kinds of people are clearly on the "becoming-sane side of things" while others are more on the "insanity-harm-reduction side of things."

But, speaking first to the basic dichotomy that's being proposed, the whole point of becoming sane is that your beliefs shouldn't reflect what sort of person you are. To the extent you're succeeding, at least, your beliefs should just reflect how the world is.

Good reasoners don't believe that there are goblins in their closets. The ultimate reason for this isn't that goblin-belief is archaic, outmoded, associated with people lost in fantasy worlds, too much like wishful thinking, et cetera. It's just that we opened up our closets and looked and we didn't see any goblins.

The goal is simply to be the sort of person who, in worlds with closet goblins, ends up believing in closet goblins, and in worlds without closet goblins, ends up disbelieving in closet goblins. Avoiding beliefs that sound archaic does relatively little to help you learn that there are goblins in a world where goblins exist, so it does relatively little to establish that there aren't goblins in a world where they don't exist. Examining particular empirical predictions of the goblin hypothesis, on the other hand, does provide strong evidence about what world you're in.

To reckon with the discovery that the world is mad, Brennan suggests that we consider the mix of humble and audacious "impulses in our soul" and try to strike the right balance. Perhaps we have some personality traits or biases that dispose us toward believing in goblins, and others that dispose us toward doubting them. On this framing, the heart of the issue is how we can resolve this inner conflict; the heart isn't any question about the behavioral tendencies or physiology of goblins.

This is a central disagreement I have with modest epistemology: modest people end up believing that they live in an inexploitable world because they're trying to avoid acting like an arrogant kind of person. Under modest epistemology, you're not supposed to adapt rapidly and without hesitation to the realities of the situation as you observe them, because that would mean trusting yourself to assess adequacy levels; but you can't trust yourself, because Dunning-Kruger, et cetera.

The alternative to modest epistemology isn't an immodest epistemology where you decide that you're higher status than doctors after all and conclude that you can now invent your own de novo medical treatments as a matter of course. The alternative is deciding for yourself whether to trust yourself more than a particular facet of your civilization at this particular time and place, checking the results whenever you can, and building up skill.

When it comes to medicine, I try to keep in mind that anyone whatsoever with more real-world medical experience may have me beat cold solid when it comes to any real-world problem. And then I go right on double-checking online to see if I believe what the doctor tells me about whether consuming too much medium-chain triglyceride oil could stress my liver.3

In my experience, people who don't viscerally understand Moloch's toolbox and the ubiquitously broken Nash equilibria of real life and how group insanity can arise from intelligent individuals responding to their own incentives tend to unconsciously translate all assertions about relative system competence into assertions about relative status. If you don't see systemic competence as rare, or don't see real-world systemic competence as driven by rare instances of correctly aligned incentives, all that's left is status. All good and bad output is just driven by good and bad individual people, and to suggest that you'll have better output is to assert that you're individually smarter than everyone else. (This is what status hierarchy feels like from the inside: to perform better is to be better.)

On a trip a couple of years ago to talk with the European existential risk community, which has internalized norms from modest epistemology to an even greater extent than the Bay Area community has, I ran into various people who asked questions like, "Why do you and your co-workers at MIRI think you can do better than academia?" (MIRI is the Machine Intelligence Research Institute, the organization I work at.)

I responded that we were a small research institute that sustains itself on individual donors, thereby sidestepping a set of standard organizational demands that collectively create bad incentives for the kind of research we're working on. I described how we had deliberately organized ourselves to steer clear of incentives that discourage long-term substantive research projects, to avoid academia's "publish or perish" dynamic, and more generally to navigate around the multiple frontiers of competition where researchers have to spend all their energy competing to get into the best journals.

These are known failure modes that academics routinely complain about, so I wasn't saying anything novel or clever. The point I wanted to emphasize was that it's not enough to say that you want risky long-term research in the abstract; you have to accept that your people won't be at the competitive frontier for journal publications anymore.

The response I got back was something like a divide-by-zero error. Whenever I said "the nonprofit I work at has different incentives that look prima facie helpful for solving this set of technical problems," my claim appeared to get parsed as "the nonprofit I work at is better (higher status, more authoritative, etc.) than academia."

I think that the people I was talking with had already internalized the mathematical concept of Nash equilibria, but I don't think they were steeped in a no-free-energy microeconomic equilibrium view of all of society, where most of the time systems end up dumber than the people in them due to multiple layers of terrible incentives, and where this is normal and not at all a surprising state of affairs to suggest. And if you haven't practiced thinking about organizations' comparative advantages from that perspective long enough to make that lens more cognitively available than the status-comparisons lens, then it makes sense that all talk of relative performance levels between you and doctors, or you and academia, or whatever, will be autoparsed by the easier, more native, more automatic status lens.

Because, come on, do you really think you're more authoritative / respectable / qualified / reputable / adept than your doctor about medicine? If you think that, won't you start consuming Vitamin C megadoses to treat cancer? And if you're not more authoritative / respectable / qualified / reputable / adept than your doctor, then how could you possibly do better by doing Internet research?

(Among most people I know, the relative status feeling frequently gets verbalized in English as "smarter," so if the above paragraph didn't make sense, try replacing the social-status placeholder "authoritative / respectable / etc." with "smarter.")

Again, a lot of the benefit of becoming fluent with this viewpoint is just in having a way of seeing "systems with not-all-that-great outputs," often observed extensively and directly, that can parse into something that isn't "Am I higher-status ('smarter,' 'better,' etc.) than the people in the system?"

iii.

I once encountered a case of (honest) misunderstanding from someone who thought that when I cited something as an example of civilizational inadequacy (or as I put it at the time, "People are crazy and the world is mad"), the thing I was trying to argue was that the Great Stagnation was just due to unimpressive / unqualified / low-status ("stupid") scientists.4 He thought I thought that all we needed to do was take people in our social circle and have them go into biotech, or put scientists through a CFAR unit, and we'd see huge breakthroughs.5

What?” I said.

(I was quite surprised.)

"I never said anything like that," I said, after recovering from the shock. "You can't lift a ten-pound weight with one pound of force!"

I went on to say that it's conceivable you could get faster-than-current results if CFAR's annual budget grew 20x, and then they spent four years iterating experimentally on techniques, and then a group of promising biotechnology grad students went through a year of CFAR training…6

So another way of thinking about the central question of civilizational inadequacy is that we're trying to assess the quantity of effort required to achieve a given level of outperformance. Not "Can it be done?" but "How much work?"

This brings me to the single most obvious notion that correct contrarians grasp, and that people who have vastly overestimated their own competence don't realize: It takes far less work to identify the correct expert in a pre-existing dispute between experts than to make an original contribution to any field that is remotely healthy.

I did not work out myself what would be a better policy for the Bank of Japan. I believed the arguments of Scott Sumner, who is not literally mainstream (yet), but whose position is shared by many other economists. I sided with a particular band of contrarian expert economists, based on my attempt to parse the object-level arguments, observing from the sidelines for a while to see who was right about near-term predictions and picking up on what previous experience suggested were strong cues of correct contrarianism.7

And so I ended up thinking that I knew better than the Bank of Japan. On the modest view, that's just about as immodest as thinking you can personally advance the state of the art, since who says I ought to be smarter than the Bank of Japan at picking good experts to trust, et cetera?

But in real life, inside a civilization that is often tremendously broken on a systemic level, finding a contrarian expert seeming to shine against an untrustworthy background is nowhere remotely near as difficult as becoming that expert yourself. It's the difference between picking which of four runners is most likely to win a fifty-kilometer race, and winning a fifty-kilometer race yourself.

Distinguishing a correct contrarian isn't easy in absolute terms. You are still trying to be better than the mainstream in deciding who to trust.8 For many people, yes, an attempt to identify contrarian experts ends with them trusting faith healers over traditional medicine. But it's still in the range of things that amateurs can do with a reasonable effort, if they've picked up on unusually good epistemology from one source or another.

We live in a sufficiently poorly functioning world that there are many visibly correct contrarians whose ideas are not yet being implemented in the mainstream, where the authorities who allegedly judge between experts are making errors that appear to me trivial. (And again, by "errors," I mean that these authorities are endorsing factually wrong answers or dominated policies—not that they're passing up easy rewards given their incentives.)

In a world like that, you can often know things that the average authority doesn't know… but not because you figured it out yourself, in almost every case.

iv.

Going beyond picking the right horse in the race and becoming a horse yourself, inventing your own new personal solution to a civilizational problem, requires a much greater investment of effort.

I did make up my own decision theory—not from a tabula rasa, but still to my own recipe. But events like that should be rare in a given person's life. Logical counterfactuals in decision theory are one of my few major contributions to an existing academic field, and my early thoughts on this topic were quickly improved on by others.9 And that was a significant life event, not the sort of thing I believe I've done every month.

Above all, reaching the true frontier requires picking your battles.

Computer security professionals don't attack systems by picking one particular function and saying, "Now I shall find a way to exploit these exact 20 lines of code!" Most lines of code in a system don't provide exploits no matter how hard you look at them. In a large enough system, there are rare lines of code that are exceptions to this general rule, and sometimes you can be the first to find them. But if we think about a random section of code, the base rate of exploitability is extremely low—except in really, really bad code that nobody looked at from a security standpoint in the first place.

Thinking that you've searched a large system and found one new exploit is one thing. Thinking that you can exploit arbitrary lines of code is quite another.

No matter how broken academia is, no one can improve on arbitrary parts of the modern academic edifice. My own base frequency for seeing scholarship that I think I can improve upon is "almost never," outside of some academic subfields dealing with the equivalent of "unusually bad code." But don't expect bad code to be guarding vaults of gleaming gold in a form that other people value, except with a very low base rate. There do tend to be real locks on the energy-containing vaults not already emptied… almost (but not quite) all of the time.

Similarly, you do not generate a good startup idea by taking some random activity, and then talking yourself into believing you can do it better than existing companies. Even where the current way of doing things seems bad, and even when you really do know a better way, 99 times out of 100 you will not be able to make money by knowing better. If somebody else makes money on a solution to that particular problem, they'll do it using rare resources or skills that you don't have—including the skill of being super-charismatic and getting tons of venture capital to do it.

To believe you have a good startup idea is to say, "Unlike the typical 99 cases, in this particular anomalous and unusual case, I think I can make a profit by knowing a better way."

The anomaly doesn't have to be some super-unusual skill possessed by you alone in all the world. That would be a question that always returned "No," a blind set of goggles. Having an unusually good idea might work well enough to be worth trying, if you think you can standardly solve the other standard startup problems. I'm merely emphasizing that to find a rare startup idea that is exploitable in dollars, you will have to scan and keep scanning, not pursue the first "X is broken and maybe I can fix it!" thought that pops into your head.

To win, choose winnable battles; await the rare anomalous case of, "Oh wait, that could work."

v.

In 2014, I experimentally put together my own ketogenic meal replacement drink via several weeks of research, plus months of empirical tweaking, to see if it could help me with long-term weight normalization.

In that case, I did not get to pick my battleground.

And yet even so, I still tried to design my own recipe. Why? It seems I must have thought I could do better than the best ketogenic liquid-food recipes that had ever before been tried, as of 2014. Why would I believe I could do the best of anyone who's yet tried, when I couldn't pick my battle?

Well, because I looked up previous ketogenic Soylent recipes, and they used standard multivitamin powders containing, e.g., way too much manganese and the wrong form of selenium. (You get all the manganese you need from ordinary drinking water, if it hasn't been distilled or bottled. Excess amounts may be neurotoxic. One of the leading hypotheses for why multivitamins aren't found to produce net health improvement, despite having many individual components found to be helpful, is that multivitamins contain 100% of the US RDA of manganese. Similarly, if a multivitamin includes sodium selenite instead of, e.g., se-methyl-selenocysteine, it's the equivalent of handing you a lump of charcoal and saying, "You're a carbon-based lifeform; this has carbon in it, right?")

Just for the sake of grim amusement, I also looked up my civilization's medically standard ketogenic dietary options—e.g., for epileptic children. As expected, they were far worse than the amateur Soylent-inspired recipes. They didn't even contain medium-chain triglycerides, which your liver turns directly into ketones. (MCT is academically recommended, though not commercially standard, as the basis for maintaining ketosis in epileptic children.) Instead the retail dietary options for epileptic children involved mostly soybean oil, of which it has been said, "Why not just shoot them?"

Even when we can't pick our battleground, sometimes the most advanced weapon on offer turns out to be a broken stick, and it's worth the time to carve a handaxe.

… But even then, I didn't try to synthesize my own dietary theory from scratch. There is nothing I believe about how human metabolism works that's unique or original to me. Not a single element of my homemade Ketosoylent was based on my personal, private theory of how any of the micronutrients worked. Who am I to think I understand Vitamin D3 better than everyone else in the world?

The Ketosoylent didn't work for long-term weight normalization, alas—the same result as all other replicated experiments on trying to long-term-normalize weight via putting different things inside your mouth. (The Shangri-La Diet I mentioned at the start of this book didn't work for me either.)

So it goes. I mention the Ketosoylent because it's the most complicated thing I've tried to do without tons of experience in a domain and without being able to pick my battles.

In the simpler and happier case of treating Brienne's Seasonal Affective Disorder, I again didn't get to pick the battleground; but SAD has received far less scientific attention to date than obesity. And success there again didn't involve coming up with an amazing new model of SAD. It's not weird and private knowledge that sufficiently bright light might cure SAD. The Sun is known to work almost all the time.

So a realistic lifetime of trying to adapt yourself to a broken civilization looks like:

  • 0–2 lifetime instances of answering "Yes" to "Can I substantially improve on my civilization's current knowledge if I put years into the attempt?" A few people, but not many, will answer "Yes" to enough instances of this question to count on the fingers of both hands. Moving on to your toes indicates that you are a crackpot.

  • Once per year or thereabouts, an answer of "Yes" to "Can I generate a synthesis of existing correct contrarianism which will beat my current civilization's next-best alternative, for just myself (i.e., without trying to solve the further problems of widespread adoption), after a few weeks' research and a bunch of testing and occasionally asking for help?" (See my experiments with ketogenic diets and SAD treatment; also what you would do to generate or judge a startup idea that wasn't based on a hard science problem.)

  • Many cases of trying to pick a previously existing side in a running dispute between experts, if you think that you can follow the object-level arguments reasonably well and there are strong meta-level cues that you can identify.

The accumulation of many judgments of the latter kind is where you get the fuel for many small day-to-day decisions (e.g., about what to eat), and much of your ability to do larger things (like solving a medical problem after going through the medical system has proved fruitless, or executing well on a startup).

vi.

A few final pieces of advice on everyday thinking about inadequacy:

When it comes to estimating the competence of some aspect of civilization, especially relative to your own competence, try to update hard on your experiences of failure and success. One data point is a hell of a lot better than zero data points.

Worrying about how one data point is "just an anecdote" can make sense if you've already collected thirty data points. On the other hand, when you previously just had a lot of prior reasoning, or you were previously trying to generalize from other people's not-quite-similar experiences, and then you collide directly with reality for the first time, one data point is huge.

If you do accidentally update too far, you can always re-update later when you have more data points. So update hard on each occasion, and take care not to flush any new observation down the toilet.
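As a toy illustration of why that first collision with reality should move you so much, here is a minimal sketch (my own, not from this chapter) using a simple Beta-Bernoulli model of a system's hit rate:

```python
# Toy Beta-Bernoulli model of "how often does this system get things right?"
# Starting from a uniform Beta(1, 1) prior, the very first observation
# shifts the estimate a lot; after thirty observations, one more barely
# moves it.

def posterior_mean(successes: int, failures: int,
                   prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of Beta(prior_a + successes, prior_b + failures)."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

print(posterior_mean(0, 0))    # 0.500  -- prior reasoning alone
print(posterior_mean(0, 1))    # ~0.333 -- one failure is a big update
print(posterior_mean(15, 15))  # 0.500  -- thirty mixed data points...
print(posterior_mean(15, 16))  # ~0.485 -- ...one more barely moves you
```

The same shape of reasoning applies whether the "system" being estimated is a doctor, a journal, or your own track record.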

Oh, and bet. Bet on everything. Bet real money. It helps a lot with learning.

I once bet $25 at even odds against the eventual discovery of the Higgs boson—after 90% of the possible mass range had been experimentally eliminated, because I had the impression from reading diatribes against string theory that modern theoretical physics might not be solid enough to predict a qualitatively new kind of particle with prior odds greater than 9:1.

When the Higgs boson was discovered inside the remaining 10% interval of possible energies, I said, "Gosh, I guess they can predict that sort of thing with prior probability greater than 90%," updated strongly in favor of the credibility of things like dark matter and dark energy, and then didn't make any more bets like that.
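The arithmetic of that bet is easy to lay out after the fact. Here is a minimal sketch (my own illustration; only the $25 stake and the even odds come from the story above) of the bet's expected value at different credences in the physicists' prediction:

```python
# Expected dollar value of staking $25 at even odds AGAINST the discovery:
# win the stake if no discovery happens, lose it if the particle turns up.

def ev_bet_against(p_discovery: float, stake: float = 25.0) -> float:
    return (1.0 - p_discovery) * stake - p_discovery * stake

for p in (0.5, 0.75, 0.9):
    print(f"p(discovery) = {p:.2f}  ->  EV = ${ev_bet_against(p):+.2f}")
# p(discovery) = 0.50  ->  EV = $+0.00
# p(discovery) = 0.75  ->  EV = $-12.50
# p(discovery) = 0.90  ->  EV = $-20.00
```

Taking the even-odds side is only sensible at a credence of 50% or less, which is exactly the belief that the discovery refuted.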

I made a mistake; and I bet on it. This let me experience the mistake in a way that helped me better learn from it. When you're thinking about large, messy phenomena like "the adequacy of human civilization at understanding nutrition," it's easy to get caught up in plausible-sounding stories and never quite get around to running the experiment. Run experiments; place bets; say oops. Anything less is an act of self-sabotage.


Next: Blind Empiricism.



  1. As an example, relatively few people in the world need well-developed skills at cognitive reductionism for the purpose of disassembling aspects of nature. The reason why anyone else needs to learn cognitive reductionism—the reason it's this big public epistemic hygiene issue—is that there are a lot of damaging supernatural beliefs that cognitive reductionism helps counter.

  2. Brennan, "The World Is Mad."

    When I ran a draft of this chapter by Brennan, they said that they basically agree with what I'm saying here, but are thinking about these issues using a different conceptual framework.

  3. Answer: this is the opposite of standard theory; she was probably confusing MCT with other forms of saturated fat.

  4. The Great Stagnation is economist Tyler Cowen's hypothesis that declining rates of innovation since the 1970s (excluding information technology, for the most part) have resulted in relative economic stagnation in the developed world.

  5. CFAR, the Center for Applied Rationality, is a nonprofit that applies ideas from cognitive science to everyday problem-solving and decision-making, running workshops for people who want to get better at solving big global problems. MIRI and CFAR are frequent collaborators, and share office space; the organization's original concept came from MIRI's work on rationality.

  6. See also Weinersmith's Law: "No problem is too hard. Many problems are too fast."

  7. E.g., the cry of "Stop ignoring your own carefully gathered experimental evidence, damn it!"

  8. Though, to be clear, the mainstream isn't actually deciding who to trust. It's picking winners by some other criterion that on a good day is not totally uncorrelated with trustworthiness.

  9. In particular, Wei Dai came up with updatelessness, yielding the earliest version of what's now called functional decision theory. See Soares and Levinstein's "Cheating Death in Damascus" for a description.