In defence of epistemic modesty

This piece defends a strong form of epistemic modesty: that, in most cases, one should pay scarcely any attention to what one finds the most persuasive view on an issue, hewing instead to an idealized consensus of experts. I start by pinning down more precisely what is meant by ‘epistemic modesty’, go on to offer a variety of reasons that motivate it, and reply to some common objections. Along the way, I show common traps people fall into when being inappropriately immodest. I conclude that modesty is a superior epistemic strategy, and ought to be more widely used—particularly in the EA/rationalist communities.



I argue for this:

In virtually all cases, the credence you hold for any given belief should be dominated by the balance of credences held by your epistemic peers and superiors. One’s own convictions should weigh no more heavily in the balance than those of any one other epistemic peer.

Introductions and clarifications

A favourable motivating case

Suppose your mother thinks she can make some easy money day trading blue-chip stocks, and plans to kick off tomorrow by shorting Google on the stock market, as she’s sure it’s headed for a crash. You might want to dissuade her in a variety of ways.

You might appeal to an outside view:

Mum, when you make this short you’re going to be betting against some hedge fund, quant, or whatever else. They have loads of advantages: relevant background, better information, lots of data and computers, and so on. Do you really think you’re odds-on to win this bet?

Or appeal to some reference class:

Mum, I’m pretty sure the research says that people trying to day-trade stocks tend not to make much money at all. Although you might hear about some big successes on the internet, you don’t hear about everyone else who went bust. So why should you think you are likely to be one of these remarkable successes?

Or just cite disagreement:

Look Mum: Dad, sister, the grandparents and I all think this is a really bad idea. Please don’t do it!

None of these directly challenges the object-level claim (i.e. “Google isn’t overvalued, because X”). These considerations instead attempt to situate the cogniser within some population, and to infer from characteristics of this population the likelihood of this cogniser getting things right.

Call the practice of using these considerations epistemic modesty. We can distinguish two components:

  1. ‘In theory’ modesty: That considerations of this type should in principle influence our credences.

  2. ‘In practice’ modesty: That one should in fact use these considerations when forming credences.

Weaker and stronger forms of modesty

Some degree of modesty is (almost) inarguable. If one leaves for work on Tuesday and finds all one’s neighbours have left their bins out, that is at least reason to doubt one’s belief that bin day is Thursday, and perhaps sufficient to believe instead that bins are collected on Tuesday (and follow suit with one’s own). If it appears that, say, the coagulation cascade ‘couldn’t evolve’, the near unanimity of assent for evolution among biologists at least counts against this impression, if it is not a decisive reason to believe, despite one’s impressions, that it could. Nick Beckstead suggests something like ‘elite common sense’ forms a prior which one should be hesitant to diverge from without good reason.

I argue for something much stronger (cf. the provocation above): in theory, one’s credence in some proposition P should be almost wholly informed by modest considerations. That is, ceteris paribus, the fact that it appears to you that P should weigh no more heavily in your determination regarding P than knowing that it appears to someone else that P. Not only is this the case in theory, but it is also the case in practice. One’s all-things-considered judgement on P should be just that implied by an idealized expert consensus on P, no matter one’s own convictions regarding P.

Motivations for more modesty

Why believe ‘strong form’ epistemic modesty? I first show families of cases where ‘strong modesty’ leads to predictably better performance, then argue these results generalise widely.[1]

The symmetry case

Suppose Adam and Beatrice are perfect epistemic peers, equal in all respects that could bear on their forming more or less accurate beliefs. They disagree on a particular proposition P (say “This tree is an oak tree”). They argue about this at length, such that all considerations Adam takes to favour “This is an oak tree” are known to Beatrice, and vice versa.[2] After this, they still disagree: Adam has a credence of 0.8, Beatrice 0.4.

Suppose an outside party (call him Oliver) is asked for his credence in P, given Adam and Beatrice’s credences and their epistemic peerhood to one another, but bereft of any object-level knowledge. He should split the difference between Adam and Beatrice, at 0.6: Oliver doesn’t have any reason to favour Adam’s credence for P over Beatrice’s, as they are epistemic peers, and so splitting the difference gives the least expected error.[3] If he were faced with a large class of similar situations (maybe Adam and Beatrice get into the same argument for Tree 2 to Tree 10,000), Oliver would find that difference-splitting has lower error than biasing towards either Adam’s or Beatrice’s credence.
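The claim that difference-splitting minimises expected error can be checked with a toy simulation (a sketch only: the Gaussian noise model and its width are illustrative assumptions, and credences are not clipped to [0,1] for simplicity):

```python
import random

random.seed(0)

def simulate(n_trials=100_000, noise=0.2):
    """Two equally noisy estimators of a true value, vs. their average."""
    err_adam = err_beatrice = err_split = 0.0
    for _ in range(n_trials):
        truth = random.random()                    # the 'right' credence for this tree
        adam = truth + random.gauss(0, noise)      # each peer errs independently...
        beatrice = truth + random.gauss(0, noise)  # ...with the same noise (peerhood)
        split = (adam + beatrice) / 2              # Oliver's difference-splitting
        err_adam += (adam - truth) ** 2
        err_beatrice += (beatrice - truth) ** 2
        err_split += (split - truth) ** 2
    return err_adam / n_trials, err_beatrice / n_trials, err_split / n_trials

ea, eb, es = simulate()
# Independent errors partly cancel: for iid zero-mean noise, averaging two
# estimates halves the mean squared error of either one alone.
print(f"Adam MSE {ea:.4f}  Beatrice MSE {eb:.4f}  split MSE {es:.4f}")
```

Over the 10,000 trees, neither peer’s own credence beats the midpoint: any weighting that biases towards one side does expectedly worse.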

Adam and Beatrice should do likewise. They also know they are epistemic peers, and so they should also know that, for whatever considerations explain their difference (perhaps Adam is really persuaded by the leaf shapes, but Beatrice isn’t), Adam’s take and Beatrice’s take are no more likely than one another to be right. So Adam should reason (and Beatrice vice versa): “I don’t understand why Beatrice isn’t persuaded by the leaf shapes, but she expresses the same about why I find them so convincing. Given she is my epistemic peer, ‘She’s not getting it’ and ‘I’m not getting it’ are equally likely. So we should meet in the middle”.

The underlying intuition is one of symmetry. Adam and Beatrice have the same information. The correct credence regarding P given this information should not depend on which brain, Adam’s or Beatrice’s, it happens to inhabit. Given this, they should hold the same credence,[4] and as Adam is as likely as Beatrice to be further from the truth, the shared credence should be in the middle.

Compressed sensing of (and not double-counting) the object level

It seems odd that both Adam and Beatrice do better by discarding their object-level considerations regarding P. If we adjust the scenario above so they cannot discuss with one another, but are merely informed of each other’s credences (and that they are peers regarding P), the right strategy remains to meet in the middle.[5] Yet how can Adam and Beatrice do better by ignoring relevant information? Both Adam and Beatrice have their ‘inside view’ evidence (i.e. what they take to bear on the credence of P) and the ‘outside view’ evidence (what each other thinks about P). Why not use a hybrid strategy which uses both?

Yet to whatever extent Adam’s or Beatrice’s hybrid approach leads them to diverge from equal weight, they will do worse. Oliver can use the ‘meet in the middle’ strategy to get expectedly better accuracy than either of them can by biasing towards their own inside-view determination. In betting terms, Oliver can arbitrage any difference in credence between Adam and Beatrice.

We can explain why: the credences Adam and Beatrice offer can be thought of as very compressed summaries of the considerations they take to bear upon P. Whatever ‘inside view’ considerations Adam took to bear upon P are already ‘priced in’ to the credence he reports (ditto Beatrice). Modesty is not ignoring this evidence, but weighing it appropriately: if Adam then tries to adjust the outside-view determination by his own take on the balance of evidence, he double-counts his inside view: once in itself, and once more by including his credence as weighing equally to Beatrice’s in giving the outside view.

One’s take on the set of considerations regarding P may err, whether by bias,[6] ignorance, or ‘innocent’ mistake. Splitting the difference between one’s own and one’s peer’s very high-level summaries of these captures the greater fraction of the benefit of hashing out where these summaries differ.[7] Modesty correctly diagnoses that one’s high-level summary is no more likely to be accurate than one’s peer’s, and so holds both in equal regard, even in cases where the components of one’s own summary are known better.

Repeated measures, brains as credence sensors, and the wisdom of crowds

Modesty outperforms non-modesty in the n=2 case. The degree of outperformance grows (albeit concavely) as n increases.

Scientific fields often have to deal with unreliable measurement. They commonly mitigate this by repeating measurements. If you have a crummy thermometer, taking several readings improves accuracy over taking just one. Human brains also try to measure things, and they are also often unreliable. It is commonly observed that the average of their measurements nonetheless tends to lie closer to the mark than the vast majority of individual measurements. Consider the commonplace ‘guess how many skittles are in this jar’ or similar estimation games: the usual observation is that the average of all the guesses is better than all (or almost all) the individual guesses.

A toy model makes this unsurprising. The individual guesses will form some distribution centered on the true value. Thus the expected error of a given individual guess is the standard deviation of this distribution. The expected error of the average of all guesses is given by the standard error, which is the standard deviation divided by the square root of the number of guesses:[8] with 10 individuals, the error is about 3 times smaller than the expected error of each individual guess; with 100, 10 times smaller; and so on.
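This 1/√n behaviour of the toy model is easy to demonstrate by simulating the jar game (a sketch; the jar size and the spread of guesses are illustrative assumptions):

```python
import random

random.seed(1)

TRUE_COUNT = 500   # skittles actually in the jar (illustrative)
GUESS_SD = 100     # spread of individual guesses around the truth (illustrative)

def avg_error(n_guessers, n_trials=20_000):
    """Mean absolute error of the crowd average, over many repeated games."""
    total = 0.0
    for _ in range(n_trials):
        guesses = [random.gauss(TRUE_COUNT, GUESS_SD) for _ in range(n_guessers)]
        crowd = sum(guesses) / n_guessers
        total += abs(crowd - TRUE_COUNT)
    return total / n_trials

# The crowd's error shrinks like GUESS_SD / sqrt(n): roughly sqrt(10) ≈ 3.2x
# smaller at n=10 than at n=1, and another ~3.2x smaller again at n=100.
for n in (1, 10, 100):
    print(f"n={n:>3}: mean error of crowd average ≈ {avg_error(n):.1f}")
```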

Analogously, human brains also try to measure credences or degrees of belief, and are similarly imperfect here as when estimating ‘number of X’. One may expect a similar ‘wisdom of crowds’ effect to operate here too. In the same way Adam and Beatrice would do better in the situation above by taking the average (even if it went against their view of the balance of reasons by their lights), if Adam through Zabaleta (all epistemic peers) investigated the same P, they’d expect to do better taking the average of their group versus steadfastly holding to the credence each arrived at ‘by their lights’. Whatever inaccuracies throw off their individual estimates of P somewhat cancel out.

Deferring to better brains

The arguments above apply to cases where the parties are epistemic peers. If they are not, one needs to adjust by some measure of ‘epistemic virtue’. In cases where Adam is an epistemic superior to Beatrice, they should meet closer to Adam’s view, commensurate with the degree of epistemic superiority (and vice versa).

Although reasons for being an epistemic superior could be ‘they’re a superforecaster’ or ‘they’re smarter than me’, perhaps the most common source of epistemic superiors lies under the heading of ‘subject matter expert’. On topics from human nutrition, to voting rules, to the impact of the minimum wage, to the nature of consciousness, to basically anything that isn’t trivial, one can usually find a fairly large group of very smart people who have spent many years studying that topic, and who make their views about it public (sometimes not even behind a paywall). That they have a much greater body of relevant information, and have spent longer thinking about it, gives them a large advantage compared to you.

In such cases, the analogy might be that your brain is a sundial, whilst theirs is an atomic clock. So if you have the option of taking their readings rather than yours, you should do so. The evidence a reading of a sundial provides about the time, conditional on the atomic clock reading, is effectively zero. ‘Splitting the difference’ in analogous epistemic cases should result in both you and your epistemic superior agreeing that they are right and you are wrong.

Inference to the ideal epistemic observer

We can summarise these motivations by analogy to ideal observers (used elsewhere in perceptual and ethical theory). We can gesture at the idea that an ideal (epistemic) observer is just that which is able to form the most accurate credence for P given whatever prior: they have vast intelligence, full knowledge of all matters that bear upon P, perfect judgement, and in essence all epistemic virtues in excelsis.

Now consider this helpful fiction:

The epistemic fall: Imagine a population comprised solely of ideal observers, who all share the same (correct) view on P. Overnight their epistemic virtues are assailed: they lose some of their reasoning capacity; they pick up particular biases that could throw them one way or another; they lose information; and so on, each to varying degrees.
They wake up to find they now have all sorts of different credences about P, and none of them can remember what credence they all held yesterday. What should they do?

It seems our fallen ideal observers can begin to piece together what their original credence about P was by finding out more about their credences and remaining epistemic virtue, and so backpropagate their return to epistemic apotheosis. If they find they’re all similarly virtuous and are evenly scattered, their best guess is that the ideal observer was in the middle of the distribution (cf. the wisdom of crowds). If they see a trend that those with greater residual virtue tend to hold a higher credence in P, they should attempt to extrapolate this trend to suggest the ideal-agent origin from which they were differentially blown off course. If they see one group demonstrates a bias that others do not, they can correct the position of this group before trying these procedures. If they find the more virtuous agents are more scattered regarding P (or that they segregate into widely dispersed aggregations), this should make them very unsure about where the ideal observer initially was. And so on.

Such a model clarifies the benefit of modesty. Although we didn’t have some grand epistemic fall, it is clear we all fall manifestly short of an ideal observer. Yet we all fall short in different respects, and to different degrees. One should want to believe whatever one would believe if one were an ideal observer, shorn of one’s manifest epistemic vices. Purely immodest views must say their best guess is that the ideal observer would think the same as they do, and hope that all the vicissitudes of their epistemic vice happen to cancel out. By accounting for the distribution of cognisers, modesty allows a much better forecast, and so a much more accurate belief. And the best such forecast is the strong form of modesty, where one’s particular data point, in and of itself, should not be counted higher than any other.

Excursus: Against common justifications for immodesty

So much for strong modesty in theory. How does it perform in practice?

One rough heuristic for strong modesty is this: for any question, find the plausible expert class to answer that question (e.g. if P is whether to raise the minimum wage, talk to economists). If this class converges on a particular answer, believe that answer too. If they do not agree, have little confidence in any answer. Do this no matter what one’s impression of the object-level considerations that recommend (by one’s lights) a particular answer.

Such a model captures all the common-sense cases of modesty—trust the results in typical textbooks, defer to consensus in cases like when to put the bins out, and so on. I now show it also wins in many cases where people think it is better to be immodest.

Being ‘well informed’ (or even true expertise) is not enough

A common refrain is that one is entitled to ‘join issue’ with the experts due to having made some non-trivial effort at improving one’s knowledge of the subject. “Sure, I accept experts widely disagree on macroeconomics, but I’m confident in neo-Keynesianism after many months of careful study and reflection.”

This doesn’t fly, by the symmetry argument above. Our outsider observes widespread disagreement in the area of macroeconomics, and that many experts who spend years on the subject nonetheless greatly disagree. Although it is possible the ideal observer would have been in one or another of the ‘camps’ (the clustering implies intermediate positions are less plausible), the outsider cannot adjudicate which one if we grant that the economists in each camp appear to have similar levels of epistemic virtue. The balance of this outside view changes imperceptibly if another person who, despite a few months of study, remains nowhere near peerhood (let alone superiority) with these divided experts happens to side with one camp or another. By symmetry, one’s own view of the balance of reason should remain unchanged if this ‘another person’ happened to be you.

The same applies even if you are a bona fide expert. Unless the distribution of expertise is such that there is a lone ‘world authority’ above all others (and you’re them), your fellow experts form your epistemic peer group. Taking the outside view is still the better bet: the consensus of experts tends to be right more often than dissenting experts, and so some difference-splitting (weighted more to the consensus owing to their greater numbers) is the right answer.[9]

Common knowledge ‘silver bullet arguments’

Suppose one takes an introductory class in economics. From this, one sees there must be a ‘knock-down’ argument against a minimum wage:

Well, suppose you’re an employee whose true value on the free market is less than the minimum wage. Under the minimum wage, the firm might decide not to charitably employ you above your market value, and just fire you instead. You’re worse off, as you’re on the dole, and the firm’s worse off, as it has to meet its labour demand another way. Everyone’s lost! So much for the minimum wage!

Yet one quickly discovers economists seem to be deeply divided over the merits of the minimum wage (as they are about most other things). See for example this poll suggesting 38 economic experts in the US are pretty evenly divided on whether the minimum wage would ‘hit’ employment for low-skill workers, yet leant in favour of the minimum wage ‘all things considered’.

It seems risible to suppose these economists don’t know their Economics 101. What seems much more likely is that they know other things you don’t which make the minimum wage more reasonable than your jejune understanding of the subject suggests. One need not belabour which side the outside view strongly prefers.

Yet it is depressingly common for people to confidently hold that view X or Y is decisively refuted by some point or another, notwithstanding the fact that this point is well known to the group of experts who nonetheless hold X or Y. Of course, in some cases one really has touched on the decisive point the experts have failed to appreciate. More often, one is proclaiming that one is on the wrong side of the Dunning-Kruger effect.

Debunking the expert class (but not you)

To the litany of cases where (apparent) experts screwed up, we can add verses without end. So we might be inclined to debunk a particular ‘expert consensus’ due to some bias or irrationality we can identify. Thus, having seen there are no ‘real’ experts to help us, we must look at the object-level case.

The key question is this: “How are you better?” And it is here that debunking attempts often flounder:

An undercutting defeater for one aspect of the expert class’s epistemic superiority is not good enough. Maybe one can show the expert class has a poor predictive track record in their field. Unless one has a better track record in their field, this merely puts you on a par with respect to this desideratum of epistemic virtue. They likely have others (e.g. more relevant object-level knowledge) that should still give them an edge, albeit attenuated.

An undercutting defeater that seems to apply equally well to oneself as to the expert class also isn’t enough. Suppose (say) economics is riven by ideological bias: why are you less susceptible to these biases? The same ideological biases that might plague professional economists may also plague amateur economists, but the former retain other advantages.

Even if a proposed debunking is ‘selectively toxic’ to the experts versus you, they still might be your epistemic superiors all things considered. Both Big Pharma and professional philosophy may be misaligned, but perhaps not so much as to be orthogonal or antiparallel to the truth: both still expectedly benefit by finding drugs that work or making good arguments respectively. They may still fare better overall than ‘intelligent layperson who’s read extensively’, even if the latter is not subject to ‘publish or perish’ or similar.

Even if a proposed debunking shows one as decisively superior to that expert class, there may be another expert class which remains epistemically superior to you. Maybe you can persuasively show professional philosophers are so compromised on consciousness that they should not be deferred to about it. Then the real expert class may simply switch to something like ‘intelligent people outside the academy who think a lot about the topic’. If this group of people do not share your confidence in your view, it seems outsiders should still reject it—as should you.

It need not be said that the track record for these debunking defeaters is poor. Most crackpots have a persecution narrative to explain why the mainstream doesn’t recognise or understand them, and some of the most mordant criticisms of the medical establishment arise from those touting complementary medicine. Thus ‘explaining away’ expert disagreement may not put one in a more propitious reference class than one started from. One should be particularly suspicious of debunkings sufficiently general that the person holding the unorthodox view has no epistemic peers—they are akin to Moses, descending from Mt. Sinai, bringing down God-breathed truth for the rest of us.[10]

Private evidence and pet arguments

Suppose one thinks one is in receipt of a powerful piece of private evidence: maybe you’ve got new data or a new insight. So even though the experts are generally in the right, in this particular case they are wrong because they are unaware of this new consideration.

New knowledge will not spread instantaneously, and that someone can be ‘ahead of the curve’ comes as no surprise. Yet many people who take themselves to have private evidence are wrong: maybe experts know about it but don’t bother to discuss it because it is so weak, or it is already in the literature (but you haven’t seen it), or it isn’t actually relevant to the topic, or whatever else. Most mavericks who take themselves to have new evidence that overturns consensus are mistaken.

The natural risk is that people tend to be too partial to their pet arguments or pet data, and so give them undue weight; one’s ‘insider’ perceptions should perhaps be attenuated by this fact. I suspect most are overconfident here.[11] If this private evidence really is powerful, one should expect it to be persuasive to members of the expert class once they become aware of it. So it seems the credence one should have is the (appropriately discounted) forecast of what the expert class would think once you provide them this evidence.

The natural test of the power of this private evidence is to make it public. If one observes experts (or just epistemic peers) shift to your view, you were right about how powerful this evidence was. If instead one sees a much more modest change in opinion, this should lead one to downgrade one’s estimate of how powerful this evidence really is (and perhaps provide calibration data for next time). Holding instead that this really is decisive evidence leads one to the problematic ‘common knowledge silver bullet’ case discussed above. Inferring from this that experts just can’t understand your reasoning, or are biased against outsiders, or whatever else, produces a suspiciously self-serving debunking argument, also discussed above.


So much for the case in favour. What about the case against? I divide objections into those ‘in theory’ and those ‘in practice’.

In theory

There’s no pure ‘outside view’[12]

It is not the case that you can bootstrap an outside view from nothing. One needs at least to start with some considerations as to what makes one an epistemic peer or superior, and probably some minimal background knowledge of ‘aboutness’ to place topics under one or another expert class.

In the same way large amounts of our empirical information are now derived by instrument rather than direct application of our senses (but were ultimately germinated from direct sensory experience), large amounts of our epistemic information can be derived by deferring to better (or more) brains rather than using our own, even if this relies on some initial seed epistemology we have to realise for ourselves. This ‘germinal set of claims’ can still be modestly revised later.

Immodestly modest?

One line of attack from the social epistemology literature is that strong forms of modesty are self-defeating. If one is modest, one should presumably be modest about ‘What is the right way to form beliefs if epistemic peers disagree with you?’ Yet one finds that very few people endorse the sort of epistemic modesty advocated above. When one looks among potential expert classes, such as more intelligent friends of mine (i.e. friends of mine), epistemologists, and so on, conciliatory views like these command only a minority. So the epistemically modest should vanish as they defer to the more steadfast consensus.

If so, so much the worse for modesty. I offer a couple of incomplete defences:

One is to haggle over the topic of disagreement. In my limited reading of ‘equal weight/conciliatory views and their detractors’, I take the detractors to be suggesting something like “one is ‘within one’s rights’ to be steadfast”, rather than something like “you’re more accurate if you’re steadfast”. Maybe there are epistemic virtues which aren’t the same as being more accurate. Yet there may be less disagreement on ‘conditional on an accuracy-first view, is modesty the right approach?’

This only gets so far (after all, shouldn’t we be modest about whether to care only about accuracy?). A more general defence is this: the ‘what if you apply the theory to itself?’ problem looks pretty pervasive across theories.[13] Accounts of moral uncertainty that in whatever sense involve weighing normative theories by their plausibility tend to run into problems if the same accounts are applied ‘one level up’ to meta-moral uncertainty. Bayesian accounts of epistemology seem to go haywire if we think one should have a credence in Bayesian epistemology itself, especially if one assigns any non-zero credence to any theory which entails object-level credences have undefined values.

Closer to home, milder versions of conciliation (e.g. “Pay some attention to peer disagreement, but it’s not the only factor”) share a similarly troublesome recursive loop (“Well, I see most other people are steadfast, so I should update to be a bit less conciliatory, but now I have to apply my modified view to this disagreement again”), and neat convergence is not guaranteed. The theories which avoid this problem (e.g. ‘wholly steadfast, so peer disagreement should be ignored’) tend to be the least plausible on the object level (e.g. that if you believe bins are collected on Thursday, the fact all your neighbours have their bins out on Tuesday is not even reason to reconsider your belief).

A solution to these types of problems remains elusive. Yet modesty finds itself in fairly good company. It may be the case that a good resolution to this type of issue would rule out the strong form of modesty advocated here, in favour of some intermediate view. Until then, I hope the (admittedly inelegant) “Be modest, save for meta-epistemic norms about modesty itself” is not too great a cost to weigh against the merits of the approach.

In practice

I take most of the action to surround whether modesty makes sense as a practical procedure in the real world, even granting its ‘in theory’ virtue. Given the strength of the modesty I advocate, the fact that we use something like it in some cases, and can identify that it helps in others, is not enough. It needs to be shown to be a better strategy than even slightly weaker forms, in circumstances deliberately selected to pose the greatest challenge to strong modesty.

Trivial (and less trivial) non-use cases

For some topics there are no relevant epistemic peers or superiors to consider. This is commonly the case with pretty trivial beliefs (e.g. my desk is yellow).

Modesty also doesn’t help much for individual tastes, idiosyncrasies, or circumstances. If Adam works best listening to Bach and Beatrice to Beethoven, they probably won’t do better by ‘meeting in the middle’ and both going half-and-half for each (or maybe picking a composer intermediate in history, like Mozart). In any case, Adam is probably Beatrice’s significant epistemic superior on “What music does Adam work best listening to?”, and vice versa. One can also be credulous of claims like “It turned out this diet really helped my back pain”: perhaps it’s placebo, or perhaps it is one of those cases where different things work for different people, and one expects in such cases individuals to have privileged access to what worked for them.[14]

There will be cases where one re­ally is plow­ing a lonely fur­row where there aren’t any close epistemic peers or su­pe­ri­ors. It’s pos­si­ble I re­ally am the world’s lead­ing ex­pert on “How many counter-fac­tual DALYs does a doc­tor avert dur­ing their ca­reer?”, be­cause no one else has re­ally looked into this ques­tion. My cur­rent role in­volves in­ves­ti­gat­ing global catas­trophic biolog­i­cal risks, which ap­pears un­der­stud­ied to the point of be­ing pre-paradig­matic.

These comprise a very small minority of the topics I have credences about. Yet even here modesty can help. One can use more distant bodies of experts: I am reassured that my estimate for the ‘DALY question’ coheres with expert consensus that medical practice had a minor role in improvements to human health, for example. Even if I don’t have any epistemic peers, I can simulate some by asking, “If there were lots of people as or more reasonable than me looking at this, would I expect them to agree with my take?” Given that the econometric-esque methods I deploy to answer the ‘DALY question’ could probably be done better by an expert, and that in any case reasonable people are often sceptical of such methods in other areas, I am less confident of my findings than my ‘inside view’ suggests, which I take to be a welcome corrective to ‘pet argument’ biases.[15]

In the­ory, the world should be mad

Whether devoured by Moloch, burned by Ra, trapped by aberrant signalling equilibria, or whatever else, we can expect to be able to predict when apparent expert classes (and apparent epistemic peers) will collectively go wrong. With this knowledge, we can know on which topics we should expect to outperform expertise ourselves. Rather than the usual scenario of looking up (at experts) or around (at our peers), we find ourselves in many situations where those who are usually epistemic peers or superiors are below us; above us, only sky.

We could dis­t­in­guish two sorts of mad­ness, a sur­pris­ing ab­sence of ex­per­tise and a sur­pris­ing er­ror of ex­per­tise:

The former is a gap in the epistemic market. Although an important topic should be combed over by a body of experts, for whatever reason it isn’t, and so it takes surprisingly little effort to climb to the summit of epistemic superiority. In such cases our summaries of expert classes as ranging over a broad area conceal that the degree of expertise is very patchy: public health experts generally know a great deal about the health impacts of smoking; they usually know much less about the health impacts of nicotine.

The latter is a stronger debunking argument. One appeals to some feature of the world that generates expertise and suggests that these expertise-generating features are anti-correlated with the truth; thus one can adjudicate between warring expert camps (or just indict all so-called ‘experts’) on this basis. One strong predictor of incompatibilism regarding free will among philosophers is belief in God. If we are confident these beliefs in God are irrational, then we can winnow the expert class by this consideration and side with the compatibilist camp much more strongly.

Yet, similar to the problems of debunking mentioned earlier, that there is a good story suggesting one of these things does not imply one will do better ‘striking out on one’s own’. Even in diseased fields where accuracy is poorly correlated with expert activity, it is hard to think of cases where the two line up orthogonally or worse. Big pharma studies are infamous, but even if you’re in big pharma optimising for ‘can I get evidence to support my product’, your drug actually working does make this easier. Even in pre-replication-crisis psychology, true results would be overrepresented versus false ones in the literature compared to some base rate across generated hypotheses.

The ‘residual’ expert class still often remains better. Although most public health experts know little about nicotine per se, there are some nearby health experts, perhaps scattered across our common-sense demarcation of fields, who do know about the impacts of nicotine. It may still take quite a lot of effort to reach parity with, or superiority to, these. Even if we strike all theists from the ranks of free will philosophers, compatibilism does not rise close to unanimity, which cautions against extremely high confidence that it is the correct view.[16] So, I aver, the world is not that mad.

Em­piri­cally, the world is mad

One can offer a more direct demonstration of world madness, and so against modesty: outperformance.

A com­mon re­ply is to point to a par­tic­u­lar case where those be­ing mod­est would have got­ten it wrong. There are lots of cases where am­a­teurs and mav­er­icks were ridiculed by com­mon sense or ex­perts-at-the-time, only to be sub­se­quently vin­di­cated.

Another problem is that the modest view introduces a lag: it seems one often needs to wait for new information to take root among one’s epistemic peers before changing one’s view, whilst a cogniser relying just on the object level can update on correct arguments ‘at first sight’. It is often crucially important to be fast as well as right in both empirical and moral matters: it is extremely costly if a view makes one slower to recognise (among many other past moral catastrophes) the horror of slavery.

Yet modesty need not be infallible, merely an improvement. Citing cases where it goes poorly is (hopefully less than) half the story. Modesty does worse in cases where the maverick is right, yet better where the maverick is wrong: there are more cases of the latter than the former. Modesty does worse in being sluggish to respond to moral revolutions, yet better at avoiding being swept away by waves of mistaken sentiment: again, the latter seem more common than the former.[17]

Maybe one can follow a strategy of ‘picking the hits’, carving out exceptions so as to have a superior track record. Yet, empirically, I don’t see it. When I look at people who are touted as particularly good ‘correct contrarians’, I see at best something like an ‘epistemic venture capitalist’: their bold contrarian guesses are right more often than chance, but not right more often than not. They appear by my lights unable to judiciously ‘pick their battles’, staking out radical views on topics where there isn’t a good story as to why the experts would be getting it wrong (still less why they’re more likely to get it right). So although they do get big wins, the modal outcome of their contrarian take is a bust.[18]

Modesty should price the views of better-than-chance contrarians into how it weighs consensus. Confidence in a consensus view should fall if a good contrarian takes aim at it, but not so much that one now takes the contrarian view oneself. If one happens to be a particularly successful contrarian, one should follow the same approach: “I get these right surprisingly often, but I’m still wrong more often than not, so it might be worth looking into this further to see if I can strike gold, but until then I should bank on the consensus view.”

Ex­pert groups are sel­dom in re­flec­tive equilibrium

Even if mod­esty works well in the ideal case of a clearly iden­ti­fied ‘ex­pert class’, it can get a lot messier in re­al­ity:

  1. Suppose one is in the early 1940s and asks, “Are there going to be explosives many orders of magnitude more powerful than current explosives?” One can imagine that if one consulted explosive experts (however we cash that out), their consensus would generally say ‘no’. If one was able to talk to the physicists working on the Manhattan Project, they would say ‘yes’. Which one should an outside view believe?[19]

  2. Most people believe God exists (the so-called ‘common consent argument’ for God’s existence); if one looks at potential expert classes (e.g. philosophers, or more intelligent people), most of them are atheists. Yet if one looks at philosophers of religion (who spend a lot of time on arguments for or against God’s existence), most of them are theists, though maybe there’s a gradient within them too. Which group, exactly, should be weighed most heavily?

So constructing the ideal ‘weighted consensus’ modesty recommends deferring to can become a pretty involved procedure. One must carefully divine whether a given topic lies closer to the magisterium of one or another putative expert class (e.g. maybe one should lean more towards the physicists, as the question is really more ‘about physics’ than ‘about explosives’). One might have to carefully weigh up the relevant epistemic virtues of various expert classes that appear far from reflective equilibrium with one another (so perhaps one might use the likely selection effects in philosophy of religion to partly discount the apparent support it provides). One might have to delve into complicated issues of independence: although most people may believe God exists, unlike guesses of how many skittles are in the jar, they are not all forming this belief independently from one another.[20]

This exercise begins to look increasingly inside-view-esque. Trying to determine the right magisterium involves getting closer to object level considerations about the ‘aboutness’ of topics; trying to tease apart issues of independence and selection amounts to looking at belief-forming practices, and veers close to object level justifications for the belief in question. At some point it becomes extraordinarily challenging to back-trace from all these factors to the likely position of the ideal observer: the degrees of freedom these considerations invite (and the challenge in estimating them reliably) make strong modesty go worse.

One should not give up too early, though: modesty can still work pretty well even in these tricky cases. One can ask whether there was any communication between the classes, and if so any direction of travel (e.g. did some explosive experts end up talking to the physicists, and agreeing they were right? Vice-versa?). Even if they were completely isolated, one can ask whether a third group having access to both made a decision (e.g. the agreement of the U.S. and German governments with the implied view of the physicists). This is a lot more involved, but the expected ‘accuracy yield per unit time spent’ may still be greater than (for example) making a careful study of the relevant physics.

A broader modification would be ‘immodest over the web of belief, but modest over the weights’: one uses an inside view to piece together the graph of considerations around P, but still defers to consensus on the weights. This may avoid cases where (for example) strong modesty would mistake astronomers for the expert class on whether space travel is feasible (versus primordial rocket scientists): astronomers and rocket scientists agreed about the necessary acceleration, but astronomers were inexpert on the key question of whether that acceleration could be produced.[21]

What if one cannot even do that? Then modesty (rightly) offers a counsel of despair. If an area is so fractious there’s no agreement, with no way to see which of numerous disparate camps has better access to the truth of the matter; so suffused with bias that even those with apparent epistemic virtues (e.g. judgement, intelligence, subject-matter knowledge) cannot be seen to even tend towards the truth; what hope does one have to do better than they? In attempting to thread the needle through these hazards towards the right judgement, one will almost certainly run aground somewhere or somehow, like all one’s epistemic peers or superiors who made the attempt before. Perhaps reality obliges us to undertake these doxastic suicide missions from time to time. If modesty cannot help us, it can at least provide the solace of a pre-emptive funeral, rather than (as immodest views would) cheer us on to our almost certain demise.

Some­what satis­fy­ing Shul­man

Carl Shul­man en­courages me to offer my cre­dences and ra­tio­nale in cases he takes to be par­tic­u­larly difficult for my view, and sug­gests in these cases I ei­ther ar­rive at ab­surd cre­dences or I am covertly aban­don­ing the strong mod­esty ap­proach. I offer these be­low for read­ers to de­cide—with the rider that if these are in fact ab­surd, ‘I’m an idiot’ is a com­pet­ing ex­pla­na­tion to ‘strong mod­esty is a bad epistemic prac­tice’ (and that, as­suredly, what­ever one’s cre­dence on the lat­ter, one’s cre­dence in the former should be far greater).

Propo­si­tion (roughly); Cre­dence (ish); (Modesty-based) ra­tio­nale, in sketch

Theism; 0.1[22]; Mostly discount common consent (non-independence) and philosophy of religion (selection). Takes a major hit from the tendency of more intelligent and better informed people to be atheist, but I struggle to extrapolate this closer to 0 given existence proofs of very epistemically virtuous religious people.

Liber­tar­ian free will; 0.1; Com­mands a non-triv­ial minor­ity across vir­tu­ous epistemic classes (philoso­phers, in­tel­li­gent peo­ple, etc), only some­what de­graded by se­lec­tion wor­ries.

Je­sus rose from the dead; 0.005; Chris­ti­an­ity in par­tic­u­lar a very small frac­tion of pos­si­bil­ity space of Theism. Sup­port from its wide­spread sup­port is mostly (but not wholly) screened off by non-in­de­pen­dence effects. Rele­vant (but dis­tant) ex­pert classes in his­tory etc. weigh ad­versely.

There has been a case of cold fu­sion; 10^-5; Strong pan sci­en­tific con­sen­sus against, cold fu­sion com­mu­nity looks rene­gade and much less epistem­i­cally vir­tu­ous. Base rate of these con­di­tional on no effect gives very ad­verse refer­ence class.

ESP; 10^-6; Very strong (but not complete) scepticism among elite common sense, scientists, etc; bad predictive track records for ESP researchers; distant consensuses highly adverse. Some greatly attenuated boost from survey data/small fraction of reasonable believers.

Prac­ti­cal challenges to immodesty

Modesty can lead to double-counting, or even groupthink. Suppose in the original example Beatrice does what I suggest and revises her credence to 0.6, but Adam doesn’t. Now Charlie forms his own view (say 0.4 as well) and follows the same procedure as Beatrice, so Charlie now holds a credence of 0.6 as well. The average should be lower: (0.8+0.4+0.4)/3, not (0.8+0.6+0.4)/3; the result is distorted by using one-and-a-half helpings of Adam’s credence. In larger cases one can imagine people wrongly deferring to hold consensus around a view they should think is implausible, and in general there is the nigh-intractable challenge of trying to infer cases of double counting from the patterns of ‘all things considered’ evidence.
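The arithmetic of the distortion can be made explicit. A minimal sketch in Python, reproducing the two averages in the example above (the variable names are purely illustrative):

```python
# 'By my lights' credences, each formed independently.
adam, beatrice, charlie = 0.8, 0.4, 0.4

# Correct pooling: each independent view is counted exactly once.
correct = (adam + beatrice + charlie) / 3            # ~0.533

# Distorted pooling: Beatrice first splits the difference with Adam,
# and her already-averaged figure then enters the pool alongside
# Adam's own report, so Adam's credence is counted one-and-a-half times.
beatrice_updated = (adam + beatrice) / 2             # 0.6
distorted = (adam + beatrice_updated + charlie) / 3  # 0.6

assert round(correct, 3) == 0.533
assert abs(distorted - 0.6) < 1e-9
```

The distorted pool drifts towards Adam’s view simply because his credence was reported twice over, once directly and once inside Beatrice’s update.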

One can rectify this by distinguishing ‘credence by my lights’ versus ‘credence all things considered’. So one can say “Well, by my lights the credence of P is 0.8, but my actual credence is 0.6, once I account for the views of my epistemic peers etc.” Ironically, one’s personal ‘inside view’ of the evidence is usually the most helpful credence to publicly report (as it helps others modestly aggregate), whilst one’s all-things-considered modest view is usually for private consumption.

Com­mu­nity benefits to immodesty

Modesty could be par­a­sitic on a com­mu­nity level. If one is mod­est, one need never trou­ble one­self with any ‘ob­ject level’ con­sid­er­a­tions at all, and sim­ply cul­ti­vate the ap­pro­pri­ate weight­ing of con­sen­suses to defer to. If ev­ery­one free-rode like that, no one would dis­cover any new ev­i­dence, have any new ideas, and so col­lec­tively stag­nate.[23] Progress only hap­pens if peo­ple get their hands dirty on the ob­ject-level mat­ters of the world, try to build mod­els, and make some guesses—some­times the ex­perts have got­ten it wrong, and one won’t ever find that out by defer­ring to them based on the fact they usu­ally get it right.[24]

The distinction between ‘credence by my lights’ versus ‘credence all things considered’ allows the best of both worlds. One can say ‘by my lights, P’s credence is X’ yet at the same time ‘all things considered, I take P’s credence to be Y’. One can form one’s own model of P, think the experts are wrong about P, and marshal evidence and arguments for why you are right and they are wrong; yet soberly realise that the chances are you are the one mistaken; yet also think this effort is nonetheless valuable, because even if one is most likely heading down a dead-end, the corporate efforts of people like you promise a good chance of someone finding a better path.

Scott Sum­ner seems to do some­thing similar:

In macro, it’s im­por­tant for peo­ple like me to always search for the truth, and reach con­clu­sions about eco­nomic mod­els in a way that is in­de­pen­dent of the con­sen­sus model. In that way, I play my “worker ant” role of nudg­ing the pro­fes­sion to­wards a greater truth. But at the same time we need to rec­og­nize that there is noth­ing spe­cial about our view. If we are made dic­ta­tor, we should im­ple­ment the con­sen­sus view of op­ti­mal policy, not our own. Peo­ple have trou­ble with this, as it im­plies two lev­els of be­lief about what is true. The view from in­side our mind, and the view from 20,000 miles out in space, where I see there is no ob­jec­tive rea­son to fa­vor my view over Krug­man’s.

Despite this example, maybe it is the case that ‘having a creative brain which makes big discoveries’ is anticorrelated with ‘having a sober brain well-calibrated to its limitations compared to others’: anecdotally, eccentric views among geniuses are common. Maybe for most it isn’t psychologically tenable to spend one’s life investigating a renegade view one ultimately thinks is likely a dead-end, and in fact people doing groundbreaking research generally have to be overconfident to do the best science. If so, we should act communally to moderate this cost, but not celebrate it as a feature.

Not everyone has to be working on discovering new information. One could imagine a symbiosis between eccentric overconfident geniuses, whose epistemic comparative advantage is to gambol around idea-space finding new considerations, and well-calibrated thoughtful people, whose comparative advantage is in soberly weighing considerations to arrive at a well-calibrated all-things-considered view.

Conclusion: a paean, and a plea

I have argued above for a strong approach to modesty, one which implies that (at least in terms of one’s ‘all things considered view’) one’s view of the object level merits counts for very little. Even if I am mistaken about the ideal strength of modesty, I am highly confident both the EA and rationalist communities err in the ‘insufficiently modest’ direction. I close with these remarks.

Ra­tion­al­ist/​EA ex­cep­tion­al­ism

Both communities endure a steady ostinato of complaints about arrogance. They’ve got a point. I despair at seeing some wannabe-iconoclast spout off about how obviously the solution to some famously recondite issue is X, and how the supposed experts who disagree obviously just need to better understand the ‘tenets of EA’ or the Sequences. I become lachrymose when further discussion demonstrates said iconoclast has a shaky grasp of the basics, is recapitulating points already better-discussed in the literature, and so forth.[25]

To stress (and to pre-empt), the problem is not, “You aren’t kowtowing appropriately to social status!” The problem is considerable over-confidence married with inadequate understanding. This both looks bad to outsiders[26] and is bad in itself, as the individual (and the community) could get to the truth faster if they were more modest about their likely position in the distribution of knowledge about X, and then did commonsensical things to improve it.

Con­sider Gell-Mann am­ne­sia (via Michael Crich­ton):

You open the news­pa­per to an ar­ti­cle on some sub­ject you know well. In Mur­ray’s case, physics. In mine, show busi­ness. You read the ar­ti­cle and see the jour­nal­ist has ab­solutely no un­der­stand­ing of ei­ther the facts or the is­sues. Often, the ar­ti­cle is so wrong it ac­tu­ally pre­sents the story back­ward—re­vers­ing cause and effect. I call these the “wet streets cause rain” sto­ries. Paper’s full of them.
In any case, you read with ex­as­per­a­tion or amuse­ment the mul­ti­ple er­rors in a story, and then turn the page to na­tional or in­ter­na­tional af­fairs, and read as if the rest of the news­pa­per was some­how more ac­cu­rate about Pales­tine than the baloney you just read. You turn the page, and for­get what you know.

Gell-Mann cases invite inferring adverse judgements by extrapolating from an instance of poor performance. When experts in multiple different subjects say the same thing (i.e. Murray and Crichton chatted to an expert on Palestine who had the same impression), this adverse inference gets all the stronger.

I think we have evidence that some-to-many pieces of work or corporate projects in our community share this property: although they might look good or groundbreaking to us as relatively less-informed observers, domain experts in the fields they touch upon tend to report the work is misguided or rudimentary. Although it is possible to indict all these judgements, akin to a person who gives very adverse accounts of all of their previous romantic partners, we may start to wonder about a common factor explanation. Our collective ego is writing checks our epistemic performance (or, in candour, performance generally) cannot cash; general ignorance, rather than particular knowledge, may explain our self-regard.

To dis­cover, not summarise

It is thought that to make the world go bet­ter new things need to be dis­cov­ered, above and be­yond mak­ing sound judge­ments on ex­ist­ing knowl­edge. Quickly mak­ing ac­cu­rate de­ter­mi­na­tions of the bal­ance of rea­son for a given is­sue is greatly valuable for the lat­ter, but not so much for the former.

Yet the two should not be confused. If one writes a short overview of a subject ‘for internal consumption’ which gives a fairly good impression of what a particular view should be, one should not be too worried if a specialist complains that one hasn’t covered all the topics as adequately as one might. However, if one is aiming to write something which articulates an insight or understanding not just novel to the community, but novel to the world, one should be extremely concerned if domain experts review this work and say things along the lines of, “Well, this is sort of a potted recapitulation of work in our field, and this insight is widely discussed”.

Yet I see this happen a lot with things we tout as ‘breakthrough discoveries’. We want to avoid cases where we waste our time in unwitting recapitulation, or fail to catch elementary mistakes. Yet too often we license ourselves to pronounce these discoveries without sufficient modesty in cases where there’s already a large expert community working on similar matters. This does not preclude such discoveries, but it counsels us to check carefully first. On occasions where I take myself to have a new insight in areas outside my field (most often philosophy), I am extremely suspicious of my supposed discovery: all too often it would arise from my misunderstanding, or already be in the literature somewhere I haven’t looked. I carefully consult the literature as best I can, and run the idea by true domain experts, to rule out these possibilities.[27]

Others seem to lack this modesty, and so predictably err. More generally, a more modest view of ‘intra-community versus outside competence’ may also avoid cases of having to reinvent the wheel (e.g. that scoring rule you spent six months deriving for a karma system is in this canonical paper), or of an effort derailing (e.g. oh drat, our evaluation provides worthless data because of reasons we could have known from googling ‘study design’).

Para­dox­i­cally patholog­i­cal mod­esty

If the EA and rationalist communities comprised a bunch of highly overconfident and eccentric people buzzing around bumping their pet theories together, I might worry about overall judgement and how much novel work gets done, but I would at least grant this looks like fertile ground for new ideas to be developed.

Alas, not so much. What oc­curs in­stead is agree­ment ap­proach­ing fawn­ing obei­sance to a small set of peo­ple the com­mu­nity anoints as ‘thought lead­ers’, and so cen­tral­iz­ing on one par­tic­u­lar ec­cen­tric and over­con­fi­dent view.[28] So al­though we may preach im­mod­esty on be­half of the wider com­mu­nity, our prac­tice within it is much more defer­en­tial.

I hope a better understanding of modesty can get us out of this ‘worst of both worlds’ scenario. For one, it can at least provide better ‘gurus’ to defer to. Modesty also helps correct the overly wide gap we perceive between our gurus and other experts, and the overly narrow gap between ‘intelligent layperson in the community’ and ‘someone able to contribute to the state of the art on a topic of interest’. Some topics are really hard: becoming someone with ‘something useful to say’ about them takes not days but years; there are many deep problems we must concern ourselves with; the few we select as champions, despite their virtues, cannot do them all alone; and we need all the outside help we can get.


What the EA com­mu­nity mainly has now is a briar-patch of dilet­tantes: each ranges widely, but with shal­low roots, form­ing whorls around oth­ers where it deems it can find sup­port. What it needs is a for­est of ex­perts: each spread­ing not so widely; form­ing a deeper foun­da­tion and gath­er­ing more re­sources from the com­mon ground; stand­ing apart yet taller, and in con­cert pro­duc­ing a ver­dant canopy.[29] I hope this trans­for­ma­tion oc­curs, and aver mod­esty may help effect it.


I thank Joseph Car­l­smith, Owen Cot­ton-Bar­ratt, Eric Drexler, Ben Garfinkel, Rox­anne He­ston, Will MacAskill, Ben Pace, Ste­fan Schu­bert, Carl Shul­man, and Pablo Staffor­ini for their helpful dis­cus­sion, re­marks, and crit­i­cism. Their kind help does not im­ply their agree­ment. The er­rors re­main my own.

[1] Much of this fol­lows dis­cus­sion in the so­cial episte­mol­ogy liter­a­ture about con­cili­a­tion­ism, or the ‘equal weight view’. See here for a summary

[2] They also ar­gue at length about the ap­pro­pri­ate weight each of these con­sid­er­a­tions should have on the scales of judge­ment. I sug­gest (al­though this is not nec­es­sary for this ar­gu­ment) that in many cases most of the ac­tion lies in judg­ing the ‘power’ of ev­i­dence. In most cases I ob­serve peo­ple agree that a given con­sid­er­a­tion C in­fluences the cre­dence one holds in P; they usu­ally also agree in its qual­i­ta­tive di­rec­tion; the challenge comes in try­ing to weigh each con­sid­er­a­tion against the oth­ers, to see which con­sid­er­a­tions one’s cre­dence over P should pay the great­est at­ten­tion to.

This may rep­re­sent a gen­eral fea­ture of webs of be­lief be­ing dense and many-many (A given cre­dence is in­fluenced by many other con­sid­er­a­tions, and forms a con­sid­er­a­tion for many cre­dences in turn), or it may sim­ply be a par­tic­u­lar fea­ture of webs of be­lief in which hu­mans perform poorly: al­though I am con­fi­dent I can de­ter­mine the sign of a par­tic­u­lar con­sid­er­a­tion, I gen­er­ally don’t back my­self to hold cre­dences (or like­li­hood ra­tios) to much greater pre­ci­sion than the first sig­nifi­cant digit, and I (and, per­haps, oth­ers) strug­gle in cases where large num­bers of con­sid­er­a­tions point in both di­rec­tions.

[3] In the liter­a­ture this is called ‘straight av­er­ag­ing’. For a va­ri­ety of tech­ni­cal rea­sons this doesn’t quite work as a peer up­date rule. That said, given things like bayesian ag­gre­ga­tion re­main some­what open prob­lems, I hope read­ers will ac­cept my promis­sory note that there will be a more pre­cise ac­count which will pro­duce effec­tively the same re­sults (maybe ‘ap­prox­i­mately split­ting the differ­ence’) through the same mo­ti­va­tion.

[4] C.f. Au­mann’s agree­ment the­o­rem. As an aside (which I owe to Carl Shul­man), straight av­er­ag­ing will not work in some de­gen­er­ate cases where (similar to ‘com­mon knowl­edge puz­zles’) one can in­fer pre­cise ob­ser­va­tions from the prob­a­bil­ities stated. The neat­est ex­am­ple I can find comes from Hal Fin­ney (see also):

Sup­pose two coins are flipped out of sight, and you and an­other per­son are try­ing to es­ti­mate the prob­a­bil­ity that both are heads. You are told what the first coin is, and the other per­son is told what the sec­ond coin is. You both re­port your ob­ser­va­tions to each other.
Let’s suppose that they did in fact fall both heads. You are told that the first coin is heads, and you report the probability of both heads as 1/2. The other person is told that the second coin is heads, and he also reports the probability as 1/2. However, you can now both conclude that the probability is 1, because if either of you had been told that the coin was tails, he would have reported a probability of zero. So in this case, both of you update your information away from the estimate provided by the other.
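Finney’s case can be checked by brute enumeration. A minimal sketch in Python (assuming only that the four flip outcomes are equally likely):

```python
from fractions import Fraction

# The four equally likely outcomes of (coin1, coin2).
outcomes = [(c1, c2) for c1 in "HT" for c2 in "HT"]

def report(coin_index, observed):
    """P(both heads) given knowledge of one's own coin only."""
    pool = [o for o in outcomes if o[coin_index] == observed]
    return Fraction(sum(o == ("H", "H") for o in pool), len(pool))

# Both coins in fact fell heads: each observer initially reports 1/2.
assert report(0, "H") == Fraction(1, 2)
assert report(1, "H") == Fraction(1, 2)

# But a report of 1/2 is only possible after seeing heads (seeing tails
# forces a report of 0), so each report reveals the reporter's coin:
# conditioning on both reports leaves only the double-heads outcome.
consistent = [o for o in outcomes
              if report(0, o[0]) == Fraction(1, 2)
              and report(1, o[1]) == Fraction(1, 2)]
assert consistent == [("H", "H")]  # i.e. both update to probability 1
```

Rather than splitting the difference between the two 1/2 reports, each party infers the other’s observation from the report itself and jumps to 1, which is why straight averaging fails in these degenerate cases.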

[5] To motivate: Adam and Beatrice no longer know whether the reasons they hold for or against P are private evidence. Yet (given epistemic peerhood), they have no principled reason to suppose “I know something they don’t” is more plausible than the opposite. So again they should be symmetrical.

[6] (On which more later.) It is worth making clear that the possibility of bias for either Adam or Beatrice doesn’t change the winning strategy in expectation. Say Adam’s credence for P is in fact biased upwards by 0.4. If Adam knows this, he can adjust and become unbiased; if Oliver or Beatrice knows this (and knows Adam doesn’t), this breaks peerhood for Adam, but they can simulate an unbiased Adam* which would remain a peer, and act accordingly. If none of them know this, then Beatrice wins, as does Oliver following a non-averaging ‘go with Beatrice’ strategy. Yet this is simply epistemic luck: without such information, all reasonable candidate prior distributions of (Adam’s bias minus Beatrice’s bias) are symmetrical about 0.

[7] Another benefit of mod­esty is speed: Although it is the case Adam and Beatrice’s cre­dence (and thus the av­er­age) gets more ac­cu­rate if they have time to dis­cuss it, and so catch one an­other if they make a mis­take or re­veal pre­vi­ously-pri­vate ev­i­dence, av­er­ag­ing is faster and the trade-off in time for bet­ter pre­ci­sion may not be worth it. It still re­mains the case, as per the first ex­am­ple, that they still do bet­ter, af­ter this dis­cus­sion, if they meet in the mid­dle on resi­d­ual dis­agree­ment.

[8] A further (albeit minor and technical) dividend is that although individual guesses may form any distribution (for which the standard deviation may not be a helpful summary), the central limit theorem applies to the distribution of the average of guesses, so it tends to normality.
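To illustrate, a minimal simulation sketch (the lognormal guess distribution is an arbitrary stand-in for ‘any skewed distribution’):

```python
import random
import statistics

random.seed(0)  # fix the simulation for reproducibility

def guess():
    # Individual guesses drawn from a heavily right-skewed distribution.
    return random.lognormvariate(0, 1)

def skewness(xs):
    # Sample skewness: mean cubed z-score; ~0 for symmetric data.
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return statistics.fmean(((x - m) / s) ** 3 for x in xs)

singles = [guess() for _ in range(20_000)]
averages = [statistics.fmean(guess() for _ in range(100)) for _ in range(2_000)]

# The distribution of averages of 100 guesses is far closer to symmetric
# (skewness near 0) than individual guesses are, as the CLT predicts.
assert abs(skewness(averages)) < abs(skewness(singles)) / 3
```

The same averaging that improves accuracy also tames the shape of the distribution, which is why the standard deviation becomes a sensible summary of pooled guesses even when it is a poor summary of individual ones.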

[9] Even if one is the world authority, there should be some deference to lesser experts. In cases where the world expert is an outlier, one needs to weigh up numbers versus (relative) epistemic superiority to find the appropriate middle.


[10]

God from the Mount of Sinai, whose gray top
Shall tremble, he descending, will himself
In Thunder Lightning and loud Trumpets sound
Ordaine them Lawes…
  • Milton, Paradise Lost

[11] I take the general pattern that strong modesty usually immunises one against common biases as a further point in its favour.

[12] I owe this to Eric Drexler.

[13] A related philosophical defence would point out that the self-undermining objection would only apply to whether one should believe modesty, not whether modesty is in fact true.

[14] I naturally get much more sceptical if that person then generalises from this N=1 uncontrolled unblinded crossover trial to others, or takes it as lending significant support against some particular expert consensus or expertise more broadly: “Doctors don’t know anything about back pain! They did all this rubbish, but I found out all anyone needs to do is cut carbs!”

[15] It also provokes fear and trembling in my pre-paradigmatic day job, given I don’t want the area to have strong founder effects which poorly track the truth.

[16] For example:

One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own.

[17] Aside: a related consideration is ‘optimal damping’ of credences, which is closely related to resilience. Very volatile credences may represent the buffeting of a degree of belief by evidence large relative to one’s prior, but may also represent poor calibration in overweighing new evidence (and vice versa). The ‘ideal’ response in terms of accuracy is given by standard theory. Yet it is also worth noting that one may, for prudential reasons, want to introduce further lag or lead, akin to the ‘D’ or ‘I’ components of a PID controller. For large irreversible decisions (e.g. career choice) it may be better to wait a while after one’s credences support a change before changing one’s action; in the case of a new moral consideration it may be better to act ‘in advance’ for precautionary principle-esque reasons.
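A toy sketch of the ‘further lag’ idea (entirely my own construction; the threshold, smoothing constant, and patience values are arbitrary assumptions): damp the credence stream with exponential smoothing, and only change action once the smoothed credence has stayed over the decision threshold for several consecutive periods.

```python
def should_act(credences, threshold=0.7, smoothing=0.3, patience=3):
    """Return the first period at which a change of action is warranted,
    or None. `smoothing` damps volatile updates; `patience` adds lag,
    which is prudent for large irreversible decisions."""
    smoothed = credences[0]
    run = 0
    for t, c in enumerate(credences):
        smoothed += smoothing * (c - smoothed)  # exponential smoothing
        run = run + 1 if smoothed > threshold else 0
        if run >= patience:
            return t
    return None

# A brief spike in credence does not trigger a change of action...
assert should_act([0.5, 0.9, 0.9, 0.5, 0.5, 0.5, 0.5]) is None
# ...but a sustained shift eventually does.
assert should_act([0.5, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9]) is not None
```

Acting ‘in advance’ for precautionary reasons would correspond to the opposite tweak: lowering the threshold or patience rather than raising them.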

[18] (Owed to Will MacAskill) There’s also a selection effect: of a sample of ‘accurate contrarians’, many may be lucky rather than good.

[19] I owe this particular example to Eric Drexler, but similar counter-examples along these lines to Carl Shulman.

[20] Another general worry is that these difficult-to-divine considerations offer plenty of fudge factors: both to make modesty get the ‘right answer’ in historical cases, and to fudge present areas of uncertainty to get results that accord with one’s prior judgement.

[21] I owe both this modification and example to discussions with Eric Drexler. There are some costs: one may think there are cases where one should defer to an outside view on the web of belief (e.g. the Christian apologist: “Sure, I agree with scientific consensus that it’s improbable Jesus rose naturally from the dead, but the key argument is whether Jesus rose supernaturally from the dead. So the consensus of philosophers of religion is the right expert class.”) The balance of merit overall is hard to say, but such a modification still looks like pretty strong modesty.

[22] In conversation I recall a suggestion by Shulman that such a credence should change one’s behaviour regarding EA: maybe one should do theology research in the hope of finding a way to extract infinite value, etc. Yet the expert class for action|Theism gives a highly adverse prior: virtually no actual theists (regardless of theological expertise, within or outside EA) advocate this.

[23] I understand a similar point is raised in economics regarding the EMH and the success of index funds: someone has to do the price discovery.

[24] I owe this mainly to Ben Pace; Andrew Critch argues similarly.

[25] For obvious reasons I’m reluctant to cite specific examples. I can offer some key words for the sort of topics where I see this problem as endemic: many-worlds, population ethics, free will, p-zombies, macroeconomics, meta-ethics.

[26] C.f. Augustine, On the Literal Meaning of Genesis:

Usually, even a non-Christian knows something about the earth, the heavens, and the other elements of this world, about the motion and orbit of the stars and even their size and relative positions, about the predictable eclipses of the sun and moon, the cycles of the years and the seasons, about the kinds of animals, shrubs, stones, and so forth, and this knowledge he holds to as being certain from reason and experience. Now, it is a disgraceful and dangerous thing for an infidel to hear a Christian, presumably giving the meaning of Holy Scripture, talking nonsense on these topics; and we should take all means to prevent such an embarrassing situation, in which people show up vast ignorance in a Christian and laugh it to scorn.

[27] I’m uncommonly fortunate that for me such domain experts are both nearby and generous with their attention. Yet this obstacle is not insurmountable. An idea (which I owe to Pablo Stafforini) is that a contrarian and a sceptic of the contrarian view could bet on whether a given expert, on exposure to the contrarian view, would change their mind as the contrarian predicts. S may bet with C: “We’ll pay some expert $X to read your work explicating your view; if they change their mind significantly in favour (however we cash this out) I’ll pay the $X; if not, you pay the $X.”

[28] C.f. Askell’s and Page’s remarks on ‘buzz’.

[29] Perhaps unsurprisingly, I would use a more modest ecological metaphor in my own case. In reclaiming extremely inhospitable environments, the initial pioneer organisms die rapidly. Yet their corpses sustain detritivores, and little by little, an initial ecosystem emerges, to be succeeded by others. In a similar way, I hope that the detritus I provide will, after a fashion (and a while), become the compost in which an oak tree grows.