Common sense as a prior

Introduction

[I have ed­ited the in­tro­duc­tion of this post for in­creased clar­ity.]

This post is my attempt to answer the question, “How should we take account of the distribution of opinion and epistemic standards in the world?” By “epistemic standards,” I roughly mean a person’s way of processing evidence to arrive at conclusions. If people were good Bayesians, their epistemic standards would correspond to their fundamental prior probability distributions. At a first pass, my answer to this question is:

Main Recom­men­da­tion: Believe what you think a broad coal­i­tion of trust­wor­thy peo­ple would be­lieve if they were try­ing to have ac­cu­rate views and they had ac­cess to your ev­i­dence.

The rest of the post can be seen as an attempt to spell this out more precisely and to explain, in practical terms, how to follow the recommendation. Note that there are therefore two broad ways to disagree with the post: you might disagree with the main recommendation, or with the guidelines for following the main recommendation.

The rough idea is to try to find a group of people who are trustworthy by clear and generally accepted indicators, and then use an impartial combination of the reasoning standards that they use when they are trying to have accurate views. I call this impartial combination elite common sense. I recommend using elite common sense as a prior in two senses. First, if you have no unusual information about a question, you should start with the same opinions as the broad coalition of trustworthy people would have. But their opinions are not the last word, and as you get more evidence, it can be reasonable to disagree. Second, a complete prior probability distribution specifies, for any possible set of evidence, what posterior probabilities you should have. In this deeper sense, I am not just recommending that you start with the same opinions as elite common sense, but also that you update in ways that elite common sense would agree are the right ways to update. In practice, we can’t specify the prior probability distribution of elite common sense or calculate the updates, so the framework is most useful from a conceptual perspective. It might also be useful to consider the output of this framework as one model in a larger model combination.
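To make the two senses of “prior” concrete, here is a minimal toy sketch in Python. All of the numbers are invented for illustration, and the post itself treats the framework as conceptual rather than something to calculate with: the elite common sense view plays the role of the prior, and your unusual evidence enters as a likelihood ratio that you believe elite common sense would accept as a reasonable basis for updating.

```python
# Toy illustration of "elite common sense as a prior". All numbers are invented;
# the framework itself is conceptual and not something we can actually calculate.

def update(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Sense 1: with no unusual information, simply adopt the elite common sense view.
elite_prior = 0.30  # hypothetical credence of a broad coalition of trustworthy people

# Sense 2: with unusual evidence, update the way elite common sense would endorse.
# Suppose your evidence is 4 times as likely if the claim is true as if it is false,
# and you think the trustworthy coalition would accept that assessment.
posterior = update(elite_prior, likelihood_ratio=4.0)
print(round(posterior, 3))  # 0.632
```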

I am aware of two rel­a­tively close in­tel­lec­tual rel­a­tives to my frame­work: what philoso­phers call “equal weight” or “con­cili­a­tory” views about dis­agree­ment and what peo­ple on LessWrong may know as “philo­soph­i­cal ma­jori­tar­i­anism.” Equal weight views roughly hold that when two peo­ple who are ex­pected to be roughly equally com­pe­tent at an­swer­ing a cer­tain ques­tion have differ­ent sub­jec­tive prob­a­bil­ity dis­tri­bu­tions over an­swers to that ques­tion, those peo­ple should adopt some im­par­tial com­bi­na­tion of their sub­jec­tive prob­a­bil­ity dis­tri­bu­tions. Un­like equal weight views in philos­o­phy, my po­si­tion is meant as a set of rough prac­ti­cal guidelines rather than a set of ex­cep­tion­less and fun­da­men­tal rules. I ac­cord­ingly fo­cus on prac­ti­cal is­sues for ap­ply­ing the frame­work effec­tively and am open to limit­ing the frame­work’s scope of ap­pli­ca­tion. Philo­soph­i­cal ma­jori­tar­i­anism is the idea that on most is­sues, the av­er­age opinion of hu­man­ity as a whole will be a bet­ter guide to the truth than one’s own per­sonal judg­ment. My per­spec­tive differs from both equal weight views and philo­soph­i­cal ma­jori­tar­i­anism in that it em­pha­sizes an elite sub­set of the pop­u­la­tion rather than hu­man­ity as a whole and that it em­pha­sizes epistemic stan­dards more than in­di­vi­d­ual opinions. My per­spec­tive differs from what you might call “elite ma­jori­tar­i­anism” in that, ac­cord­ing to me, you can dis­agree with what very trust­wor­thy peo­ple think on av­er­age if you think that those peo­ple would ac­cept your views if they had ac­cess to your ev­i­dence and were try­ing to have ac­cu­rate opinions.

I am very grate­ful to Holden Karnofsky and Jonah Sinick for thought-pro­vok­ing con­ver­sa­tions on this topic which led to this post. Many of the ideas ul­ti­mately de­rive from Holden’s think­ing, but I’ve de­vel­oped them, made them some­what more pre­cise and sys­tem­atic, dis­cussed ad­di­tional con­sid­er­a­tions for and against adopt­ing them, and put ev­ery­thing in my own words. I am also grate­ful to Luke Muehlhauser and Pablo Staffor­ini for feed­back on this post.

In the rest of this post I will:

  1. Out­line the frame­work and offer guidelines for ap­ply­ing it effec­tively. I ex­plain why I fa­vor rely­ing on the epistemic stan­dards of peo­ple who are trust­wor­thy by clear in­di­ca­tors that many peo­ple would ac­cept, why I fa­vor pay­ing more at­ten­tion to what peo­ple think than why they say they think it (on the mar­gin), and why I fa­vor stress-test­ing crit­i­cal as­sump­tions by at­tempt­ing to con­vince a broad coal­i­tion of trust­wor­thy peo­ple to ac­cept them.

  2. Offer some con­sid­er­a­tions in fa­vor of us­ing the frame­work.

  3. Re­spond to the ob­jec­tion that com­mon sense is of­ten wrong, the ob­jec­tion that the most suc­cess­ful peo­ple are very un­con­ven­tional, and ob­jec­tions of the form “elite com­mon sense is wrong about X and can’t be talked out of it.”

  4. Dis­cuss some limi­ta­tions of the frame­work and some ar­eas where it might be fur­ther de­vel­oped. I sus­pect it is weak­est in cases where there is a large up­side to dis­re­gard­ing elite com­mon sense, there is lit­tle down­side, and you’ll find out whether your bet against con­ven­tional wis­dom was right within a tol­er­able time limit, and cases where peo­ple are un­will­ing to care­fully con­sider ar­gu­ments with the goal of hav­ing ac­cu­rate be­liefs.

An outline of the framework and some guidelines for applying it effectively

    My sug­ges­tion is to use elite com­mon sense as a prior rather than the stan­dards of rea­son­ing that come most nat­u­rally to you per­son­ally. The three main steps for do­ing this are:

    1. Try to find out what peo­ple who are trust­wor­thy by clear in­di­ca­tors that many peo­ple would ac­cept be­lieve about the is­sue.

    2. Iden­tify the in­for­ma­tion and anal­y­sis you can bring to bear on the is­sue.

    3. Try to find out what elite com­mon sense would make of this in­for­ma­tion and anal­y­sis, and adopt a similar per­spec­tive.

    On the first step, peo­ple of­ten have an in­stinc­tive sense of what oth­ers think, though you should be­ware the false con­sen­sus effect. If you don’t know what other opinions are out there, you can ask some friends or search the in­ter­net. In my ex­pe­rience, reg­u­lar peo­ple of­ten have similar opinions to very smart peo­ple on many is­sues, but are much worse at ar­tic­u­lat­ing con­sid­er­a­tions for and against their views. This may be be­cause many peo­ple copy the opinions of the most trust­wor­thy peo­ple.

    I fa­vor giv­ing more weight to the opinions of peo­ple who can be shown to be trust­wor­thy by clear in­di­ca­tors that many peo­ple would ac­cept, rather than peo­ple that seem trust­wor­thy to you per­son­ally. This guideline is in­tended to help avoid parochial­ism and in­crease self-skep­ti­cism. In­di­vi­d­ual peo­ple have a va­ri­ety of bi­ases and blind spots that are hard for them to rec­og­nize. Some of these bi­ases and blind spots—like the ones stud­ied in cog­ni­tive sci­ence—may af­fect al­most ev­ery­one, but oth­ers are idiosyn­cratic—like bi­ases and blind spots we in­herit from our fam­i­lies, friends, busi­ness net­works, schools, poli­ti­cal groups, and re­li­gious com­mu­ni­ties. It is plau­si­ble that com­bin­ing in­de­pen­dent per­spec­tives can help idiosyn­cratic er­rors wash out.

    In or­der for the er­rors to wash out, it is im­por­tant to rely on the stan­dards of peo­ple who are trust­wor­thy by clear in­di­ca­tors that many peo­ple would ac­cept rather than the stan­dards of peo­ple that seem trust­wor­thy to you per­son­ally. Why? The peo­ple who seem most im­pres­sive to us per­son­ally are of­ten peo­ple who have similar strengths and weak­nesses to our­selves, and similar bi­ases and blind spots. For ex­am­ple, I sus­pect that aca­demics and peo­ple who spe­cial­ize in us­ing a lot of ex­plicit rea­son­ing have a differ­ent set of strengths and weak­nesses from peo­ple who rely more on im­plicit rea­son­ing, and peo­ple who rely pri­mar­ily on many weak ar­gu­ments have a differ­ent set of strengths and weak­nesses from peo­ple who rely more on one rel­a­tively strong line of ar­gu­ment.

    Some good in­di­ca­tors of gen­eral trust­wor­thi­ness might in­clude: IQ, busi­ness suc­cess, aca­demic suc­cess, gen­er­ally re­spected sci­en­tific or other in­tel­lec­tual achieve­ments, wide ac­cep­tance as an in­tel­lec­tual au­thor­ity by cer­tain groups of peo­ple, or suc­cess in any area where there is in­tense com­pe­ti­tion and suc­cess is a func­tion of abil­ity to make ac­cu­rate pre­dic­tions and good de­ci­sions. I am less com­mit­ted to any par­tic­u­lar list of in­di­ca­tors than the gen­eral idea.

    Of course, trust­wor­thi­ness can also be do­main-spe­cific. Very of­ten, elite com­mon sense would recom­mend defer­ring to the opinions of ex­perts (e.g., listen­ing to what physi­cists say about physics, what biol­o­gists say about biol­ogy, and what doc­tors say about medicine). In other cases, elite com­mon sense may give par­tial weight to what pu­ta­tive ex­perts say with­out ac­cept­ing it all (e.g. eco­nomics and psy­chol­ogy). In other cases, they may give less weight to what pu­ta­tive ex­perts say (e.g. so­ciol­ogy and philos­o­phy). Or there may be no pu­ta­tive ex­perts on a ques­tion. In cases where elite com­mon sense gives less weight to the opinions of pu­ta­tive ex­perts or there are no plau­si­ble can­di­dates for ex­per­tise, it be­comes more rele­vant to think about what elite com­mon sense would say about a ques­tion.

    How should we as­sign weight to differ­ent groups of peo­ple? Other things be­ing equal, a larger num­ber of peo­ple is bet­ter, more trust­wor­thy peo­ple are bet­ter, peo­ple who are trust­wor­thy by clearer in­di­ca­tors that more peo­ple would ac­cept are bet­ter, and a set of crite­ria which al­lows you to have some grip on what the peo­ple in ques­tion think is bet­ter, but you have to make trade-offs. If I only in­cluded, say, the 20 smartest peo­ple I had ever met as judged by me per­son­ally, that would prob­a­bly be too small a num­ber of peo­ple, the peo­ple would prob­a­bly have bi­ases and blind spots very similar to mine, and I would miss out on some of the most trust­wor­thy peo­ple, but it would be a pretty trust­wor­thy col­lec­tion of peo­ple and I’d have some rea­son­able sense of what they would say about var­i­ous is­sues. If I went with, say, the 10 most-cited peo­ple in 10 of the most in­tel­lec­tu­ally cred­ible aca­demic dis­ci­plines, 100 of the most gen­er­ally re­spected peo­ple in busi­ness, and the 100 heads of differ­ent states, I would have a pretty large num­ber of peo­ple and a broad set of peo­ple who were very trust­wor­thy by clear stan­dards that many peo­ple would ac­cept, but I would have a hard time know­ing what they would think about var­i­ous is­sues be­cause I haven’t in­ter­acted with them enough. How these fac­tors can be traded-off against each other in a way that is prac­ti­cally most helpful prob­a­bly varies sub­stan­tially from per­son to per­son.

    I can’t give any very pre­cise an­swer to the ques­tion about whose opinions should be given sig­nifi­cant weight, even in my own case. Luck­ily, I think the out­put of this frame­work is usu­ally not very sen­si­tive to how we an­swer this ques­tion, partly be­cause most peo­ple would typ­i­cally defer to other, more trust­wor­thy peo­ple. If you want a rough guideline that I think many peo­ple who read this post could ap­ply, I would recom­mend fo­cus­ing on, say, the opinions of the top 10% of peo­ple who got Ivy-League-equiv­a­lent ed­u­ca­tions (note that I didn’t get such an ed­u­ca­tion, at least as an un­der­grad, though I think you should give weight to my opinion; I’m just giv­ing a rough guideline that I think works rea­son­ably well in prac­tice). You might give some ad­di­tional weight to more ac­com­plished peo­ple in cases where you have a grip on how they think.

    I don’t have a set­tled opinion about how to ag­gre­gate the opinions of elite com­mon sense. I sus­pect that tak­ing straight av­er­ages gives too much weight to the opinions of cranks and crack­pots, so that you may want to re­move some out­liers or give less weight to them. For the pur­pose of mak­ing de­ci­sions, I think that so­phis­ti­cated vot­ing meth­ods (such as the Con­dorcet method) and analogues of the par­li­a­men­tary ap­proaches out­lined by Nick Bostrom and Toby Ord seem fairly promis­ing as rough guidelines in the short run. I don’t do calcu­la­tions with this frame­work—as I said, it’s mostly con­cep­tual—so un­cer­tainty about an ag­gre­ga­tion pro­ce­dure hasn’t been a ma­jor is­sue for me.

    On the margin, I favor paying more attention to people’s opinions than their explicitly stated reasons for their opinions. Why? One reason is that I believe people can have highly adaptive opinions and patterns of reasoning without being able to articulate good defenses of those opinions and/or patterns of reasoning. (Luke Muehlhauser has discussed some related points here.) Another reason is that people can adopt practices that are successful without knowing why they are successful, others who interact with them can adopt those practices in turn, and so forth. I heard an extreme example of this from Spencer Greenberg, who had read it in Scientists Greater than Einstein. The story involved a folk remedy for visual impairment:

    There were folk reme­dies wor­thy of study as well. One widely used in Java on chil­dren with ei­ther night blind­ness or Bi­tot’s spots con­sisted of drop­ping the juices of lightly roasted lamb’s liver into the eyes of af­fected chil­dren. Som­mer re­lates, “We were be­mused at the ap­pro­pri­ate­ness of this tech­nique and won­dered how it could pos­si­bly be effec­tive. We, there­fore, at­tended sev­eral treat­ment ses­sions, which were con­ducted ex­actly as the villagers had de­scribed, ex­cept for one small ad­di­tion—rather than dis­card­ing the re­main­ing or­gan, they fed it to the af­fected child. For some un­known rea­son this was never con­sid­ered part of the ther­apy it­self.” Som­mer and his as­so­ci­ates were be­mused, but now un­der­stood why the folk rem­edy had per­sisted through the cen­turies. Liver, be­ing the or­gan where vi­tamin A is stored in a lamb or any other an­i­mal, is the best food to eat to ob­tain vi­tamin A. (p. 14)

    Another strik­ing ex­am­ple is bed­time prayer. In many Chris­tian tra­di­tions I am aware of, it is com­mon to pray be­fore go­ing to sleep. And in the tra­di­tion I was raised in, the main com­po­nents of prayer were list­ing things you were grate­ful for, ask­ing for for­give­ness for all the mis­takes you made that day and think­ing about what you would do to avoid similar mis­takes in the fu­ture, and ask­ing God for things. Chris­ti­ans might say the point of this is that it is a duty to God, that re­pen­tance is a re­quire­ment for en­try to heaven, or that ask­ing God for things makes God more likely to in­ter­vene and cre­ate mir­a­cles. How­ever, I think these ac­tivi­ties are rea­son­able for differ­ent rea­sons: grat­i­tude jour­nals are great, re­flect­ing on mis­takes is a great way to learn and over­come weak­nesses, and it is a good idea to get clear about what you re­ally want out of life in the short-term and the long-term.

    Another rea­son I have this view is that if some­one has an effec­tive but differ­ent in­tel­lec­tual style from you, it’s pos­si­ble that your bi­ases and blind spots will pre­vent you from ap­pre­ci­at­ing their points that have sig­nifi­cant merit. If you partly give weight to opinions in­de­pen­dently of how good the ar­gu­ments seem to you per­son­ally, this can be less of an is­sue for you. Jonah Sinick de­scribed a strik­ing rea­son this might hap­pen in Many Weak Ar­gu­ments and the Typ­i­cal Mind:

    We should pay more at­ten­tion to peo­ple’s bot­tom line than to their stated rea­sons — If most high func­tion­ing peo­ple aren’t rely­ing heav­ily on any one of the ar­gu­ments that they give, if a typ­i­cal high func­tion­ing per­son re­sponds to a query of the type “Why do you think X?” by say­ing “I be­lieve X be­cause of ar­gu­ment Y” we shouldn’t con­clude that the per­son be­lieves ar­gu­ment Y with high prob­a­bil­ity. Rather, we should as­sume that ar­gu­ment Y is one of many ar­gu­ments that they be­lieve with low con­fi­dence, most of which they’re not ex­press­ing, and we should fo­cus on their be­lief in X in­stead of ar­gu­ment Y. [em­pha­sis his]

    This idea interacts in a complementary way with Luke Muehlhauser’s claim that some people who are not skilled at explicit rationality may be skilled in tacit rationality, allowing them to be successful at making many types of important decisions. If we are interacting with such people, we should give significant weight to their opinions independently of their stated reasons.

    A coun­ter­point to my claim that, on the mar­gin, we should give more weight to oth­ers’ con­clu­sions and less to their rea­son­ing is that some very im­pres­sive peo­ple dis­agree. For ex­am­ple, Ray Dalio is the founder of Bridge­wa­ter, which, at least as of 2011, was the world’s largest hedge fund. He ex­plic­itly dis­agrees with my claim:

    “I stress-tested my opinions by hav­ing the smartest peo­ple I could find challenge them so I could find out where I was wrong. I never cared much about oth­ers’ con­clu­sions—only for the rea­son­ing that led to these con­clu­sions. That rea­son­ing had to make sense to me. Through this pro­cess, I im­proved my chances of be­ing right, and I learned a lot from a lot of great peo­ple.” (p. 7 of Prin­ci­ples by Ray Dalio)

    I suspect that getting the reasoning to make sense to him was important because it helped him to get better in touch with elite common sense, and also because reasoning is more important when dealing with very formidable people, as I suspect Dalio did and does. I also think that for some of the highest functioning people who are most in touch with elite common sense, it may make more sense to give more weight to reasoning than conclusions.

    The elite com­mon sense frame­work fa­vors test­ing un­con­ven­tional views by see­ing if you can con­vince a broad coal­i­tion of im­pres­sive peo­ple that your views are true. If you can do this, it is of­ten good ev­i­dence that your views are sup­ported by elite com­mon sense stan­dards. If you can’t, it’s of­ten good ev­i­dence that your views can’t be so sup­ported. Ob­vi­ously, these are rules of thumb and we should re­strict our at­ten­tion to cases where you are per­suad­ing peo­ple by ra­tio­nal means, in con­trast with us­ing rhetor­i­cal tech­niques that ex­ploit hu­man bi­ases. There are also some in­ter­est­ing cases where, for one rea­son or an­other, peo­ple are un­will­ing to hear your case or think about your case ra­tio­nally, and ap­ply­ing this guideline to these cases is tricky.

    Im­por­tantly, I don’t think cases where elite com­mon sense is bi­ased are typ­i­cally an ex­cep­tion to this rule. In my ex­pe­rience, I have very lit­tle difficulty con­vinc­ing peo­ple that some gen­uine bias, such as scope in­sen­si­tivity, re­ally is bi­as­ing their judg­ment. And if the bias re­ally is crit­i­cal to the dis­agree­ment, I think it will be a case where you can con­vince elite com­mon sense of your po­si­tion. Other cases, such as deeply en­trenched re­li­gious and poli­ti­cal views, may be more of an ex­cep­tion, and I will dis­cuss the case of re­li­gious views more in a later sec­tion.

    The dis­tinc­tion be­tween con­vinc­ing and “beat­ing in an ar­gu­ment” is im­por­tant for ap­ply­ing this prin­ci­ple. It is much eas­ier to tell whether you con­vinced some­one than it is to tell whether you beat them in an ar­gu­ment. Often, both par­ties think they won. In ad­di­tion, some­times it is ra­tio­nal not to up­date much in fa­vor of a view if an ad­vo­cate for that view beats you in an ar­gu­ment.

    In support of this claim, consider what would happen if the world’s smartest creationist debated some fairly ordinary evolution-believing high school student. The student would be destroyed in argument, but the student should not reject evolution, and I suspect he should hardly update at all. Why not? The student should know that there are people out there in the world who could destroy him on either side of this argument, and his personal ability to respond to arguments is not very relevant. What should be most relevant to this student is the distribution of opinion among people who are most trustworthy, not his personal response to a small sample of the available evidence. Even if you genuinely are beating people in arguments, there is a risk that you will be like this creationist debater.

    An ad­di­tional con­sid­er­a­tion is that cer­tain be­liefs and prac­tices may be rea­son­able and adopted for rea­sons that are not ac­cessible to peo­ple who have adopted those be­liefs and prac­tices, as illus­trated with the ex­am­ples of the liver rit­ual and bed­time prayer. You might be able to “beat” some Chris­tian in an ar­gu­ment about the mer­its of bed­time prayer, but pray­ing may still be bet­ter than not pray­ing. (I think it would be bet­ter still to in­tro­duce a differ­ent rou­tine that serves similar func­tions—this is some­thing I have done in my own life—but the Chris­tian may be do­ing bet­ter than you on this is­sue if you don’t have a re­place­ment rou­tine your­self.)

    Under the elite common sense framework, the question is not “how reliable is elite common sense?” but “how reliable is elite common sense compared to me?” Suppose I learn that, actually, people are much worse at pricing derivatives than I previously believed. For the sake of argument suppose this was a lesson of the 2008 financial crisis (for the purposes of this argument, it doesn’t matter whether this is actually a correct lesson of the crisis). This information does not favor relying more on my own judgment unless I have reason to think that the bias applies less to me than the rest of the derivatives market. By analogy, it is not acceptable to say, “People are really bad at thinking about philosophy. So I am going to give less weight to their judgments about philosophy (psst…and more weight to my personal hunches and the hunches of people I personally find impressive).” This is only OK if you have evidence that your personal hunches and the hunches of the people you personally find impressive are better than elite common sense, with respect to philosophy. In contrast, it might be acceptable to say, “People are very bad at thinking about the consequences of agricultural subsidies in comparison with economists, and most trustworthy people would agree with this if they had my evidence. And I have an unusual amount of information about what economists think. So my opinion gets more weight than elite common sense in this case.” Whether this ultimately is acceptable to say would depend on how good elites are at thinking about the consequences of agricultural subsidies—I suspect they are actually pretty good at it—but this isn’t relevant to the general point that I’m making. The general point is that this is one potentially correct form of an argument that your opinion is better than the current stance of elite common sense.

    This is partly a se­man­tic is­sue, but I count the above ex­am­ple as a case where “you are more re­li­able than elite com­mon sense,” even though, in some sense, you are rely­ing on ex­pert opinion rather than your own. But you have differ­ent be­liefs about who is a rele­vant ex­pert or what ex­perts say than com­mon sense does, and in this sense you are rely­ing on your own opinion.

    I favor giving more weight to common sense judgments in cases where people are trying to have accurate views. For example, I think people don’t try very hard to have correct political, religious, and philosophical views, but they do try to have correct views about how to do their job properly, how to keep their families happy, and how to impress their friends. In general, I expect people to try to have more accurate views in cases where it is in their present interests to have more accurate views. (A quick reference for this point is here.) This means that I expect them to strive more for accuracy in decision-relevant cases, cases where the cost of being wrong is high, and cases where striving for more accuracy can be expected to yield more accuracy, though not necessarily in cases where the risks and rewards won’t come for a very long time. I suspect this is part of what explains why people can be skilled in tacit rationality but not explicit rationality.

    As I said above, what’s crit­i­cal is not how re­li­able elite com­mon sense is but how re­li­able you are in com­par­i­son with elite com­mon sense. So it only makes sense to give more weight to your views when learn­ing that oth­ers aren’t try­ing to be cor­rect if you have com­pel­ling ev­i­dence that you are try­ing to be cor­rect. Ideally, this ev­i­dence would be com­pel­ling to a broad class of trust­wor­thy peo­ple and not just com­pel­ling to you per­son­ally.

Some further reasons to think that the framework is likely to be helpful

    In ex­plain­ing the frame­work and out­lin­ing guidelines for ap­ply­ing it, I have given some rea­sons to ex­pect this frame­work to be helpful. Here are some more weak ar­gu­ments in fa­vor of my view:

    1. Some studies I haven’t personally reviewed closely claim that combinations of expert forecasts are hard to beat. For instance, a review by Clemen (1989) found that: “Considerable literature has accumulated over the years regarding the combination of forecasts. The primary conclusion of this line of research is that forecast accuracy can be substantially improved through the combination of multiple individual forecasts.” (abstract) And recent work by the Good Judgment Project found that taking an average of individual forecasts and transforming it away from .5 credence gave the lowest errors of a variety of different methods of aggregating judgments of forecasters (p. 42). (A small sketch of this kind of “extremizing” transformation appears after this list.)

    2. There are plau­si­ble philo­soph­i­cal con­sid­er­a­tions sug­gest­ing that, ab­sent spe­cial ev­i­dence, there is no com­pel­ling rea­son to fa­vor your own epistemic stan­dards over the epistemic stan­dards that oth­ers use.

    3. In prac­tice, we are ex­tremely re­li­ant on con­ven­tional wis­dom for al­most ev­ery­thing we be­lieve that isn’t very closely re­lated to our per­sonal ex­pe­rience, and sin­gle in­di­vi­d­u­als work­ing in iso­la­tion have ex­tremely limited abil­ity to ma­nipu­late their en­vi­ron­ment in com­par­i­son with in­di­vi­d­u­als who can build on the in­sights of oth­ers. To see this point, con­sider that a small group of very in­tel­li­gent hu­mans de­tached from all cul­tures wouldn’t have much of an ad­van­tage at all over other an­i­mal species in com­pe­ti­tion for re­sources, but hu­mans are in­creas­ingly dom­i­nat­ing the bio­sphere. A great deal of this must be chalked up to cul­tural ac­cu­mu­la­tion of highly adap­tive con­cepts, ideas, and pro­ce­dures that no in­di­vi­d­ual could de­velop on their own. I see try­ing to rely on elite com­mon sense as highly con­tin­u­ous with this suc­cess­ful en­deavor.

    4. Highly adap­tive prac­tices and as­sump­tions are more likely to get copied and spread, and these prac­tices and as­sump­tions of­ten work be­cause they help you to be right. If you use elite com­mon sense as a prior, you’ll be more likely to be work­ing with more adap­tive prac­tices and as­sump­tions.

    5. Some suc­cess­ful pro­cesses for find­ing valuable in­for­ma­tion, such as PageRank and Quora, seem analo­gous to the frame­work I have out­lined. PageRank is one al­gorithm that Google uses to de­cide how high differ­ent pages should be in searches, which is im­plic­itly a way of rank­ing high-qual­ity in­for­ma­tion. I’m speak­ing about some­thing I don’t know very well, but my rough un­der­stand­ing is that PageRank gives pages more votes when more pages link to them, and votes from a page get more weight if that page it­self has a lot of votes. This seems analo­gous to rely­ing on elite com­mon sense be­cause in­for­ma­tion sources are fa­vored when they are re­garded as high qual­ity by a broad coal­i­tion of other in­for­ma­tion sources. Quora seems analo­gous be­cause it fa­vors an­swers to ques­tions that many peo­ple re­gard as good.

    6. I’m go­ing to go look at the first three ques­tions I can find on Quora. I pre­dict that I would pre­fer the an­swers that elite com­mon sense would give to these ques­tions to what or­di­nary com­mon sense would say, and also that I would pre­fer elite com­mon sense’s an­swers to these ques­tions to my own ex­cept in cases where I have strong in­side in­for­ma­tion/​anal­y­sis. Re­sults: 1st ques­tion: weakly pre­fer elite com­mon sense, don’t have much spe­cial in­for­ma­tion. 2nd ques­tion: pre­fer elite com­mon sense, don’t have much spe­cial in­for­ma­tion. 3rd ques­tion: pre­fer elite com­mon sense, don’t have much spe­cial in­for­ma­tion. Note that I skipped a ques­tion be­cause it was a mat­ter of taste. This went es­sen­tially the way I pre­dicted it to go.

    7. The type of mathematical considerations underlying Condorcet’s Jury Theorem gives us some reason to think that combined opinions are often more reliable than individual opinions, even though the assumptions underlying this theorem are far from totally correct. (A worked numerical sketch appears after this list.)

    8. There’s a gen­eral cluster of so­cial sci­ence find­ings that goes un­der the head­ing “wis­dom of crowds” and sug­gests that ag­gre­gat­ing opinions across peo­ple out­performs in­di­vi­d­ual opinions in many con­texts.

    9. Some rough “marketplace of ideas” arguments suggest that the best ideas will often become part of elite common sense. When claims are decision-relevant, people pay if they have dumb beliefs and benefit if they have smart beliefs. When claims aren’t decision-relevant, people sometimes pay a social cost for saying dumb things and get social benefits for saying things that are smarter, and the people with more information have more incentive to speak. For analogous reasons, when people use and promote epistemic standards that are dumb, they pay costs, and when they use and promote epistemic standards that are smart, they reap benefits. Obviously there are many other factors, including ones that point in different directions, but there is some kind of positive force here.
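Regarding point 1 above, here is a minimal sketch of averaging binary-event forecasts and then transforming the average away from .5. The specific transform (raising the odds to a power greater than one) is one common way to do this and is my own illustrative choice; the cited work does not commit us to this formula, and the forecasts are made up.

```python
def extremize(p: float, a: float = 2.0) -> float:
    """Push a probability away from 0.5 by raising its odds to the power a (a > 1)."""
    return p**a / (p**a + (1 - p)**a)

# Hypothetical forecasts from several forecasters for one binary event.
forecasts = [0.6, 0.7, 0.65, 0.55, 0.75]

average = sum(forecasts) / len(forecasts)
print(round(average, 3))             # 0.65
print(round(extremize(average), 3))  # 0.775 -- the average, pushed away from 0.5
```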
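Regarding point 7 above, here is a worked numerical sketch of the idea behind Condorcet’s Jury Theorem: if each voter is independently correct with probability somewhat above one half, the chance that the majority is correct rises as the group grows. The independence and equal-competence assumptions are, as noted above, far from totally correct, and the numbers are purely illustrative.

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a majority of n independent voters, each correct with
    probability p, reaches the right answer (n odd, so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(n, round(majority_correct(n, p=0.6), 3))
# Prints roughly: 1 -> 0.6, 11 -> 0.753, 101 -> 0.98
```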

Cases where people often don't follow the framework but I think they should

    I have seen a va­ri­ety of cases where I be­lieve peo­ple don’t fol­low the prin­ci­ples I ad­vo­cate. There are cer­tain types of er­rors that I think many or­di­nary peo­ple make and oth­ers that are more com­mon for so­phis­ti­cated peo­ple to make. Most of these boil down to giv­ing too much weight to per­sonal judg­ments, giv­ing too much weight to peo­ple who are im­pres­sive to you per­son­ally but not im­pres­sive by clear and un­con­tro­ver­sial stan­dards, or not putting enough weight on what elite com­mon sense has to say.

    Giv­ing too much weight to the opinions of peo­ple like you: Peo­ple tend to hold re­li­gious views and poli­ti­cal views that are similar to the views of their par­ents. Many of these peo­ple prob­a­bly aren’t try­ing to have ac­cu­rate views. And the situ­a­tion would be much bet­ter if peo­ple gave more weight to the ag­gre­gated opinion of a broader coal­i­tion of per­spec­tives.

    I think a differ­ent prob­lem arises in the LessWrong and effec­tive al­tru­ism com­mu­ni­ties. In this case, peo­ple are much more re­flec­tively choos­ing which sets of peo­ple to get their be­liefs from, and I be­lieve they are get­ting be­liefs from some pretty good peo­ple. How­ever, tak­ing an out­side per­spec­tive, it seems over­whelm­ingly likely that these com­mu­ni­ties are sub­ject to their own bi­ases and blind spots, and the peo­ple who are most at­tracted to these com­mu­ni­ties are most likely to suffer from the same bi­ases and blind spots. I sus­pect elite com­mon sense would take these com­mu­ni­ties more se­ri­ously than it cur­rently does if it had ac­cess to more in­for­ma­tion about the com­mu­ni­ties, but I don’t think it would take us suffi­ciently se­ri­ously to jus­tify hav­ing high con­fi­dence in many of our more un­usual views.

    Be­ing over­con­fi­dent on open ques­tions where we don’t have a lot of ev­i­dence to work with: In my ex­pe­rience, it is com­mon to give lit­tle weight to com­mon sense takes on ques­tions about which there is no gen­er­ally ac­cepted an­swer, even when it is im­pos­si­ble to use com­mon­sense rea­son­ing to ar­rive at con­clu­sions that get broad sup­port. Some less so­phis­ti­cated peo­ple seem to see this as a li­cense to think what­ever they want, as Paul Gra­ham has com­mented in the case of poli­tics and re­li­gion. I meet many more so­phis­ti­cated peo­ple with un­usual views about big pic­ture philo­soph­i­cal, poli­ti­cal, and eco­nomic ques­tions in ar­eas where they have very limited in­side in­for­ma­tion and very limited in­for­ma­tion about the dis­tri­bu­tion of ex­pert opinion. For ex­am­ple, I have now met a rea­son­ably large num­ber of non-ex­perts who have very con­fi­dent, de­tailed, un­usual opinions about meta-ethics, liber­tar­i­anism, and op­ti­mal meth­ods of tax­a­tion. When I challenge peo­ple about this, I usu­ally get some ver­sion of “peo­ple are not good at think­ing about this ques­tion” but rarely a de­tailed ex­pla­na­tion of why this per­son in par­tic­u­lar is an ex­cep­tion to this gen­er­al­iza­tion (more on this prob­lem be­low).

    There’s an in­verse ver­sion of this prob­lem where peo­ple try to “sus­pend judg­ment” on ques­tions where they don’t have high-qual­ity ev­i­dence, but ac­tu­ally end up tak­ing very un­usual stances with­out ad­e­quate jus­tifi­ca­tion. For ex­am­ple, I some­times talk with peo­ple who say that im­prov­ing the very long-term fu­ture would be over­whelm­ingly im­por­tant if we could do it, but are skep­ti­cal about whether we can. In re­sponse, I some­times run ar­gu­ments of the form:

    1. In ex­pec­ta­tion, it is pos­si­ble to im­prove broad fea­ture X of the world (ed­u­ca­tion, gov­er­nance qual­ity, effec­tive­ness of the sci­en­tific com­mu­nity, eco­nomic pros­per­ity).

    2. If we im­prove fea­ture X, it will help fu­ture peo­ple deal with var­i­ous big challenges and op­por­tu­ni­ties bet­ter in ex­pec­ta­tion.

    3. If peo­ple deal with these challenges and op­por­tu­ni­ties bet­ter in ex­pec­ta­tion, the fu­ture will be bet­ter in ex­pec­ta­tion.

    4. There­fore, it is pos­si­ble to make the fu­ture bet­ter in ex­pec­ta­tion.

    I’ve pre­sented some pre­limi­nary thoughts on re­lated is­sues here. Some peo­ple try to re­sist this ar­gu­ment on grounds of gen­eral skep­ti­cism about at­tempts at im­prov­ing the world that haven’t been doc­u­mented with high-qual­ity ev­i­dence. Peter Hur­ford’s post on “spec­u­la­tive causes” is the clos­est ex­am­ple that I can point to on­line, though I’m not sure whether he still dis­agrees with me on this point. I be­lieve that there can be some ad­just­ment in the di­rec­tion of skep­ti­cism in light of ar­gu­ments that GiveWell has ar­tic­u­lated here un­der “we are rel­a­tively skep­ti­cal,” but I con­sider re­ject­ing the sec­ond premise on these grounds a sig­nifi­cant de­par­ture from elite com­mon sense. I would have a similar view about any­one who re­jected any of the other premises—at least if they re­jected them for all val­ues of X—for such rea­sons. It’s not that I think the pre­sump­tion in fa­vor of elite com­mon sense can’t be over­come—I strongly fa­vor think­ing about such ques­tions more care­fully and am open to chang­ing my mind—it’s just that I don’t think it can be over­come by these types of skep­ti­cal con­sid­er­a­tions. Why not? Th­ese types of con­sid­er­a­tions seem like they could make the prob­a­bil­ity dis­tri­bu­tion over im­pact on the very long-term nar­rower, but I don’t see how they could put it tightly around zero. And in any case, GiveWell ar­tic­u­lates other con­sid­er­a­tions in that post and other posts which point in fa­vor of less skep­ti­cism about the sec­ond premise.

    Part of the issue may be confusion about “rejecting” a premise and “suspending judgment.” In my view, the question is “What are the expected long-term effects of improving factor X?” You can try not to think about this question or say “I don’t know,” but when you make decisions you are implicitly committed to certain ranges of expected values on these questions. To justifiably ignore very long-term considerations, I think you probably need your implicit range to be close to zero. I often see people who say they are “suspending judgment” about these issues or who say they “don’t know” acting as if this range were very close to zero. I see this as a very strong, precise claim which is contrary to elite common sense, rather than an open-minded, “we’ll wait until the evidence comes in” type of view to have. Another way to put it is that my claim that improving some broad factor X has good long-run consequences is much more of an anti-prediction than the claim that its expected effects are close to zero. (Independent point: I think that a more compelling argument than the argument that we can’t affect the far future is the argument that lots of ordinary actions have flow-through effects with astronomical expected impacts if anything does, so that people aiming explicitly at reducing astronomical waste are less privileged than one might think at first glance. I hope to write more about this issue in the future.)

    Put­ting too much weight on your own opinions be­cause you have bet­ter ar­gu­ments on top­ics that in­ter­est you than other peo­ple, or the peo­ple you typ­i­cally talk to: As men­tioned above, I be­lieve that some smart peo­ple, es­pe­cially smart peo­ple who rely a lot on ex­plicit rea­son­ing, can be­come very good at de­vel­op­ing strong ar­gu­ments for their opinions with­out be­ing very good at find­ing true be­liefs. I think that in such in­stances, these peo­ple will gen­er­ally not be very suc­cess­ful at get­ting a broad coal­i­tion of im­pres­sive peo­ple to ac­cept their views (ex­cept per­haps by rely­ing on non-ra­tio­nal meth­ods of per­sua­sion). Stress-test­ing your views by try­ing to ac­tu­ally con­vince oth­ers of your opinions, rather than just out-ar­gu­ing them, can help you avoid this trap.

    Put­ting too much weight on the opinions of sin­gle in­di­vi­d­u­als who seem trust­wor­thy to you per­son­ally but not to peo­ple in gen­eral, and have very un­usual views: I have seen some peo­ple up­date sig­nifi­cantly in fa­vor of very un­usual philo­soph­i­cal, sci­en­tific, and so­ciolog­i­cal claims when they en­counter very in­tel­li­gent ad­vo­cates of these views. Th­ese peo­ple are of­ten fa­mil­iar with Au­mann’s agree­ment the­o­rem and ar­gu­ments for split­ting the differ­ence with epistemic peers, and they are rightly trou­bled by the fact that some­one fairly similar to them dis­agrees with them on an is­sue, so they try to cor­rect for their own po­ten­tial failures of ra­tio­nal­ity by giv­ing ad­di­tional weight to the ad­vo­cates of these very un­usual views.

    How­ever, I be­lieve that tak­ing dis­agree­ment se­ri­ously fa­vors giv­ing these very un­usual views less weight, not more. The prob­lem partly arises be­cause philo­soph­i­cal dis­cus­sion of dis­agree­ment of­ten fo­cuses on the sim­ple case of two peo­ple shar­ing their ev­i­dence and opinions with each other. But what’s more rele­vant is the dis­tri­bu­tion of qual­ity-weighted opinion around the world in gen­eral, not the dis­tri­bu­tion of qual­ity-weighted opinion of the peo­ple that you have had dis­cus­sions with, and not the dis­tri­bu­tion of qual­ity-weighted opinion of the peo­ple that seem trust­wor­thy to you per­son­ally. The epistem­i­cally mod­est move here is to try to stay closer to elite com­mon sense, not to split the differ­ence.

Objections to this approach

Objection: elite common sense is often wrong

    One objection I often hear is that elite common sense is often wrong. I believe this is true, but not a problem for my framework. I make the comparative claim that elite common sense is more trustworthy than the idiosyncratic standards of the vast majority of individual people, not the claim that elite common sense is almost always right. A further consideration is that analogous objections to analogous views fail. For instance, “markets are often wrong in their valuation of assets” is not a good objection to the efficient markets hypothesis. As explained above, the argument that “markets are often wrong” needs to point to a specific way in which one can do better than the market in order for it to make sense to place less weight on what the market says than on one’s own judgments.

Objection: the best people are highly unconventional

    Another objection I sometimes hear is that the most successful people often pay the least attention to conventional wisdom. I think this is true, but not a problem for my framework. One reason I believe this is that, according to my framework, when you go against elite common sense, what matters is whether elite common sense reasoning standards would justify your opinion if someone following those standards knew about your background, information, and analysis. Though I can’t prove it, I suspect that the most successful people often depart from elite common sense in ways that elite common sense would endorse if it had access to more information. I also believe that the most successful people tend to pay attention to elite common sense in many areas, and specifically bet against elite common sense in areas where they are most likely to be right.

    A sec­ond con­sid­er­a­tion is that go­ing against elite com­mon sense may be a high-risk strat­egy, so that it is un­sur­pris­ing if we see the most suc­cess­ful peo­ple pur­su­ing it. Peo­ple who give less weight to elite com­mon sense are more likely to spend their time on pointless ac­tivi­ties, join cults, and be­come crack­pots, though they are also more likely to have rev­olu­tion­ary pos­i­tive im­pacts. Con­sider an anal­ogy: it may be that the gam­blers who earned the most used the riskiest strate­gies, but this is not good ev­i­dence that you should use a risky strat­egy when gam­bling be­cause the peo­ple who lost the most also played risky strate­gies.

    A third con­sid­er­a­tion is that while it may be un­rea­son­able to be too much of an in­de­pen­dent thinker in a par­tic­u­lar case, be­ing an in­de­pen­dent thinker helps you de­velop good epistemic habits. I think this point has a lot of merit, and could help ex­plain why in­de­pen­dent think­ing is more com­mon among the most suc­cess­ful peo­ple. This might seem like a good rea­son not to pay much at­ten­tion to elite com­mon sense. How­ever, it seems to me that you can get the best of both wor­lds by be­ing an in­de­pen­dent thinker and keep­ing sep­a­rate track of your own im­pres­sions and what elite com­mon sense would make of your ev­i­dence. Where con­flicts come up, you can try to use elite com­mon sense to guide your de­ci­sions.

    I feel my view is weak­est in cases where there is a strong up­side to dis­re­gard­ing elite com­mon sense, there is lit­tle down­side, and you’ll find out whether your bet against con­ven­tional wis­dom was right within a tol­er­able time limit. Per­haps many crazy-sound­ing en­trepreneurial ideas and sci­en­tific hy­pothe­ses fit this de­scrip­tion. I be­lieve it may make sense to pick a rel­a­tively small num­ber of these to bet on, even in cases where you can’t con­vince elite com­mon sense that you are on the right track. But I also be­lieve that in cases where you re­ally do have a great but un­con­ven­tional idea, it will be pos­si­ble to con­vince a rea­son­able chunk of elite com­mon sense that your idea is worth try­ing out.

Objection: elite common sense is wrong about X, and can't be talked out of it, so your framework should be rejected in general

    Another common objection takes the form: view X is true, but X is not a view which elite common sense would give much weight to. Eliezer makes a related argument here, though he is addressing a different kind of deference to common sense. He points to religious beliefs, beliefs about diet, and the rejection of cryonics as evidence that you shouldn’t just follow what the majority believes. My position is closer to “follow the majority’s epistemic standards” than “believe what the majority believes,” and closer still to “follow the best people’s epistemic standards without cherry picking “best” to suit your biases,” but objections of this form could have some force against the framework I have defended.

    A first re­sponse is that un­less one thinks there are many val­ues of X in differ­ent ar­eas where my frame­work fails, pro­vid­ing a few coun­terex­am­ples is not very strong ev­i­dence that the frame­work isn’t helpful in many cases. This is a gen­eral is­sue in philos­o­phy which I think is un­der­ap­pre­ci­ated, and I’ve made re­lated ar­gu­ments in chap­ter 2 of my dis­ser­ta­tion. I think the most likely out­come of a care­ful ver­sion of this at­tack on my frame­work is that we iden­tify some ar­eas where the frame­work doesn’t ap­ply or has to be qual­ified.

    But let’s delve into the question about religion in greater detail. Yes, having some religious beliefs is generally more popular than being an atheist, and it would be hard to convince intelligent religious people to become atheists. However, my impression is that my framework does not recommend believing in God. Here are a number of weak arguments for this claim:

    1. My im­pres­sion is that the peo­ple who are most trust­wor­thy by clear and gen­er­ally ac­cepted stan­dards are sig­nifi­cantly more likely to be athe­ists than the gen­eral pop­u­la­tion. One illus­tra­tion of my per­spec­tive is that in a 1998 sur­vey of the Na­tional Academy of Sciences, only 7% of re­spon­dents re­ported that they be­lieved in God. How­ever, there is a flame war and peo­ple have pushed many ar­gu­ments on this is­sue, and sci­en­tists are prob­a­bly un­rep­re­sen­ta­tive of many trust­wor­thy peo­ple in this re­spect.

    2. While the world at large has broad agree­ment that some kind of higher power ex­ists, there is very sub­stan­tial dis­agree­ment about what this means, to the point where it isn’t clear that these peo­ple are talk­ing about the same thing.

    3. In my ex­pe­rience, peo­ple gen­er­ally do not try very hard to have ac­cu­rate be­liefs about re­li­gious ques­tions and have lit­tle pa­tience for peo­ple who want to care­fully dis­cuss ar­gu­ments about re­li­gious ques­tions at length. This makes it hard to stress-test one’s views about re­li­gion by try­ing to get a broad coal­i­tion of im­pres­sive peo­ple to ac­cept athe­ism, and makes it pos­si­ble to give more weight to one’s per­sonal take if one has thought un­usu­ally care­fully about re­li­gious ques­tions.

    4. Peo­ple are gen­er­ally raised in re­li­gious fam­i­lies, and there are sub­stan­tial so­cial in­cen­tives to re­main re­li­gious. So­cial in­cen­tives for athe­ists to re­main non-re­li­gious gen­er­ally seem weaker, though they can also be sub­stan­tial. For ex­am­ple, given my cur­rent so­cial net­work, I be­lieve I would pay a sig­nifi­cant cost if I wanted to be­come re­li­gious.

    5. De­spite the above point, in my ex­pe­rience, it is much more com­mon for re­li­gious peo­ple to be­come athe­ists than it is for athe­ists to be­come re­li­gious.

    6. In my ex­pe­rience, among peo­ple who try very hard to have ac­cu­rate be­liefs about whether God ex­ists, athe­ism is sig­nifi­cantly more com­mon than be­lief in God.

    7. In my ex­pe­rience, the most im­pres­sive peo­ple who are re­li­gious tend not to be­have much differ­ently from athe­ists or have differ­ent takes on sci­en­tific ques­tions/​ques­tions about the fu­ture.

    Th­ese points rely a lot on my per­sonal ex­pe­rience, could stand to be re­searched more care­fully, and feel un­com­fortably close to lousy con­trar­ian ex­cuses, but I think they are nev­er­the­less sug­ges­tive. In light of these points, I think my frame­work recom­mends that the vast ma­jor­ity of peo­ple with re­li­gious be­liefs should be sub­stan­tially less con­fi­dent in their views, recom­mends mod­esty for athe­ists who haven’t tried very hard to be right, and I sus­pect it al­lows rea­son­ably high con­fi­dence that God doesn’t ex­ist for peo­ple who have strong in­di­ca­tors that they have thought care­fully about the is­sue. I think it would be bet­ter if I saw a clear and prin­ci­pled way for the frame­work to push more strongly in the di­rec­tion of athe­ism, but the case has enough un­usual fea­tures that I don’t see this as a ma­jor ar­gu­ment against the gen­eral helpful­ness of the frame­work.

    As a more gen­eral point, the frame­work seems less helpful in the case of re­li­gion and poli­tics be­cause peo­ple are gen­er­ally un­will­ing to care­fully con­sider ar­gu­ments with the goal of hav­ing ac­cu­rate be­liefs. By and large, when peo­ple are un­will­ing to care­fully con­sider ar­gu­ments with the goal of hav­ing ac­cu­rate be­liefs, this is ev­i­dence that it is not use­ful to try to think care­fully about this area. This fol­lows from the idea men­tioned above that peo­ple tend to try to have ac­cu­rate views when it is in their pre­sent in­ter­ests to have ac­cu­rate views. So if this is the main way the frame­work breaks down, then the frame­work is mostly break­ing down in cases where good episte­mol­ogy is rel­a­tively unim­por­tant.

Conclusion

    I’ve out­lined a frame­work for tak­ing ac­count of the dis­tri­bu­tion of opinions and epistemic stan­dards in the world and dis­cussed some of its strengths and weak­nesses. I think the largest strengths of the frame­work are that it can help you avoid fal­ling prey to idiosyn­cratic per­sonal bi­ases, and that us­ing it de­rives benefits from the “wis­dom of crowds” effects. The frame­work is less helpful in:

    1. cases where there is a large up­side to dis­re­gard­ing elite com­mon sense, there is lit­tle down­side, and you’ll find out whether your bet against con­ven­tional wis­dom was right within a tol­er­able time limit, and

    2. cases where peo­ple are un­will­ing to care­fully con­sider ar­gu­ments with the goal of hav­ing ac­cu­rate be­liefs.

    Some ques­tions for peo­ple who want to fur­ther de­velop the frame­work in­clude:

    1. How sen­si­tive is the frame­work to other rea­son­able choices of stan­dards for se­lect­ing trust­wor­thy peo­ple? Are there more helpful stan­dards to use?

    2. How sen­si­tive is the frame­work to rea­son­able choices of stan­dards for ag­gre­gat­ing opinions of trust­wor­thy peo­ple?

    3. What are the best ways of get­ting a bet­ter grip on elite com­mon sense?

    4. What other ar­eas are there where the frame­work is par­tic­u­larly weak or par­tic­u­larly strong?

    5. Can the frame­work be de­vel­oped in ways that make it more helpful in cases where it is weak­est?