Is my view contrarian?

Previously: Contrarian Excuses, The Correct Contrarian Cluster, What is bunk?, Common Sense as a Prior, Trusting Expert Consensus, Prefer Contrarian Questions.

Robin Hanson once wrote:

On average, contrarian views are less accurate than standard views. Honest contrarians should admit this, that neutral outsiders should assign most contrarian views a lower probability than standard views, though perhaps a high enough probability to warrant further investigation. Honest contrarians who expect reasonable outsiders to give their contrarian view more than normal credence should point to strong outside indicators that correlate enough with contrarians tending more to be right.
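
To make Hanson's point concrete, here is a toy Bayesian illustration; the numbers and the "outside indicator" are mine and purely hypothetical, not Hanson's. Suppose a neutral outsider assigns a given contrarian view a prior probability of 0.1, and the contrarian can point to an outside indicator (say, a strong public forecasting track record) that is five times as likely among contrarians who turn out to be right. Then the outsider's odds update as

$$
\frac{P(\text{correct} \mid \text{indicator})}{P(\text{wrong} \mid \text{indicator})}
= \frac{0.1}{0.9} \times 5 \approx 0.56,
\qquad \text{so} \qquad P(\text{correct} \mid \text{indicator}) \approx 0.36.
$$

The indicator raises the outsider's credence well above the base rate for contrarian views, though still not past the standard view unless the likelihood ratio is considerably larger.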

I tend to think through the issue in three stages:

  1. When should I consider myself to be holding a contrarian[1] view? What is the relevant expert community?

  2. If I seem to hold a contrarian view, when do I have enough reason to think I’m correct?

  3. If I seem to hold a correct contrarian view, what can I do to give other people good reasons to accept my view, or at least to take it seriously enough to examine it at length?

I don’t yet feel that I have “answers” to these questions, but in this post (and hopefully some future posts) I’d like to organize some of what has been said before,[2] and push things a bit further along, in the hope that further discussion and inquiry will contribute toward significant progress in social epistemology.[3] Basically, I hope to say a bunch of obvious things, in a relatively well-organized fashion, so that less obvious things can be said from there.[4]

In this post, I’ll just address stage 1. Hopefully I’ll have time to revisit stages 2 and 3 in future posts.

Is my view contrarian?

World model differences vs. value differences

Is my effective altruism a contrarian view? It seems to be more of a contrarian value judgment than a contrarian world model,[5] and by “contrarian view” I tend to mean “contrarian world model.” Some apparently contrarian views are probably actually contrarian values.

Expert consensus

Is my atheism a contrarian view? It’s definitely a world model, not a value judgment, and only 2% of people are atheists.

But what’s the relevant expert population here? Suppose it’s “academics who specialize in the arguments and evidence concerning whether a god or gods exist.” If so, then the expert population is probably dominated by academic theologians and religious philosophers, and my atheism is a contrarian view.

We need some heuristics for evaluating the soundness of the academic consensus in different fields.[6]

For example, we should consider the selection effects operating on communities of experts. If someone doesn’t believe in God, they’re unlikely to spend their career studying arcane arguments for and against God’s existence. So most people who specialize in this topic are theists, but nearly all of them were theists before they knew the arguments.

Perhaps instead the relevant expert community is “scholars who study the fundamental nature of the universe” — maybe philosophers and physicists? They’re mostly atheists.[7] This is starting to get pretty ad hoc, but maybe that’s unavoidable.

What about my view that the overall long-term impact of AGI will be, most likely, extremely bad? A recent survey of the top 100 authors in artificial intelligence (by citation index)[8] suggests that my view is somewhat out of sync with the views of those researchers.[9] But is that the relevant expert population? My impression is that AI experts know a lot about contemporary AI methods, especially within their subfield, but usually haven’t thought much about, or read much about, long-term AI impacts.

Instead, perhaps I’d need to survey “AGI impact experts” to tell whether my view is contrarian. But who is that, exactly? There’s no standard credential.

Moreover, the most plausible candidates around today for “AGI impact experts” are — like the “experts” of many other fields — mere “scholastic experts,” in that they[10] know a lot about the arguments and evidence typically brought to bear on questions of long-term AI outcomes.[11] They generally are not experts in the sense of “reliably superior performance on representative tasks” — they don’t have uniquely good track records on predicting long-term AI outcomes, for example. As far as I know, they don’t even have uniquely good track records on predicting short-term geopolitical or sci-tech outcomes — e.g. they aren’t among the “super forecasters” discovered in IARPA’s forecasting tournaments.

Furthermore, we might start to worry about selection effects, again. E.g. if we ask AGI experts when they think AGI will be built, they may be overly optimistic about the timeline: after all, if they didn’t think AGI was feasible soon, they probably wouldn’t be focusing their careers on it.

Perhaps we can salvage this approach for determining whether one has a contrarian view, but for now, let’s consider another proposal.

Mildly extrapolated elite opinion

Nick Beckstead instead suggests that, at least as a strong prior, one should believe what one thinks “a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to [one’s own] evidence.”[12] Below, I’ll propose a modification of Beckstead’s approach which aims to address the “Is my view contrarian?” question, and I’ll call it the “mildly extrapolated elite opinion” (MEEO) method for determining the relevant expert population.[13]

First: which people are “trustworthy”? With Beckstead, I favor “giving more weight to the opinions of people who can be shown to be trustworthy by clear indicators that many people would accept, rather than people that seem trustworthy to you personally.” (This guideline aims to avoid parochialism and self-serving cognitive biases.)

What are some “clear indicators that many people would accept”? Beckstead suggests:

IQ, business success, academic success, generally respected scientific or other intellectual achievements, wide acceptance as an intellectual authority by certain groups of people, or success in any area where there is intense competition and success is a function of ability to make accurate predictions and good decisions…

Of course, trustworthiness can also be domain-specific. Very often, elite common sense would recommend deferring to the opinions of experts (e.g., listening to what physicists say about physics, what biologists say about biology, and what doctors say about medicine). In other cases, elite common sense may give partial weight to what putative experts say without accepting it all (e.g. economics and psychology). In other cases, they may give less weight to what putative experts say (e.g. sociology and philosophy).

Hence MEEO outsources the challenge of evaluating academic consensus in different fields to the “generally trustworthy people.” But in doing so, it raises several new challenges. How do we determine which people are trustworthy? How do we “mildly extrapolate” their opinions? How do we weight those mildly extrapolated opinions in combination?
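
As a toy illustration of that last question, here is one way mildly extrapolated opinions could be weighted in combination (this is my own sketch, not part of Beckstead’s proposal, and the numbers and weighting scheme are hypothetical): convert each trusted person’s probability estimate to log-odds, take a trustworthiness-weighted average, and convert back to a probability.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def pool_opinions(estimates):
    """Weighted logarithmic pooling of probability estimates.

    `estimates` is a list of (probability, weight) pairs, where each weight
    is a (hypothetical) trustworthiness score. Returns the pooled probability.
    """
    total_weight = sum(w for _, w in estimates)
    pooled_logit = sum(w * logit(p) for p, w in estimates) / total_weight
    return 1 / (1 + math.exp(-pooled_logit))

# Hypothetical mildly extrapolated opinions on some claim, given as
# (probability, trustworthiness weight) pairs.
opinions = [(0.9, 3.0), (0.6, 2.0), (0.2, 1.0)]
print(round(pool_opinions(opinions), 2))  # 0.73
```

Logarithmic pooling is only one standard aggregation rule (straight linear averaging is another), and of course the hard part is not the arithmetic but deciding whose opinions to include and what weights they deserve.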

This approach might also be promising, or it might be even harder to use than the “expert consensus” method.

My approach

In practice, I tend to do something like this (sketched in code below):

  • To determine whether my view is contrarian, I ask whether there’s a fairly obvious, relatively trustworthy expert population on the issue. If there is, I try to figure out what their consensus on the matter is. If it’s different from my view, I conclude I have a contrarian view.

  • If there isn’t an obvious trustworthy expert population on the issue from which to extract a consensus view, then I basically give up on step 1 (“Is my view contrarian?”) and just move to the model combination in step 2 (see below), retaining pretty large uncertainty about how contrarian my view might be.
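
Here is a minimal sketch of that procedure in code, assuming the expert population and its consensus are supplied by hand (they are judgment calls, not things an algorithm can find for you; the example inputs are hypothetical):

```python
def is_my_view_contrarian(my_view, expert_population=None, expert_consensus=None):
    """Step 1: compare my view to the consensus of a fairly obvious,
    relatively trustworthy expert population, if one exists.

    Returns True (contrarian), False (not contrarian), or None
    (no obvious expert population or consensus: skip to step 2
    with large uncertainty about how contrarian the view is).
    """
    if expert_population is None or expert_consensus is None:
        return None
    return expert_consensus != my_view

# Hypothetical usage:
print(is_my_view_contrarian("atheism",
                            expert_population="philosophers of religion",
                            expert_consensus="theism"))           # True
print(is_my_view_contrarian("long-term AGI outcomes look bad"))   # None
```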

When do I have good reason to think I’m correct?

Suppose I conclude I have a contrarian view, as I plausibly have about long-term AGI outcomes,[14] and as I might have about the technological feasibility of preserving myself via cryonics.[15] How much evidence do I need to conclude that my view is justified despite the informed disagreement of others?

I’ll try to tackle that question in a future post. Not surprisingly, my approach is a kind of model combination and adjustment.


  1. I don’t have a concise definition for what counts as a “contrarian view.” In any case, I don’t think that searching for an exact definition of “contrarian view” is what matters. In an email conversation with me, Holden Karnofsky concurred, making the point this way: “I agree with you that the idea of ‘contrarianism’ is tricky to define. I think things get a bit easier when you start looking for patterns that should worry you rather than trying to Platonically define contrarianism… I find ‘Most smart people think I’m bonkers about X’ and ‘Most people who have studied X more than I have plus seem to generally think like I do think I’m wrong about X’ both worrying; I find ‘Most smart people think I’m wrong about X’ and ‘Most people who spend their lives studying X within a system that seems to be clearly dysfunctional and to have a bad track record think I’m bonkers about X’ to be less worrying.”

  2. For a diverse set of perspectives on the social epistemology of disagreement and contrarianism not influenced (as far as I know) by the Overcoming Bias and Less Wrong conversations about the topic, see Christensen (2009); Ericsson et al. (2006); Kuchar (forthcoming); Miller (2013); Gelman (2009); Martin & Richards (1995); Shwed & Bearman (2010); Intemann & de Melo-Martin (2013). Also see Wikipedia’s article on scientific consensus.

  3. I suppose I should mention that my entire inquiry here is, à la Goldman (1998), premised on the assumptions that (1) the point of epistemology is the pursuit of correspondence-theory truth, and (2) the point of social epistemology is to evaluate which social institutions and practices have instrumental value for producing true or well-calibrated beliefs.

  4. I borrow this line from Chalmers (2014): “For much of the paper I am largely saying the obvious, but sometimes the obvious is worth saying so that less obvious things can be said from there.”

  5. Holden Karnofsky seems to agree: “I think effective altruism falls somewhere on the spectrum between ‘contrarian view’ and ‘unusual taste.’ My commitment to effective altruism is probably better characterized as ‘wanting/choosing to be an effective altruist’ than as ‘believing that effective altruism is correct.’”

  6. Without such heuristics, we can also rather quickly arrive at contradictions. For example, the majority of scholars who specialize in Allah’s existence believe that Allah is the One True God, and the majority of scholars who specialize in Yahweh’s existence believe that Yahweh is the One True God. Consistency isn’t everything, but contradictions like this should still be a warning sign.

  7. According to the PhilPapers Surveys, 72.8% of philosophers are atheists, 14.6% are theists, and 12.6% categorized themselves as “other.” If we look only at metaphysicians, atheism remains dominant at 73.7%. If we look only at analytic philosophers, we again see atheism at 76.3%. As for physicists: Larson & Witham (1997) found that 77.9% of physicists and astronomers are disbelievers, and Pew Research Center (2009) found that 71% of physicists and astronomers did not believe in a god.

  8. Müller & Bostrom (forthcoming). “Future Progress in Artificial Intelligence: A Poll Among Experts.”

  9. But, this is unclear. First, I haven’t read the forthcoming paper, so I don’t yet have the full results of the survey, along with all its important caveats. Second, distributions of expert opinion can vary widely between polls. For example, Schlosshauer et al. (2013) reports the results of a poll given to participants in a 2011 quantum foundations conference (mostly physicists). When asked “When will we have a working and useful quantum computer?”, 9% said “within 10 years,” 42% said “10–25 years,” 30% said “25–50 years,” 0% said “50–100 years,” and 15% said “never.” But when the exact same questions were asked of participants at another quantum foundations conference just two years later, Norsen & Nelson (2013) report, the distribution of opinion was substantially different: 9% said “within 10 years,” 22% said “10–25 years,” 20% said “25–50 years,” 21% said “50–100 years,” and 12% said “never.”

  10. I say “they” in this paragraph, but I consider myself to be a plausible candidate for an “AGI impact expert,” in that I’m unusually familiar with the arguments and evidence typically brought to bear on questions of long-term AI outcomes. I also don’t have a uniquely good track record on predicting long-term AI outcomes, nor am I among the discovered “super forecasters.” I haven’t participated in IARPA’s forecasting tournaments myself because it would just be too time consuming. I would, however, very much like to see these super forecasters grouped into teams and tasked with forecasting longer-term outcomes, so that we can begin to gather scientific data on which psychological and computational methods result in the best predictive outcomes when considering long-term questions. Given how long it takes to acquire these data, we should start as soon as possible.

  11. Weiss & Shanteau (2012) would call them “privileged experts.”

  12. Beckstead’s “elite common sense” prior and my “mildly extrapolated elite opinion” method are epistemic notions that involve some kind of idealization or extrapolation of opinion. One earlier such proposal in social epistemology was Habermas’ “ideal speech situation,” a situation of unlimited discussion between free and equal humans. See Habermas’ “Wahrheitstheorien” in Schulz & Fahrenbach (1973) or, for an English description, Geuss (1981), pp. 65–66. See also the discussion in Tucker (2003), pp. 502–504.

  13. Beckstead calls his method the “elite common sense” prior. I’ve named my method differently for two reasons. First, I want to distinguish MEEO from Beckstead’s prior, since I’m using the method for a slightly different purpose. Second, I think “elite common sense” is a confusing term even for Beckstead’s prior, since there’s some extrapolation of views going on. But also, it’s only a “mild” extrapolation — e.g. we aren’t asking what elites would think if they knew everything, or if they could rewrite their cognitive software for better reasoning accuracy.

  14. My rough impression is that among the people who seem to have thought long and hard about AGI outcomes, and seem to me to exhibit fairly good epistemic practices on most issues, my view on AGI outcomes is still an outlier in its pessimism about the likelihood of desirable outcomes. But it’s hard to tell: there haven’t been systematic surveys of the important-to-me experts on the issue. I also wonder whether my views about long-term AGI outcomes are more a matter of seriously tackling a contrarian question rather than being a matter of having a particularly contrarian view. On this latter point, see this Facebook discussion.

  15. I haven’t seen a poll of cryobiologists on the likely future technological feasibility of cryonics. Even if there were such polls, I’d wonder whether cryobiologists also had the relevant philosophical and neuroscientific expertise. I should mention that I’m not personally signed up for cryonics, for these reasons.