Realistic epistemic expectations

When I state a position and offer evidence for it, people sometimes complain that the evidence I've given doesn't suffice to establish my position. The situation is usually that I'm not trying to give a rigorous argument for the position, and I don't claim that the evidence I provide suffices to establish it.

My goal in these cases is to offer a high-level summary of my thinking, and to provide enough evidence that readers have reason to update in the Bayesian sense, and to find the view sufficiently intriguing to investigate further.
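To make "reason to update" concrete, here is a stylized calculation (the numbers are hypothetical, chosen only for illustration). Suppose a reader starts with credence P(H) = 0.1 in my position, and the evidence I offer is three times as likely if the position is true as if it is false, i.e. a likelihood ratio of 3. Then

posterior odds = 3 × (0.1 / 0.9) = 1/3, so P(H | E) = 0.25.

The reader's credence should rise from 10% to 25%: a substantial update, but nowhere near "established." That gap between updating and establishing is the one at issue here.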

In general, when a position is non-obvious, a single conversation is nowhere near enough time to convince a rational person that it's very likely to be true. As Burgundy recently wrote:

When you ask Carl Shulman a question on AI, and he starts giving you facts instead of a straight answer, he is revealing part of his book. The thing you are hearing from Carl Shulman is really only the tip of the iceberg because he cannot talk fast enough. His real answer to your question involves the totality of his knowledge of AI, or perhaps the totality of the contents of his brain.

If I were to restrict myself to making claims that I could substantiate in a mere ~2 hours, that would preclude my sharing the vast majority of what I know.

In math, one can give rigorous proofs starting from very simple axioms, as Gauss described:

I mean the word proof not in the sense of lawyers, who set two half proofs equal to a whole one, but in the sense of mathematicians, where ½ proof = 0, and it is demanded for proof that every doubt becomes impossible.

Even within math, as a practical matter, proofs that appear to be right are sometimes undercut by subtle errors. But outside of math, the only reliable tool at one's disposal is Bayesian inference. In 2009, the charity evaluator GiveWell made very strong efforts to apply careful reasoning to identify its top-rated charity, and gave a "conservative" cost-effectiveness estimate of $545/life saved, which turned out to have been wildly optimistic. Argumentation that looks solid on the surface often breaks down under close scrutiny. This is closely related to why GiveWell emphasizes the need to look at giving opportunities from many angles, and gives more weight to robustness of evidence than to careful chains of argumentation.
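A stylized calculation suggests why long chains are fragile and triangulation is robust (the numbers are illustrative, and treating the steps as independent is a simplification). If an argument has ten steps and each step is 90% likely to be sound, then

P(whole chain sound) = 0.9^10 ≈ 0.35,

so an argument that looks locally airtight is more likely wrong than right. By contrast, three roughly independent lines of evidence that are each only 70% reliable all fail together with probability just 0.3^3 ≈ 0.027.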

Eliezer named this website Less Wrong for a reason: one can never be certain of anything, and all rational beliefs reflect degrees of confidence. I believe that discussion advances rationality the most when it involves sharing perspectives and evidence, rather than argumentation.