Rationality Reading Group: Part C: Noticing Confusion

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This week we discuss Part C: Noticing Confusion (pp. 81-114). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

C. Noticing Confusion

20. Focus Your Uncertainty - If you are paid for post-hoc analysis, you might like theories that “explain” all possible outcomes equally well, without focusing uncertainty. But what if you don’t know the outcome yet, and you need to have an explanation ready in 100 minutes? Then you want to spend most of your time on excuses for the outcomes that you anticipate most, so you still need a theory that focuses your uncertainty.
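The time-allocation idea can be sketched in a few lines of Python; the outcomes and probabilities below are hypothetical illustrations, not numbers from the essay:

```python
# Hypothetical anticipations for tomorrow's outcome; the numbers are
# illustrative, not from the essay.
anticipations = {"market up": 0.6, "market down": 0.3, "market flat": 0.1}

# Spend the 100 minutes of preparation in proportion to how strongly
# you anticipate each outcome.
minutes = {outcome: 100 * p for outcome, p in anticipations.items()}
print(minutes)
```

A theory that focuses uncertainty is exactly what lets the probabilities differ from uniform, so that the preparation time can be concentrated where it is most likely to be needed.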

21. What Is Evidence? - Evidence is an event connected by a chain of causes and effects to whatever it is you want to learn about. It must also be an event that is more likely if reality is one way than if reality is another. If a belief is not formed this way, it cannot be trusted.

22. Scientific Evidence, Legal Evidence, Rational Evidence - For good social reasons, we require legal and scientific evidence to be more than just rational evidence. Hearsay is rational evidence, but as legal evidence it would invite abuse. Scientific evidence must be public and reproducible by everyone, because we want a pool of especially reliable beliefs. Thus, Science is about reproducible conditions, not the history of any one experiment.

23. How Much Evidence Does It Take? - If you are considering one hypothesis out of many, or that hypothesis is more implausible than others, or you wish to know with greater confidence, you will need more evidence. Ignoring this rule will cause you to jump to a belief without enough evidence, and thus be wrong.
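One way to see the rule is in odds form: each independent observation multiplies your odds by its likelihood ratio, so a more implausible hypothesis needs more observations to reach the same confidence. A minimal sketch, with made-up prior odds and likelihood ratio:

```python
def posterior_odds(prior_odds, likelihood_ratio, n_observations):
    # Each independent observation multiplies the odds by its likelihood ratio.
    return prior_odds * likelihood_ratio ** n_observations

# Hypothetical numbers: prior odds of 1:1,000,000 against the hypothesis,
# and each test is 4x as likely to pass if the hypothesis is true.
prior_odds = 1 / 1_000_000
n = 0
while posterior_odds(prior_odds, 4, n) < 100:  # target: odds of 100:1 (~99%)
    n += 1
print(n)  # 14 observations are needed
```

Halving the prior odds, or demanding higher confidence, raises the required count; weaker tests (a smaller likelihood ratio) raise it faster still.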

24. Einstein's Arrogance - Albert Einstein, when asked what he would do if an experiment disproved his theory of general relativity, responded with “I would feel sorry for [the experimenter]. The theory is correct.” While this may sound like arrogance, Einstein doesn't look nearly as bad from a Bayesian perspective. In order to even consider the hypothesis of general relativity in the first place, he would have needed a large amount of Bayesian evidence.

25. Occam's Razor - To a human, Thor feels like a simpler explanation for lightning than Maxwell's equations, but that is because we don't see the full complexity of an intelligent mind. However, if you try to write a computer program to simulate Thor and a computer program to simulate Maxwell's equations, the Maxwell program will be far shorter and easier to write. This is how the complexity of a hypothesis is measured in the formalisms of Occam's Razor.

26. Your Strength as a Rationalist - A hypothesis that forbids nothing permits everything, and thus fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

27. Absence of Evidence Is Evidence of Absence - Absence of proof is not proof of absence. But absence of evidence is always evidence of absence. According to the probability calculus, if P(H|E) > P(H) (observing E would be evidence for hypothesis H), then P(H|~E) < P(H) (absence of E is evidence against H). The absence of an observation may be strong evidence or very weak evidence of absence, but it is always evidence.
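The inequality can be checked numerically with Bayes' theorem; the probabilities below are illustrative, not from the essay:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # P(H|E) by Bayes' theorem, with P(E) from the law of total probability.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical setup: P(H) = 0.3, P(E|H) = 0.8, P(E|~H) = 0.1.
prior = 0.3
p_h_given_e = posterior(prior, 0.8, 0.1)      # observing E raises P(H)
p_h_given_not_e = posterior(prior, 0.2, 0.9)  # P(~E|H) = 0.2, P(~E|~H) = 0.9
assert p_h_given_e > prior > p_h_given_not_e
```

Because E would raise P(H), its absence necessarily lowers P(H); how much it lowers it depends on how strongly E was expected if H were true.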

28. Conservation of Expected Evidence - If you are about to make an observation, then the expected value of your posterior probability must equal your current prior probability. On average, you must expect to be exactly as confident as when you started out. If you are a true Bayesian, you cannot seek evidence to confirm your theory, because you do not expect any evidence to do that. You can only seek evidence to test your theory.
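The identity is easy to verify numerically. With made-up probabilities, weighting each possible posterior by the probability of observing it recovers the prior exactly:

```python
def bayes_posterior(prior, p_obs_given_h, p_obs_given_not_h):
    # P(H | observation) by Bayes' theorem.
    p_obs = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / p_obs

# Hypothetical numbers: P(H) = 0.3, P(E|H) = 0.8, P(E|~H) = 0.1.
prior = 0.3
p_e = 0.8 * prior + 0.1 * (1 - prior)  # total probability of observing E

# Expectation of the posterior over both possible observations (E and ~E).
expected_posterior = (p_e * bayes_posterior(prior, 0.8, 0.1)
                      + (1 - p_e) * bayes_posterior(prior, 0.2, 0.9))
assert abs(expected_posterior - prior) < 1e-12  # equals the prior
```

Any observation that could raise your confidence is balanced by the chance of the opposite observation lowering it, weighted by how likely each is.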

29. Hindsight Devalues Science - Hindsight bias leads us to systematically undervalue scientific findings, because we find it too easy to retrofit them into our models of the world. This unfairly devalues the contributions of researchers. Worse, it prevents us from noticing when we are seeing evidence that doesn't fit what we really would have expected. We need to make a conscious effort to be shocked enough.


This has been a collection of notes on the assigned sequence for this week. The most important part of the reading group, though, is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part D: Mysterious Answers (pp. 117-191). The discussion will go live on Wednesday, 1 July 2015 at or around 6 p.m. PDT, right here on the discussion forum of LessWrong.