Conservation of Expected Evidence

Friedrich Spee von Langenfeld, a priest who heard the confessions of condemned witches, wrote in 1631 the Cautio Criminalis (“prudence in criminal cases”), in which he bitingly described the decision tree for condemning accused witches: If the witch had led an evil and improper life, she was guilty; if she had led a good and proper life, this too was a proof, for witches dissemble and try to appear especially virtuous. After the woman was put in prison: if she was afraid, this proved her guilt; if she was not afraid, this proved her guilt, for witches characteristically pretend innocence and wear a bold front. Or on hearing of a denunciation of witchcraft against her, she might seek flight or remain; if she ran, that proved her guilt; if she remained, the devil had detained her so she could not get away.

Spee acted as confessor to many witches; he was thus in a position to observe every branch of the accusation tree, and to see that no matter what the accused witch said or did, it was held as proof against her. In any individual case, you would only hear one branch of the dilemma. It is for this reason that scientists write down their experimental predictions in advance.

But you can’t have it both ways—as a matter of probability theory, not mere fairness. The rule that “absence of evidence is evidence of absence” is a special case of a more general law, which I would name Conservation of Expected Evidence: the expectation of the posterior probability, after viewing the evidence, must equal the prior probability.
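A short derivation, in standard notation (H for the hypothesis, E for the possible evidence), shows why this must hold:

```latex
% Expected posterior equals prior, by the law of total probability:
\mathbb{E}\!\left[ P(H \mid E) \right]
  = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E)
  = P(H, E) + P(H, \lnot E)
  = P(H)
```

The middle expression is exactly the average of the two possible posteriors, weighted by how likely you are to see each outcome, and it collapses back to the prior no matter what the likelihoods are.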

Therefore, for every expectation of evidence, there is an equal and opposite expectation of counterevidence.

If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow. On average, you must expect to be exactly as confident as when you started out. Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs.
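This balance can be sketched numerically. The numbers below (a 0.9 prior and the two likelihoods) are illustrative assumptions chosen for the example, not anything from the text:

```python
# Conservation of expected evidence, with illustrative numbers.
# Hypothesis H with prior 0.9; a predicted observation E that occurs
# with probability 0.95 if H is true and 0.50 if H is false.

p_h = 0.9                # prior P(H)
p_e_given_h = 0.95       # P(E | H)
p_e_given_not_h = 0.50   # P(E | not H)

# Total probability of seeing the predicted outcome.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior on each branch of the observation, by Bayes' rule.
post_if_e = p_e_given_h * p_h / p_e
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# Seeing E nudges belief up only slightly; failing to see it hits hard.
print(post_if_e)      # small increment above the 0.9 prior
print(post_if_not_e)  # large drop below the 0.9 prior

# Averaging over the outcomes recovers the prior exactly.
expected_posterior = post_if_e * p_e + post_if_not_e * (1 - p_e)
print(expected_posterior)
```

The strong (0.905) probability of seeing the predicted outcome buys only a small confirmation, while the weak (0.095) probability of its failure carries a proportionally severe disconfirmation; the weighted average lands back on the prior.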

So if you claim that “no sabotage” is evidence for the existence of a Japanese-American Fifth Column, you must conversely hold that seeing sabotage would argue against a Fifth Column. If you claim that “a good and proper life” is evidence that a woman is a witch, then an evil and improper life must be evidence that she is not a witch. If you argue that God, to test humanity’s faith, refuses to reveal His existence, then the miracles described in the Bible must argue against the existence of God.

Doesn’t quite sound right, does it? Pay attention to that feeling of this seems a little forced, that quiet strain in the back of your mind. It’s important.

For a true Bayesian, it is impossible to seek evidence that confirms a theory. There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before. You can only ever seek evidence to test a theory, not to confirm it.

This realization can take quite a load off your mind. You need not worry about how to interpret every possible experimental result to confirm your theory. You needn’t bother planning how to make any given iota of evidence confirm your theory, because you know that for every expectation of evidence, there is an equal and opposite expectation of counterevidence. If you try to weaken the counterevidence of a possible “abnormal” observation, you can only do it by weakening the support of a “normal” observation, to a precisely equal and opposite degree. It is a zero-sum game. No matter how you connive, no matter how you argue, no matter how you strategize, you can’t possibly expect the resulting game plan to shift your beliefs (on average) in a particular direction.

You might as well sit back and relax while you wait for the evidence to come in.

. . . Human psychology is so screwed up.