But There’s Still A Chance, Right?

Years ago, I was speaking to someone when he casually remarked that he didn’t believe in evolution. And I said, “This is not the nineteenth century. When Darwin first proposed evolution, it might have been reasonable to doubt it. But this is the twenty-first century. We can read the genes. Humans and chimpanzees have 98% shared DNA. We know humans and chimps are related. It’s over.”

He said, “Maybe the DNA is just similar by coincidence.”

I said, “The odds of that are something like two to the power of seven hundred and fifty million to one.”

He said, “But there’s still a chance, right?”

Now, there’s a number of reasons my past self cannot claim a strict moral victory in this conversation. One reason is that I have no memory of whence I pulled that 2^750,000,000 figure, though it’s probably the right meta-order of magnitude. The other reason is that my past self didn’t apply the concept of a calibrated confidence. Of all the times over the history of humanity that a human being has calculated odds of 2^750,000,000:1 against something, they have undoubtedly been wrong more often than once in 2^750,000,000 times. E.g., the shared genes estimate was revised to 95%, not 98%, and that may even apply only to the 30,000 known genes rather than the entire genome, in which case it’s the wrong meta-order of magnitude.
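
For what it’s worth, here is one back-of-the-envelope that lands in that meta-range, though I can’t vouch that it’s where my past self got the number: if two stretches of DNA were independent random sequences, each base pair would agree with probability 1/4, so the chance of n base pairs agreeing by pure coincidence is

\[
P(\text{coincidence}) = (1/4)^n = 2^{-2n},
\]

which comes to 2^750,000,000:1 against for n = 375,000,000 matching base pairs, about an eighth of the roughly three billion base pairs in the human genome. Count only the bases in 30,000 known genes instead, and n shrinks by orders of magnitude, taking the exponent with it.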

But I think the other guy’s reply is still pretty funny.

I don’t recall what I said in further response (probably something like “No”), but I remember this occasion because it brought me several insights into the laws of thought as seen by the unenlightened ones.

It first occurred to me that human intuitions were making a qualitative distinction between “No chance” and “A very tiny chance, but worth keeping track of.” You can see this in the Overcoming Bias lottery debate.

The problem is that probability theory sometimes lets us calculate a chance which is, indeed, too tiny to be worth the mental space to keep track of it, but by that time, you’ve already calculated it. People mix up the map with the territory, so that on a gut level, tracking a symbolically described probability feels like “a chance worth keeping track of,” even if the referent of the symbolic description is a number so tiny that if it were a dust speck, you couldn’t see it. We can use words to describe numbers that small, but not feelings: a feeling that small doesn’t exist, doesn’t fire enough neurons or release enough neurotransmitters to be felt. This is why people buy lottery tickets: no one can feel the smallness of a probability that small.

But what I found even more fascinating was the qualitative distinction between “certain” and “uncertain” arguments, where if an argument is not certain, you’re allowed to ignore it. Like, if the likelihood is zero, then you have to give up the belief, but if the likelihood is one over googol, you’re allowed to keep it.

Now it’s a free country and no one should put you in jail for illegal reasoning, but if you’re going to ignore an argument that says the likelihood is one over googol, why not also ignore an argument that says the likelihood is zero? I mean, as long as you’re ignoring the evidence anyway, why is it so much worse to ignore certain evidence than uncertain evidence?
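
To make the parallel explicit: in Bayesian terms, your posterior odds are your prior odds multiplied by the likelihood ratio,

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}.
\]

Suppose the evidence has likelihood one over googol under your hypothesis and likelihood around one under the alternative: the update multiplies your odds by roughly 10^-100. A likelihood of exactly zero multiplies them by exactly zero. No decision you will ever face can tell those two posteriors apart, so privileging the second update while ignoring the first is not extra rigor; it is just choosing which evidence you will consent to feel.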

I have often found, in life, that I have learned from other people’s nicely blatant bad examples, duly generalized to more subtle cases. In this case, the flip lesson is that, if you can’t ignore a likelihood of one over googol because you want to, you can’t ignore a likelihood of 0.9 because you want to. It’s all the same slippery cliff.

Consider his example if you ever find yourself thinking, “But you can’t prove me wrong.” If you’re going to ignore a probabilistic counterargument, why not ignore a proof, too?