Two Truths and a Lie

Response to Man-with-a-hammer syndrome.

It’s been claimed that there is no way to spot Affective Death Spirals, or cultish obsession with the One Big Idea of Everything. I’d like to propose a simple way to spot such errors, with the caveat that it may not work in every case.

There’s an old game called Two Truths and a Lie. I’d bet almost everyone’s heard of it, but I’ll summarize it just in case. A person makes three statements, and the other players must guess which of those statements is false. The statement-maker gets points for fooling people; people get points for not being fooled. That’s it. I’d like to propose a rationalist’s version of this game that should serve as a nifty check on certain Affective Death Spirals, runaway Theories of Everything, and Perfectly General Explanations. It’s almost as simple.

Say you have a theory about human behaviour. Get a friend to do a little research and present three factual claims about how people behave, each of which your theory would realistically apply to. At least one of those claims must be false. See if you can explain all three claims using your theory before learning which is false.
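To make the procedure concrete, here is a minimal sketch in Python. Everything in it is illustrative rather than canonical: `theory_prob` is a hypothetical stand-in for whatever judgment your theory produces, forced into a number between 0 and 1, and the 0.5 threshold for “convincingly explained” is my assumption, not part of the game.

```python
def two_truths_and_a_lie(theory_prob, claims, false_index, threshold=0.5):
    """Run the test on three statements.

    `claims` is a list of three factual claims; claims[false_index] is
    the one that is actually false. `theory_prob` maps a claim to the
    probability your theory assigns to it being true (hypothetical
    stand-in for your own judgment). Returns a verdict string.
    """
    probs = [theory_prob(c) for c in claims]

    # The claim the theory finds least plausible is its guess at the lie.
    suspect = min(range(len(claims)), key=lambda i: probs[i])

    if probs[false_index] >= threshold:
        # The theory "explained" the falsehood as readily as the truths,
        # so it cannot be trusted as a tool of inference.
        return "fail: the theory explains the falsehood"
    if suspect != false_index:
        return "fail: the theory suspects a true claim instead"
    return "pass: the theory discriminated truth from falsehood"
```

In practice, of course, `theory_prob` is you: write down how plausible each claim feels under the theory before the answer is revealed, so you can’t quietly revise the “prediction” afterwards.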

If you can come up with a convincing explanation for all three statements, you must be very cautious when using your One Theory. If it can explain falsehoods, there’s a very high risk you’re going to use it to justify whatever prior beliefs you have. Even worse, you may use it to infer facts about the world, even though it is clearly not reliable enough to do so. You must exercise the utmost caution in applying your One Theory, if not abandon reliance on it altogether. If, on the other hand, you can’t come up with a convincing way to explain some of the statements, and those turn out to be the false ones, then there’s at least a chance you’re on to something.
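The Bayesian version of this warning can be made explicit. As a simplifying sketch, treat a claim $E$ as a binary proposition with a roughly even base rate: a theory $T$ only predicts $E$ to the extent that it withholds probability from $\lnot E$.

```latex
% Probability mass is conserved across a claim and its negation:
\[
  P(E \mid T) + P(\lnot E \mid T) = 1 .
\]
% A theory that "explains" E and \lnot E equally well therefore says
% P(E | T) \approx 1/2, and (with a base rate P(E) also near 1/2)
% Bayes' theorem,
\[
  P(T \mid E) \;=\; \frac{P(E \mid T)\, P(T)}{P(E)} ,
\]
% yields P(T | E) \approx P(T): observing E lends the theory almost no
% support, and the theory gives almost no licence to infer E in return.
```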

Come to think of it, this is an excellent challenge for any proponent of a Big Idea. Give them three claims, some of which are false, and see if their Idea can discriminate. Just remember to be ruthless when they get it wrong; it doesn’t prove their Idea is totally wrong, only that reliance upon it would be.

Edited to clarify: my argument is not that one should simply abandon a theory altogether. In some cases that may be justified: if all the theory has going for it is its predictive power, and you show it lacks that, toss it. But in the case of broad, complex theories that actually can explain many divergent outcomes, this exercise should teach you not to rely on the theory as a means of inference. Yes, you should believe in evolution. No, you shouldn’t make broad inferences about human behaviour without any data merely because those inferences are consistent with evolution, unless your application of the theory of evolution is so precise and well-informed that you can consistently pass the Two-Truths-and-a-Lie test.