Proving Too Much (w/ exercises)

This is the first post in the Arguing Well sequence. This post is influenced by Scott Alexander's write-up on Proving Too Much.

[edit: Reformatted the post as a Problem/Solution to clarify what I'm trying to claim]

The Problem

One of the purposes of arguing well is to figure out what is true. A very common type of bad argument claims something like this:

Because of reason X, I am 100% confident in belief Y

I don't know of any reason that leads to 100% truth all the time (and if you do, please let me know!), and it's usually hard to reason with someone until this faulty logic is dealt with first. Dealing with it is the purpose of this post.

Assuming the context of every exercise is someone claiming 100% belief based on that one reason, what's wrong with the following:

Ex. 1: I believe that Cthulhu exists because that's just how I was raised.

How someone was raised doesn't make something true or false. In fact, I could have been raised to believe that Cthulhu doesn't exist. We can't both be right.

Ex. 2: I believe that a goddess is watching over me because it makes me feel better and helps me get through the day.

Just because believing it makes you feel better doesn't make it true. Kids might feel better believing in Santa Claus, but that doesn't mean he actually exists.

Generalized Model

How would you generalize the common problem in the above arguments? You have 2 minutes.

The common theme I see is that the same logic that proves the original claim also proves something false. It "proves too much" because it also proves false things. I like to think of this logic as "Qualifications for 100% truth", and whatever qualifications prove the original claim can also prove a false claim.

Truth Qualifications → Claim

Same Truth Qualifications → Absurd Claim

Important note: the purpose of this frame isn't to win an argument or prove anything. It's to differentiate between heuristics that claim 100% success rates and ones that claim a more accurate estimate. Imagine "I'm 100% confident I'll roll a 7 with my two dice because of my luck!" vs. "There's a 6/36 chance I'll roll a 7 because I'm assuming two fair dice."
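If you want to sanity-check that honest estimate, here's a quick brute-force sketch (my own illustration, not part of the original argument) showing that 6 of the 36 equally likely outcomes of two fair dice sum to 7:

```python
# Enumerate every (die1, die2) outcome of two fair six-sided dice and count
# how many sum to 7. This just confirms the 6/36 (1 in 6) figure above.
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))        # all 36 equally likely pairs
sevens = [pair for pair in outcomes if sum(pair) == 7]

print(f"{len(sevens)}/{len(outcomes)}")                # prints 6/36
```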

Let's work a couple more examples with this model.

Ex. 3: My startup is guaranteed to succeed because it uses quantum machine learning on a blockchain!

A startup using buzzwords doesn't make it succeed. In fact, several startups that use those terms have failed.

Ex. 4: Of course I believe in evolution! Stephen Hawking believes it, and he's really smart.

A smart person believing something doesn't make it true. In fact, smart people often disagree, and I bet there's a person with a Mensa-level IQ who doesn't believe in evolution.

Ex. 5: This paper's result has to be true since it has p < 0.05!

A paper having a p-value less than 0.05 doesn't mean its result is true. In fact, there are several papers with p < 0.05 that disagree with each other. Also, homeopathy studies have reported p-values below 0.005!

Ideal Algorithm

What algorithm were you running when you solved the above problems? Is there a more ideal/general algorithm? You have 3 minutes.

1. What does this person believe?

2. Why do they believe it?

3. Generalize that reasoning

4. What's something crazy I can prove with this reasoning?

The algorithm I actually ran felt like a mix of 1 & 2 & 3, and then 4, but without literally thinking those words in my head.
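Here's a minimal Python sketch of steps 3 and 4 (my own illustration; the rule and the claims are hypothetical stand-ins, not anything from the exercises themselves): generalize the stated reason into a rule, then check whether that same rule also "proves" something you already know is false.

```python
# Generalize-and-check sketch: a reason "proves too much" if, once generalized
# into a rule, it also proves at least one claim we already know is false.

def proves_too_much(rule, absurd_claims):
    """Return the known-false claims that the generalized rule would also 'prove'."""
    return [claim for claim in absurd_claims if rule(claim)]

# Ex. 2 generalized: "anything that feels better to believe is true."
feels_better_to_believe = {"a goddess watches over me", "Santa Claus exists"}

def rule(claim):
    return claim in feels_better_to_believe

known_false = ["Santa Claus exists", "the Moon is made of cheese"]
print(proves_too_much(rule, known_false))  # ['Santa Claus exists'] -> proves too much
```

Of course, the hard part (steps 3 and 4 themselves) still happens in your head; the code only spells out the structure of the check.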

Now to practice that new, ideal algorithm you made.

Final Problem Sets

Ex. 6: I believe in my religion because of faith (defined as hope).

Hoping for something doesn't make it true. I can hope to make a good grade on a test, but that doesn't mean I will. Studying would probably help more than hoping. (Here I provided a counterexample as required, plus an additional counter-reason.)

Ex. 7: I believe in my religion because of faith (defined as trust).

Trusting in something doesn't make it true. I can trust that my dog won't bite people, but then someone steps on her paw and she bites them. Trusting that my dog won't bite people doesn't make it so.

Ex. 8: I believe in a soul because I have a really strong gut feeling.

Having a strong gut feeling doesn't make something true. On juries, people can even have conflicting gut feelings about a crime. If a jury were trying to determine whether I was guilty, I would want them to use the available evidence and not their gut feelings. (Again, I added an additional counter-reason.)

Ex. 9: I believe in my religion because I had a really amazing, transformative experience.

There are several religions that claim contradictory beliefs, each with many people who have had really amazing, transformative experiences. They can't all be right.

Ex. 10: I believe in my religion because there are several accounts of people seeing heaven when they died and came back.

There are several accounts of people seeing their own religion's version of heaven or nirvana in near-death experiences. By that reasoning, you would have to believe Christianity, Mormonism, Islam, Hinduism, … too!

Ex. 11: You get an email asking you to wire money, which you'll be paid back handsomely for. The email concludes: "I, Prince Nubadola, assure you that this is my message, and it is legitimate. You can trust this email and any others that come from me."

The email saying that it's legitimate doesn't make it true. I could even write a new email saying "Prince Nubadola is a fraud, and I assure you that this is true." (This is circular reasoning / begging the question.)

Conclusion

In order to argue well, it's important to identify and work through arguments that prove too much. In practice, this technique has the potential to lower someone's confidence in a belief, or to help clarify that "No, I don't think this leads to 100% true things all the time, just most of the time." Either way, communication is better and progress is made.

In the next post, I will be generalizing Proving Too Much. In the meantime, what's wrong with this question:

If a tree falls in the woods, but no one is around to hear it, does it make a sound? (note: you shouldn't be able to frame it as Proving Too Much)

[Feel free to comment if you got different answers/generalizations/algorithms than I did. Same if you feel like you hit on something interesting or that there's a concept I missed. Adding your own examples with the spoiler tag >! is encouraged.]