Proving Too Much (w/ exercises)

This is the first post in the Arguing Well sequence. It is influenced by Scott Alexander’s write-up on Proving Too Much.

[edit: Reformatted the post as a Problem/Solution to clarify what I’m trying to claim]

The Problem

One of the purposes of arguing well is to figure out what is true. A very common type of bad argument claims something like this:

Because of reason X, I am 100% confident in belief Y

I don’t know of any reason that leads to 100% truth all the time (and if you do, please let me know!), and it’s usually hard to reason with someone until this faulty logic is dealt with. Dealing with that logic is the purpose of this post.

For all of the exercises, assume the person is claiming 100% confidence in the belief because of that one reason. What’s wrong with the following:

Ex. 1: I believe that Cthulhu exists because that’s just how I was raised.

How someone was raised doesn’t make a belief true or false. In fact, I could have been raised to believe that Cthulhu doesn’t exist. We can’t both be right.

Ex. 2: I believe that a goddess is watching over me because it makes me feel better and helps me get through the day.

Just because believing it makes you feel better doesn’t make it true. Kids might feel better believing in Santa Claus, but that doesn’t make him actually exist.

Generalized Model

How would you generalize the common problem in the above arguments? You have 2 minutes.

The common theme I see is that the same logic that proves the original claim also proves something false. It “proves too much” because it also proves false things. I like to think of this logic as “qualifications for 100% truth”: whatever qualifications prove the original claim can also prove a false claim.

Truth Qualifications → Claim

Same Truth Qualifications → Absurd Claim

Important Note: the purpose of this frame isn’t to win an argument or prove anything. It’s to differentiate between heuristics that claim a 100% success rate and ones that claim a more accurate estimate. Imagine “I’m 100% confident I’ll roll a 7 with my two dice because of my luck!” vs. “There’s a 6/36 chance I’ll roll a 7, because I’m assuming two fair dice.”
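As a quick sanity check on that 6/36 figure, here’s a minimal Python sketch (my own illustration, not part of the argument) that brute-forces the enumeration:

```python
# Sanity-check the 6/36 arithmetic: enumerate all 36 equally likely
# outcomes of two fair dice and count how many sum to 7.
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # all (die1, die2) pairs
sevens = sum(1 for a, b in outcomes if a + b == 7)
print(f"{sevens} of {len(outcomes)} outcomes sum to 7")  # prints: 6 of 36 outcomes sum to 7
```

6/36 reduces to 1/6, which is exactly what the “two fair dice” assumption predicts; no luck required.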

Let’s work a couple more examples with this model.

Ex. 3: My startup is guaranteed to succeed because it uses quantum machine learning on a blockchain!

A startup using buzzwords doesn’t make it succeed. In fact, several startups have used exactly those terms and failed.

Ex. 4: Of course I believe in evolution! Stephen Hawking believes it, and he’s really smart.

A smart person believing something doesn’t make it true. In fact, smart people often disagree, and I’d bet there’s someone with a Mensa-level IQ who doesn’t believe in evolution.

Ex. 5: This paper’s result has to be true since it has p < 0.05!

A paper having a p-value less than 0.05 doesn’t mean its result is true. In fact, there are several pairs of papers that contradict each other, each with p < 0.05. Also, homeopathy has been shown to have a p-value < 0.005!
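To see why p < 0.05 proves too much, here’s a minimal simulation sketch (a toy setup of fair-coin experiments I made up for illustration, not a real study): even when the effect being tested doesn’t exist, some experiments still clear the threshold by chance.

```python
# Simulate experiments where the null hypothesis is TRUE (a fair coin)
# and count how often we still get p < 0.05 by luck alone.
import math
import random

def two_sided_p(heads: int, flips: int) -> float:
    """Exact two-sided binomial p-value against a fair coin: the chance of
    a result at least as far from flips/2 as the one observed."""
    distance = abs(heads - flips / 2)
    return sum(
        math.comb(flips, k) * 0.5 ** flips
        for k in range(flips + 1)
        if abs(k - flips / 2) >= distance
    )

random.seed(0)
experiments, flips = 10_000, 100
false_positives = sum(
    two_sided_p(sum(random.random() < 0.5 for _ in range(flips)), flips) < 0.05
    for _ in range(experiments)
)
# Prints a few percent (under the nominal 5%, since the exact test on
# discrete data is conservative).
print(f"{false_positives / experiments:.1%} of null experiments hit p < 0.05")
```

Run enough experiments (or publish enough papers) and “p < 0.05” is guaranteed to bless some false claims, which is exactly why it can’t serve as a qualification for 100% truth.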

Ideal Algorithm

What algorithm were you running when you solved the above problems? Is there a more ideal/general algorithm? You have 3 minutes.

1. What does this person believe?

2. Why do they believe it?

3. Generalize that reasoning

4. What’s something crazy I can prove with this reasoning?

The algorithm I actually ran felt like a mix of steps 1, 2, and 3, and then 4, but without literally thinking those words in my head.

Now to practice that new, ideal algorithm you made.

Final Problem Sets

Ex. 6: I believe in my religion because of faith (defined as hope).

Hoping that something is true doesn’t make it true. I can hope to make a good grade on a test, but that doesn’t mean I will make a good grade. Studying would probably help more than hoping. (Here I provided the required counter-example and an additional counter-reason.)

Ex. 7: I believe in my religion because of faith (defined as trust).

Trusting in something doesn’t make it true. I can trust that my dog won’t bite people, but then someone steps on her paw and she bites them. Trusting that my dog won’t bite people doesn’t make my dog not bite people.

Ex. 8: I believe in a soul because I have a really strong gut feeling.

Having a strong gut feeling doesn’t make something true. Jurors can even have conflicting gut feelings about the same crime. If a jury were trying to determine whether I was guilty, I would want them to use the available evidence, not their gut feelings. (Again, I added an additional counter-reason.)

Ex. 9: I believe in my religion because I had a really amazing, transformative experience.

There are several religions that claim contradictory beliefs, yet each has members who have had really amazing, transformative experiences. The same reasoning would prove all of those religions true at once.

Ex. 10: I believe in my religion because there are several accounts of people seeing heaven when they died and came back.

There are several accounts of people seeing their own religion’s version of heaven or nirvana in near-death experiences. By this reasoning, you would have to believe Christianity, Mormonism, Islam, Hinduism, … too!

Ex. 11: You get an email asking you to wire money, promising you’ll be paid back handsomely. The email concludes: “I, prince Nubadola, assure you that this is my message, and it is legitimate. You can trust this email and any others that come from me.”

The email saying that the email is legitimate doesn’t make it legitimate. I could even write a new email saying “Prince Nubadola is a fraud, and I assure you that this is true”. (This is circular reasoning, also known as begging the question.)

Conclusion

In order to argue well, it’s important to identify and work through arguments that prove too much. In practice, this technique can lower someone’s confidence in a belief, or help them clarify, “No, I don’t think this leads to 100% true conclusions all the time, just most of the time.” Either way, communication is better and progress is made.

In the next post, I will generalize Proving Too Much. In the meantime, what’s wrong with this question:

If a tree falls in the woods, but no one is around to hear it, does it make a sound? (Note: you shouldn’t be able to frame this one as Proving Too Much.)

[Feel free to comment if you got different answers/generalizations/algorithms than I did. Same if you feel like you hit on something interesting or that there’s a concept I missed. Adding your own examples with the spoiler tag >! is encouraged.]