Finding Cruxes


This is the third post in the Arguing Well sequence, but it can be understood on its own. This post is influenced by Double Crux, Is That Your True Rejection?, and this one really nice street epistemology guy.

The Problem

Al the atheist tells you, “I don’t believe in the Bible, I mean, there’s no way they could fit all the animals on the boat.”

So you sit Al down and give the most convincing argument that it is indeed perfectly plausible, answering every counterpoint he could throw at you and providing mountains of evidence in favor. After a few days you actually manage to convince Al that it’s plausible. Triumphantly you say, “So you believe in the Bible now, right?”

Al replies, “Oh no, there’s no evidence that a great flood even happened on Earth.”

“...”

Sometimes when you ask someone why they believe something, they’ll give you a fake reason. They’ll do this without even realizing they gave you a fake reason! Instead of wasting time arguing points that would never end up convincing them, you can discuss their cruxes.

Before going too deep, here’s a shortcut: ask “if this fact wasn’t true, would you still be just as sure about your belief?”

Ex. 1: Al the atheist tells you, “I don’t believe in the Bible, I mean, there’s no way they could fit all the animals on the boat”

Instead of the wasted argument from before, I’d ask, “If I somehow convinced you that there was a perfectly plausible way to fit all the animals on the boat, would you believe in the Bible?” (This is effective, but there’s something subtly wrong with it, discussed later. Can you guess it?)

Ex. 2: It’s a historical fact that Jesus existed and died on the cross. Josephus and other historical writers wrote about it and they weren’t Christians!

If you didn’t know about those sources, would you still be just as sure that Jesus existed?

General Frame

A crux is an important reason for believing a claim; everything else doesn’t really carry any weight. How would you generalize/frame the common problem in the above two examples? You have 3 minutes.

Using the frame of probability theory, each crux carries a percentage of the weight behind your belief in that claim. For example, say I’m very sure (95%) my friend Bob is the best friend I’ve ever had: 10% for all the good laughs we had, 30% for all the times Bob initiated calling me first/inviting me to hang out, and 60% for that time he let me stay in his guest room for 6 months while I got back on my feet.

If I woke up in a hospital and realized I dreamed up those 6 months at Bob’s, I wouldn’t be as sure that he was the best friend I’ve ever had since I just lost a major crux/​a major reason for believing that.
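To make the arithmetic concrete, here’s a minimal sketch of one way to play with those weights in code. The update rule (scale the confidence by the weight that remains) is an illustrative assumption for this sketch, not a formal claim from this post.

```python
# Illustrative sketch only: a crude way to model "losing a crux".
# The weights come from the Bob example above; the update rule
# (scale confidence by the weight that remains) is an assumption
# made for this sketch, not a formal claim.

confidence = 0.95  # how sure I am that Bob is the best friend I've ever had

# Each reason and the share of the total weight it carries
cruxes = {
    "good laughs": 0.10,
    "Bob initiates calls / hangouts": 0.30,
    "6 months in his guest room": 0.60,
}

def confidence_without(reason):
    """Rough new confidence if one reason turned out to be false."""
    remaining_weight = 1.0 - cruxes[reason]
    return confidence * remaining_weight

print(confidence_without("good laughs"))                 # ~0.86: barely moves
print(confidence_without("6 months in his guest room"))  # ~0.38: a major crux
```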

What percentage of weight would a crux need to have to be considered a crux? What percentage would you consider a waste of time? Which cruxes would you tackle first?

This is arbitrary, and it may not matter for most people’s purposes. I can say for sure I’d like to avoid anything that carries 0% of the weight! But regardless of how you define “crux”, it makes sense to start with the highest-weighted cruxes and work your way down from there.

Ex. 3: Eating meat is fine, I mean, I’m not eating anything that’s that smart anyways.

If I proved that pigs are just as intelligent as dogs, would you still eat pigs?

Ex. 4: The Bible is horrible nonsense. There’s no way a “good” God would have anybody eternally tormented.

“If I proved that the Bible had a believable interpretation such that people were just permanently dead instead of tortured, would it make more sense?”

“What if, after digging into the Greek and early manuscripts, the most believable interpretation is that some people would be punished temporarily, but eventually everyone would be saved?”

Algorithm:

What’s an ideal algorithm for finding cruxes?

1. “Why do you believe in X?”

2. “If that reason was no longer true, would you still be just as sure about X?”

a. If no, you can argue the specifics of that reason using the techniques discussed in this sequence

b. If yes, loop back to 1.

It would sort of go like this:

Bob: “I believe [claim]!”

Alice: “Okay, why do you believe it?”

Bob: “Because of [Reason]!”

Alice: “If [Reason] wasn’t true, would you still be just as sure about [Claim]?”

And then we figure out if that ??% coming from [Reason] is a false reason (low/​zero percent), or an actual crux (higher percent).

Note: There is still a ??% for the overall confidence in the claim. Alice could ask “On a scale from 0 to 100, how confident are you about [Claim]?”, which can be a very fun question to ask! If they said “99%”, this would allow you to rephrase the crux question to:

“If [Reason] wasn’t true, would you still be 99% sure about [Claim]?”
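For the programmatically inclined, here’s a minimal sketch of the loop above as code. The prompts are placeholders standing in for a live conversation; this only makes the control flow of the algorithm explicit, it isn’t a real tool.

```python
# Minimal sketch of the crux-finding loop above. The input()/print()
# prompts stand in for a live conversation; the yes/no branch is the
# same control flow as the numbered algorithm.

def find_crux(claim):
    while True:
        reason = input(f"Why do you believe {claim}? ")
        answer = input(
            f"If '{reason}' wasn't true, would you still be just as sure "
            f"about {claim}? (yes/no) "
        )
        if answer.strip().lower() == "no":
            # Losing this reason would shake the belief: it's a crux,
            # so this is the point actually worth arguing.
            return reason
        # "Yes, just as sure" means the reason carried little weight:
        # a fake reason, so loop back and ask why again.
        print(f"Then '{reason}' isn't carrying much weight. Why else?")

# Example: find_crux("the Bible")
```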

Least Convenient World:

What’s the relationship between finding cruxes and the least convenient world?

The least convenient world is meant to shut down loophole-finding and every other “avoid directly answering the hard question” technique. It’s a way of finding the crux, of figuring out what is actually being valued.

Oftentimes when I’m trying to find someone’s crux and I say, “Imagine a [least convenient world] where your reason is not true. Would you still hold your belief as strongly?”, they object that the imagined world isn’t real or likely! I can say, “Oh, I’m not trying to say it’s likely to happen, I’m just trying to figure out what you actually care about,” and then I find out whether that reason is an actual crux.

(This is covered in Scott’s original post)

The beauty of finding cruxes this way is that you don’t actually have to have concrete information. In Proving Too Much, I need to know a counterexample to prove the logic isn’t perfect. In Category Qualifications, I need to know which qualifications my audience has in mind for a word in order to choose which words I use. In False Dilemmas, I need to know what object is being arbitrarily constrained, which qualifications correctly generalize that object, and enough real-world information to brainstorm other objects that match those qualifications.

There is still an art to getting someone to understand for the first time the purpose of constructing a least convenient world (“Oh, it’s not meant to be realistic, just a tool for introspection!”), but that can be figured out through practice!

Final Exercise Set

Ex. 5: I believe that pornography destroys love, and there are a lot of scientific studies showing that it has negative effects. [Note: These are mostly all real-life examples, and I’m not just weird]

“If I found a very well-done study with a large sample size which determined that pornography consistently reduced crime rates without negative side effects, and the entire scientific field agreed that it was a well-done study with robust results, would you still believe that pornography is bad?” (In one of the street epistemologist’s videos linked above, the guy replied, “Well ya, because I’m a Christian.”)

Note how the scientific study didn’t even have to exist to figure out that scientific evidence wasn’t a crux.

Ex. 6: I don’t eat meat because of animal suffering

What if it was replicated meat, like on Star Trek? No actual animals involved, just reconfigured atoms forming meat. Would you eat it then?

What if you’re at someone’s house, you’re hungry, and they ask if you want the leftover meat that they’re about to throw away? Do you eat it?

Ex. 7: I’m actually a really good singer, and I don’t know why you’re discouraging me from it.

If you heard a recording of yourself and it sounded bad, would you still think you’re a good singer?

Ex. 8: My recorded voice never actually sounds like me.

If I recorded my voice and it sounded like me, would you believe that your recorded voice sounds like you?

(These last two examples came from a conversation with my brother in high school. I was the one who thought I was a good singer, haha)

Conclusion

This is my favorite technique to use when talking to anyone about a seriously held belief. It makes it so easy to cut through superficial/apologetic/“reasonable-sounding” beliefs and start to truly understand the person I’m talking to, to know what they actually care about. The other techniques in this sequence are useful for sure, but finding the crux of the argument saves time and makes communication tractable. (Read: Find cruxes first! Then argue the specific points!)

Final Note: Due to other priorities, the Arguing Well sequence will be on hiatus. I’ve learned a tremendous amount writing these last 4 posts and responding to comments (I will still respond to comments!). With these new gears in place, I’m even more excited to solve communication problems and find more accurate truths. After a few [month/year]s testing this out in the real world, I’ll be back with an updated model of how to argue well.