Finding Cruxes


This is the third post in the Arguing Well sequence, but it can be understood on its own. This post is influenced by Double Crux, Is That Your True Rejection?, and this one really nice street epistemology guy.

The Problem

Al the atheist tells you, “I don’t believe in the Bible, I mean, there’s no way they could fit all the animals on the boat”.

So you sit Al down and give the most convincing argument that it is indeed perfectly plausible, answering every counterpoint he throws at you and providing mountains of evidence in favor. After a few days you actually manage to convince Al that it’s plausible. Triumphantly you say, “So you believe in the Bible now, right?”

Al replies, “Oh no, there’s no evidence that a great flood even happened on Earth.”

“...”

Sometimes when you ask someone why they believe something, they’ll give you a fake reason. They’ll do this without even realizing they gave you a fake reason! Instead of wasting time arguing points that would never end up convincing them, you can discuss their cruxes.

Before going too deep, here’s a shortcut: ask “if this fact wasn’t true, would you still be just as sure about your belief?”

Ex. 1: Al the atheist tells you, “I don’t believe in the Bible, I mean, there’s no way they could fit all the animals on the boat.”

Instead of the wasted argument from before, I’d ask, “If I somehow convinced you that there was a perfectly plausible way to fit all the animals on the boat, would you believe in the Bible?” (This is effective, but there’s something subtly wrong with it, discussed later. Can you guess it?).

Ex. 2: It’s a historical fact that Jesus existed and died on the cross. Josephus and other historical writers wrote about it and they weren’t Christians!

If you didn’t know about those sources, would you still be just as sure that Jesus existed?

General Frame

A crux is an important reason for believing a claim. Everything else doesn’t really carry any weight. How would you generalize/frame the common problem in the above two examples? You have 3 minutes.

Using the frame of probability theory, each crux would carry a percentage of the reason why you believe the claim. For example, say I’m very sure (95%) my friend Bob is the best friend I’ve ever had: 10% for all the good laughs we had, 30% for all the times Bob initiated calling me first/inviting me to hang out, and 60% for that time he let me stay in his guest room for 6 months while I got back on my feet.

If I woke up in a hospital and realized I had dreamed up those 6 months at Bob’s, I wouldn’t be as sure that he was the best friend I’ve ever had, since I would have just lost a major crux/a major reason for believing that.
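
To make the arithmetic concrete, here’s a minimal sketch of that weighting in Python. It assumes a toy model (my assumption, not something from the original frame) where confidence above a 50% “no evidence” baseline is split among the cruxes in proportion to their weights:

```python
# Toy model of crux weights. The 50% baseline and the "evidence splits
# proportionally across cruxes" rule are illustrative assumptions.

def confidence_without(cruxes, removed, confidence, baseline=0.5):
    """Recompute confidence in a claim after one crux turns out false.

    cruxes: dict mapping crux name -> relative weight (weights sum to 1.0)
    removed: the crux that turned out to be false
    confidence: current confidence in the claim, between 0.0 and 1.0
    """
    evidence = confidence - baseline      # how far above "no idea" we sit
    surviving = 1.0 - cruxes[removed]     # weight the remaining cruxes keep
    return baseline + evidence * surviving

bob = {"good laughs": 0.10,
       "initiated hangouts": 0.30,
       "guest room for 6 months": 0.60}

# Waking up and realizing the guest-room months were a dream:
print(confidence_without(bob, "guest room for 6 months", 0.95))  # 0.68
```

In this model, losing the 60% guest-room crux drops my confidence from 95% to 68%, while losing the 10% “good laughs” crux would only drop it to 90.5%. The high-weight cruxes are the ones that actually move the belief.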

What percentage of weight would a reason need to carry to be considered a crux? What percentage would you consider a waste of time? Which cruxes would you tackle first?

This is arbitrary, and it may not matter for most people’s purposes. I can say for sure I’d like to avoid anything that carries 0% of the belief! But regardless of how you define “crux”, it makes sense to start with the highest-weighted cruxes first and go down from there.

Ex. 3: Eating meat is fine, I mean, I’m not eating anything that smart anyways.

If I proved that pigs are just as intelligent as dogs, would you still eat pigs?

Ex. 4: The Bible is horrible nonsense. There’s no way a “good” God would have anybody eternally tormented.

“If I proved that the Bible had a believable interpretation in which people were just permanently dead instead of tortured, would it make better sense?”

“What if, after digging into the Greek and early manuscripts, the most believable interpretation is that some people would be punished temporarily, but eventually everyone would be saved?”

Algorithm:

What’s an ideal algorithm for finding cruxes?

1. “Why do you believe in X?”

2. “If that reason was no longer true, would you still be just as sure about X?”

a. If no, you can argue the specifics of that reason using the techniques discussed in this sequence.

b. If yes, loop back to 1.

It would sort of go like this:

Bob: “I believe [claim]!”

Alice: “Okay, why do you believe it?”

Bob: “Because of [Reason]!”

Alice: “If [Reason] wasn’t true, would you still be just as sure about [Claim]?”

And then we figure out if that ??% coming from [Reason] is a false reason (low/zero percent), or an actual crux (higher percent).

Note: There is still a ??% for the confidence in the claim itself. Alice could ask “On a scale from 0 to 100, how confident are you about [Claim]?”, which can be a very fun question to ask! If they said “99%”, this would allow you to rephrase the crux question to:

“If [Reason] wasn’t true, would you still be 99% sure about [Claim]?”
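
For the programmatically inclined, here’s the same loop as a minimal interactive Python sketch. The question wordings are taken from the steps above; the function name find_crux and the yes/no handling are illustrative assumptions, not a prescribed implementation:

```python
# Sketch of the crux-finding loop. The questions are quoted from the
# algorithm above; the scaffolding around them is illustrative.

def find_crux(claim):
    confidence = input(f"On a scale from 0 to 100, how confident are you about '{claim}'? ")
    while True:
        # Step 1: ask for a reason.
        reason = input(f"Why do you believe '{claim}'? ")
        # Step 2: the crux question, rephrased with their stated confidence.
        answer = input(f"If '{reason}' wasn't true, would you still be "
                       f"{confidence}% sure about '{claim}'? (yes/no) ")
        if answer.strip().lower() == "no":
            return reason  # a real crux: argue the specifics of this reason
        # "yes" means the reason carried little weight; loop back to step 1

if __name__ == "__main__":
    crux = find_crux("Bob is the best friend I've ever had")
    print(f"A crux worth discussing: {crux}")
```

In a real conversation, the “loop back” branch is where most of the patience is spent; the script just makes the control flow explicit.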

Least Convenient World:

What’s the relationship between finding cruxes and the least convenient world?

The least convenient world is meant to prevent finding loopholes in a hard question, along with any other “avoid directly answering the hard question” technique. It’s a way of finding the crux, of figuring out what is actually being valued.

Oftentimes when trying to find someone’s crux, I’ll say, “Imagine a [least convenient world] where your reason is not true. Would you still hold your belief as strongly?”, and there’s an objection that the imagined world isn’t true or likely! I can say, “Oh, I’m not trying to say it’s likely to happen, I’m just trying to figure out what you actually care about,” and then I find out whether that reason is an actual crux.

(This is covered in Scott’s original post.)

The beauty of finding cruxes this way is that you don’t actually need concrete information. In Proving Too Much, I need to know a counterexample to prove the logic isn’t perfect. In Category Qualifications, I need to know which qualifications for words my audience has in mind in order to choose which words I use. In False Dilemmas, I need to know what object is being arbitrarily constrained, know which qualifications correctly generalize that object, and have real-world information to brainstorm other objects that match those qualifications.

There is still an art to getting someone to understand for the first time the purpose of constructing a least convenient world (“Oh, it’s not meant to be realistic, just a tool for introspection!”), but that can be figured out through practice!

Final Exercise Set

Ex. 5: I believe that pornography destroys love, and there are a lot of scientific studies showing that it has negative effects. [Note: These are mostly all real-life examples, and I’m not just weird]

“If I found a very well done study with a large sample size that determined that pornography consistently reduced crime rates without negative side effects, and the entire scientific field agreed that this was a well done study with robust results, would you still believe that pornography is bad?” (In one of the street epistemologist’s videos linked above, the guy replied, “Well ya, because I’m a Christian”)

Note how the scientific study didn’t even have to exist to figure out that scientific evidence wasn’t a crux.

Ex. 6: I don’t eat meat because of animal suffering.

What if it was replicated meat like on Star Trek? No actual animals involved, just reconfigured atoms to form meat. Would you eat it then?

Say you’re at someone’s house, you’re hungry, and they ask if you want the leftover meat they’re about to throw away. Do you eat it?

Ex. 7: I’m actually a really good singer, and I don’t know why you’re discouraging me from it.

If you heard a recording of yourself and it sounded bad, would you still think you’re a good singer?

Ex. 8: My recorded voice never actually sounds like me.

If I recorded my voice and it sounded like me, would you believe that your recorded voice sounds like you?

(These last two examples came from a conversation with my brother in high school. I was the one who thought I was a good singer, haha)

Conclusion

This is my favorite technique to use when talking to anyone about a seriously held belief. It makes it so easy to cut through the superficial/apologetic/“reasonable-sounding” beliefs and start to truly understand the person I’m talking to, to know what they actually care about. The other techniques in this sequence are useful for sure, but finding the crux of the argument saves time and makes communication tractable. (Read: Find cruxes first! Then argue the specific points!)

Final Note: due to other priorities, the Arguing Well sequence will be on hiatus. I’ve learned a tremendous amount writing these last 4 posts and responding to comments (I will still respond to comments!). With these new gears in place, I’m even more excited to solve communication problems and find more accurate truths. After a few [month/year]s of testing this out in the real world, I’ll be back with an updated model of how to argue well.