Finding Cruxes
This is the third post in the Arguing Well sequence, but it can be understood on its own. This post is influenced by Double Crux, “Is That Your True Rejection?”, and this one really nice street epistemology guy.
The Problem
Al the atheist tells you, “I don’t believe in the Bible, I mean, there’s no way they could fit all the animals on the boat”.
So you sit Al down and give the most convincing argument that it is indeed perfectly plausible, answering every counterpoint he throws at you and providing mountains of evidence in favor. After a few days you actually manage to convince Al that it’s plausible. Triumphantly you say, “So you believe in the Bible now, right?”
Al replies, “Oh no, there’s no evidence that a great flood even happened on Earth”
“...”
Sometimes when you ask someone why they believe something, they’ll give you a fake reason. They’ll do this without even realizing they gave you a fake reason! Instead of wasting time arguing points that would never end up convincing them, you can discuss their cruxes.
Before going too deep, here’s a shortcut: ask “if this fact wasn’t true, would you still be just as sure about your belief?”
Ex. 1: Al the atheist tells you, “I don’t believe in the Bible, I mean, there’s no way they could fit all the animals on the boat”
Instead of the wasted argument from before, I’d ask, “If I somehow convinced you that there was a perfectly plausible way to fit all the animals on the boat, would you believe in the Bible?” (This is effective, but there’s something subtly wrong with it, discussed later. Can you guess it?)
Ex. 2: It’s a historical fact that Jesus existed and died on the cross. Josephus and other historical writers wrote about it and they weren’t Christians!
If you didn’t know about those sources, would you still be just as sure that Jesus existed?
General Frame
A crux is an important reason for believing a claim; everything else doesn’t really carry any weight. How would you generalize/frame the common problem in the above two examples? You have 3 minutes.
Using the frame of probability theory, each crux would have a percent of the reason why you believe that claim. For example, say I’m very sure (95%) my friend Bob is the best friend I’ve ever had. 10% for all the good laughs we had, 30% for all the times Bob initiated calling me first/ inviting to hang out, and 60% for that time he let me stay in his guest room for 6 months while I got back on my feet.
If I woke up in a hospital and realized I dreamed up those 6 months at Bob’s, I wouldn’t be as sure that he was the best friend I’ve ever had since I just lost a major crux/a major reason for believing that.
What percentage of weight would a crux need to have to be considered a crux? What percentage would you consider a waste of time? Which cruxes would you tackle first?
This is arbitrary, and it may not matter for most people’s purposes. I can say for sure I’d like to avoid anything that carries 0% of the belief! But regardless of how you define “crux”, it makes sense to start with the highest-weighted cruxes first and work down from there.
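As a rough sketch of this bookkeeping (the confidence and weights come from the Bob example above; the reason names, the dictionary structure, and the sort-then-skip-zero logic are just illustrative choices):

```python
# Sketch: weigh the reasons behind a belief and tackle the biggest first.
# Numbers are from the "Bob is the best friend" example in the post.

confidence = 0.95  # how sure I am that Bob is the best friend I've ever had

# Each reason's share of that confidence (shares sum to 100%)
reasons = {
    "good laughs": 0.10,
    "initiates calls / invites to hang out": 0.30,
    "let me stay in his guest room for 6 months": 0.60,
}

# Highest-weighted cruxes first; anything carrying 0% is a waste of time
for reason, weight in sorted(reasons.items(), key=lambda kv: -kv[1]):
    if weight > 0:
        print(f"{reason}: carries {weight:.0%} of the belief")
```

Nothing here is deep; the point is just that ordering the discussion by weight falls out of the numbers automatically.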
Ex. 3: Eating meat is fine, I mean, what I’m eating isn’t that smart anyways.
If I proved that pigs are just as intelligent as dogs, would you still eat pigs?
Ex. 4: The Bible is horrible nonsense. There’s no way a “good” God would have anybody eternally tormented.
“If I proved that the Bible had a believable interpretation such that people were just permanently dead instead of tortured, would it make better sense?”
“What if, after digging into the Greek and early manuscripts, the most believable interpretation is that some people would be punished temporarily, but eventually everyone would be saved?”
Algorithm:
What’s an ideal algorithm for finding cruxes?
1. “Why do you believe in X”
2. “If that reason was no longer true, would you still be just as sure about X?”
a. If no, you can argue the specifics of that reason using the techniques discussed in this sequence
b. If yes, loop back to 1.
It would sort of go like this:
Bob: “I believe [claim]!”
Alice: “Okay, why do you believe it?”
Bob: “Because of [Reason]!”
Alice: “If [Reason] wasn’t true, would you still be just as sure about [Claim]?”
And then we figure out if that ??% coming from [Reason] is a false reason (low/zero percent), or an actual crux (higher percent).
Note: There is still a ??% for the overall confidence in the claim. Alice could ask “On a scale from 0 to 100, how confident are you about [Claim]?”, which can be a very fun question to ask! If they said “99%”, this would allow you to rephrase the crux question to:
“If [Reason] wasn’t true, would you still be 99% sure about [Claim]?”
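The loop above can be sketched as code. This is only a toy: `find_crux` and the scripted (reason, answer) pairs are made-up stand-ins for a live conversation, where each pair records a reason given in step 1 and the person’s answer to the step-2 question.

```python
# Sketch of the crux-finding loop: ask why, ask the counterfactual,
# stop at the first reason whose loss would actually move the belief.

def find_crux(script):
    """`script` is a list of (reason, still_just_as_sure) pairs, in the
    order the person offers them. Returns the first crux, else None."""
    for reason, still_just_as_sure in script:
        # Step 2: "If that reason were no longer true, would you
        # still be just as sure?"
        if not still_just_as_sure:
            return reason  # 2a: a crux -- argue the specifics here
        # 2b: "yes" means a fake reason; loop back to step 1
    return None

# Al's conversation: the boat turns out to be a fake reason,
# the flood is the actual crux
crux = find_crux([
    ("the animals fit on the boat", True),
    ("evidence of a global flood", False),
])
print(crux)  # prints: evidence of a global flood
```

In a real conversation the “script” is generated on the fly, but the control flow is exactly this loop.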
Least Convenient World:
What’s the relationship between finding cruxes and the least convenient world?
The least convenient world is meant to prevent finding loopholes in a hard question and any other “avoid directly answering the hard question” technique. It’s a way of finding the crux, to figure out what is actually being valued.
Oftentimes when trying to find someone’s crux, I’ll say, “imagine a [least convenient world] where your reason is not true. Would you still hold your belief as strongly?”, and get the objection that that imagined world isn’t real or likely! I can say, “Oh, I’m not trying to say it’s likely to happen, I’m just trying to figure out what you actually care about”, and then I find out if that reason is an actual crux.
(This is covered in Scott’s original post)
The beauty of finding cruxes this way is that you don’t actually have to have concrete information. In Proving Too Much, I need to know a counterexample to prove the logic isn’t perfect. In Category Qualifications, I need to know which qualifications for words my audience has in mind to choose which words I use. In False Dilemmas, I need to be able to know what object is being arbitrarily constrained, which qualifications correctly generalize that object, and have real-world information to brainstorm other objects that match those qualifications.
There is still an art to getting someone to understand for the first time the purpose of constructing a least convenient world (“Oh, it’s not meant to be realistic, just a tool for introspection!”), but that can be figured out through practice!
Final Exercise Set
Ex. 5: I believe that pornography destroys love, and there are a lot of scientific studies showing that it has negative effects. [Note: These are mostly all real life examples, and I’m not just weird]
“If I found a very well-done study with a large sample size that determined that pornography consistently reduced crime rates without negative side effects, and the entire scientific field agreed that this was a well-done study with robust results, would you still believe that pornography is bad?” (In one of the street epistemology videos linked above, the guy replied, “Well ya, because I’m a Christian”)
Note how the scientific study didn’t even have to exist to figure out that scientific evidence wasn’t a crux.
Ex. 6: I don’t eat meat because of animal suffering
What if it was replicated meat like on Star Trek? No actual animals involved, just reconfigured atoms to form meat. Would you eat it then?
What if you were at someone’s house, hungry, and they asked if you wanted the leftover meat they were about to throw away. Would you eat it?
Ex. 7: I’m actually a really good singer, and I don’t know why you’re discouraging me from it.
If you heard a recording of yourself and it sounded bad, would you still think you’re a good singer?
Ex. 8: My recorded voice never actually sounds like me.
If I recorded my voice and it sounded like me, would you believe that your recorded voice sounds like you?
(These last two examples came from a conversation with my brother in high school. I was the one who thought I was a good singer, haha)
Conclusion
This is my favorite technique to use when talking to anyone about a seriously held belief. It makes it so easy to cut through the superficial/apologetic/“reasonable-sounding” beliefs, and start to truly understand the person I’m talking to, to know what they actually care about. The other techniques in this sequence are useful for sure, but finding the crux of the argument saves time and makes communication tractable. (Read: Find cruxes first! Then argue the specific points!)
Final Note: due to other priorities, the Arguing Well sequence will be on hiatus. I’ve learned a tremendous amount writing these last 4 posts and responding to comments (I will still respond to comments!). With these new gears in place, I’m even more excited to solve communication problems and find more accurate truths. After a few [month/year]s testing this out in the real world, I’ll be back with an updated model on how to argue well.
I don’t see an immediately obvious way to give percentages for how responsible each reason is for a belief. What we can do is ask, for each subset of reasons, how confident they would be in their belief if they only had that subset. Did you have some way in mind to get percentages from the following state of affairs?
Reasons: A, B, C
{A, B, C}: 99%
{A, C}: 98%
{B, C}: 98%
{C}: 50%
{A, B}: 89%
{A}: 88%
{B}: 88%
{}: 40%
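One way to see the difficulty concretely (a sketch of my own, not something the table specifies): compute a reason’s marginal contribution on the log-odds scale and notice that it depends on which other reasons are already present, so no single per-reason percentage summarizes the table without further assumptions.

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1 - p))

# The subset -> confidence table from the comment above
conf = {
    frozenset("ABC"): 0.99,
    frozenset("AC"): 0.98,
    frozenset("BC"): 0.98,
    frozenset("C"): 0.50,
    frozenset("AB"): 0.89,
    frozenset("A"): 0.88,
    frozenset("B"): 0.88,
    frozenset(): 0.40,
}

# Marginal contribution of reason A, in log-odds, in two contexts
alone = logit(conf[frozenset("A")]) - logit(conf[frozenset()])
on_top_of_BC = logit(conf[frozenset("ABC")]) - logit(conf[frozenset("BC")])

print(f"A alone adds {alone:.2f} log-odds")          # ~2.40
print(f"A on top of B and C adds {on_top_of_BC:.2f}")  # ~0.70
```

Since the two numbers differ, the reasons in this table are not independent pieces of evidence, and “the” percentage for A is underdetermined.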
{} should be 0%, unless you’re talking about a uniform distribution over all possibilities? Like {} for a coin flip is 50% for heads, and {} for a dice roll being 4 is 1/6?
Though, when talking to someone, I would probably never go so technical as to ask for their confidence for each subset. They are probably not calibrated and probably can’t even enumerate every possible reason why they believe something. [This is also related to Scott Alexander’s “Not Sounding Like a Robot”]
Iterating through the algorithm in this post does allow someone else to think through possible reasons why they believe, and whether or not that reason is a crux.
{} is the subjective probability estimate given that reasons A, B and C are not present.
You may not want to ask for all this information, but it is in fact exactly all the relevant information. If you want to extract just one percentage per reason, you should define how you do this, just so it is clear what exactly you are asking. Those percentages may then again be obtainable through more direct questions.
So there’s still other reasons, right? They’re just not in the set {A,B,C}?
I don’t understand your overall point. Say I believe in ghosts with 99% confidence. Three reasons why are:
A. Ghost shows I watch: 10%
B. Internet stories: 15%
C. Those few times a ghost girl stood at the foot of my bed and I couldn’t move or scream: 60%
Would you apply what you’re trying to say/ask to this example?
If you hadn’t experienced those ghost girls, what would be your confidence?
What does that 60% mean? What changes when we replace it by 50%? Can you unpack the definition of “How much of the belief is due to this reason?”?
Oh! That’s clear, thanks!
I give an example of this in the “bob is best friend” picture.
How you calculate it is just a proportion. I’m 99% sure of ghosts, and 60% of that is 60*.99=59.4 percentage points.
If I figure out that the ghost girl was actually just my brain rationalizing sleep paralysis, then my belief in ghosts loses 59.4 percentage points. So now I believe in ghosts with 99-59.4 = 39.6% confidence. Note that the other two reasons (and unmentioned reasons not in set {A,B,C}) must now be renormalized to equal 100%.
You should be able to verify that you understand this by getting the same answer I did in the “Bob is best friend” example.
With this you can also answer: how many percentage points of 99% do you lose when the ghost girl belief goes from 60% to 50%?
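A minimal sketch of this proportional update rule (the “subtract weight × confidence” arithmetic described above; `drop_reason` is a made-up name):

```python
# Update rule from the comment: losing a reason that carries `weight`
# of the belief removes weight * confidence percentage points.

def drop_reason(confidence, weight):
    """New confidence after a reason carrying `weight` of it is removed."""
    return confidence - weight * confidence

# Ghost example: 99% confident, the ghost-girl reason carries 60%
print(round(drop_reason(0.99, 0.60), 3))  # -> 0.396

# "Bob is best friend" example: 95% confident, guest-room reason carries 60%
print(round(drop_reason(0.95, 0.60), 2))  # -> 0.38
```

Note this is the same as multiplying the confidence by (1 − weight), which makes the “proportion” reading explicit.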
99% is much further from 98% than 51% is from 50%. As an example, getting from a one-in-a-million confidence that Alice killed Bob (because Alice is one of a million citizens) to ten suspects requires much more evidence than eliminating five of them. Probability differences are measured on the log-odds scale, in order to make seeing reason A, then B have the same effect as seeing B, then A. On that scale, you could in fact take two statistically independent reasons and say how many times more evidence one gives than the other.
I don’t understand how your comment relates to mine. Are you claiming the math to update the confidence is wrong?
Are you claiming that I haven’t properly defined how to calculate the probabilities and that this is bad for a reason?
Yes to both. Suppose a coin has heads probability 33% and another 66%. We take a random coin and throw it three times. Afterwards, if we have seen 0, 1, 2 or 3 heads, the subjective probability of us having taken the 66% coin is 1⁄9, 1⁄3, 2⁄3 or 8⁄9. The absolute probability reduction is not the same each time we remove a reason to believe. On a log-odds scale, it is.
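This can be checked with a few lines of Python (a sketch; `posterior_66` is a made-up name for the posterior computation the comment describes):

```python
import math
from fractions import Fraction

# Two coins: one lands heads 1/3 of the time, the other 2/3.
# We pick one at random (prior 1/2 each) and flip it 3 times.

def posterior_66(heads, flips=3):
    """P(we hold the 2/3-heads coin | observed `heads` in `flips`)."""
    like_66 = Fraction(2, 3) ** heads * Fraction(1, 3) ** (flips - heads)
    like_33 = Fraction(1, 3) ** heads * Fraction(2, 3) ** (flips - heads)
    # Equal priors cancel, so the posterior is just the normalized likelihood
    return like_66 / (like_66 + like_33)

for h in range(4):
    p = posterior_66(h)
    log_odds = math.log(p / (1 - p))
    print(h, p, round(log_odds, 3))
# Posteriors: 1/9, 1/3, 2/3, 8/9 -- unequal steps in probability,
# but each extra head adds the same 2*ln(2) ~= 1.386 in log-odds,
# because one more head (and one fewer tail) multiplies the odds by 4.
```

So the “same amount of evidence per reason” property holds on the log-odds scale and fails on the raw probability scale, as the comment says.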
Thanks for explaining. I’m more convinced you’re right math wise, though I haven’t verified for myself.
I don’t think understanding this or working it out correctly will help in actual conversations with people about their beliefs though. (In fact, I get the most out of it by just drawing the picture of beliefs and connected reasons, and writing estimated probabilities. It really helps keep track of what’s said and makes circular reasoning very clear.)
Are you saying there is a practical reason for doing so? I can’t imagine one for the average university student I run into, let alone less technical people. Maybe with oneself or someone technical?
Having in mind that we are measuring bits of evidence tells us that to give percentages, we must establish a baseline prior probability that we would assign without reasons.
Mostly you should be fine, just have heuristics for the anomalies near 0 and 1: if one reason pushes the probability to .5 and another to .6, then either the prior was noticeably far from zero or getting only the second reason won’t be noticeable either.
This seems like the flip-side to the guideline “Write your true reasons for believing something, not what you think is more likely to persuade others...and note what would change your mind”. Just as the guideline is a method of getting yourself to the crux of an issue, this essay is about how to quickly get someone else to the crux of an issue.
If I argue someone from believing a claim to be false to being open about whether the claim is false or true, that is often time well spent toward correct reasoning. Say that somebody put down a book because they became convinced that the book’s author has no clue. If you can argue that what the book says makes sense, they will probably resume reading. But I can’t even argue about what would make them form the opinion that the book makes sense as a whole.
Even in the example, it could be that the boat-fitting is the most incredible thing, but the book contains a lot of incredible things. It would be weird to start with a complete list; instead it is more natural to lead with the most pressing reason. But that it is pressing doesn’t mean it’s the sole reason. Just because Al doesn’t mention percentages or other technicalities doesn’t mean he is being misleading. And even in the case that Al continues to believe that the Bible is false, he probably has milder grounds, which would correspond to believing with a lower percentage certainty. Al can be read both as what happens without this technique and as what happens with it.
For example, I believe there are multiple things about “flat earth” I have an issue with, and I don’t even know the claims well enough to know whether there is a “true crux” that would make me change my mind about it. Does that mean that the individual things I disagree with are not true disagreements?
I guess differentiating between “I would believe in not-X” and “I would not believe in X” is often not essential, but agnosticism would be comparatively valuable to a full flip. “Would you eat dog-smart pigs?”, “I do not know whether I would eat dog-smart pigs” means the person takes the intelligence level seriously.
I realize my other comment covers your 3rd paragraph, so moving on to the 4th one:
Not sure I fully understand it. But if I got someone to say “I don’t know”, then I count that as a major win, because they’ll usually think about it more and try to figure it out. So I think we agree that’s a good thing.
I don’t think it always implies that, in this case, intelligence is a crux. After introspecting, they could say “actually, I think I care about whether or not I had them as a pet growing up”.
I don’t understand the “agnosticism is comparatively valuable to a flip” part.
I think in the case that a stronger crux was found, it still underlines that it was a true reason. If you examine your beliefs and change them on the fly, it still means you changed beliefs, even if you present that you had the original position all along. I also recently heard a story from when my grandmother was little which would lead me to believe that “I care about bringing them up as a pet” could be a fake reason but sincerely held. Being correct or incorrect about what would change your mind is not always that straightforward.
The edge cases highlight how finding cruxes is not about finding fake reasons, since a true reason can fail to be a crux. That is, a true update that doesn’t change the opinion is frustrating in the arguing sense, but rewarding for getting more high-quality cognition.
I don’t know what you mean by “true reason”. I’ve defined a “crux” as holding probability such that it being true or not actually affects the confidence in the belief. Could you define “true reason” in this level of detail?
I’m confused about where we disagree:
When someone gives me a reason for why they believe in something, I don’t assume that they gave me a crux.
When I ask someone “if that reason turned out to not be true, would you still be just as confident in your belief?”, I’ll usually trust them when they say “yes” or “no”.
If, after 2, I show them that their reason is actually false and they say “oh, that actually didn’t change my mind like I predicted”, then most people would feel weird/bad about being inconsistent and would try to resolve it. This situation is also good, but I predict it’s unlikely.
I thought that a crux was a statement such that, if belief in it is altered, the conclusion changes. A true reason is something that is actually used in reasoning. The opposite is a fake reason. It is a reason in that it entails the conclusion, but it is fake because it is not actually used in the reasoning. That is “plausible affirmability”. A non-crux would be a statement that, if changed, would not move the conclusion. If you have a true reason that is a non-crux, then after the argument you hold the same conclusion with different reasons/groundings. If you have a fake reason that is a non-crux, then after the argument your belief is implausible (or it’s a talking point that can be repeated for rhetorical purposes with no change in positions). If you have a fake reason that is a crux, then you will profess a different conclusion after the argument. And for a true reason that is a crux, your position actually changes.
I do find it strange that the line of questioning doesn’t trust that “why do you believe that?” is answered accurately, but does trust that “would you change your mind?” is. A common misreading of a question like “Why do you believe that?” could be “please give a defence of your opinion”, which would produce an inaccurate answer. The point of a clarifying line of questions would be to disambiguate: “no really, I am interested in the why and would like you to share it”.
Thanks for explaining the differences and going into detail for the combinations. I defined crux in this post as anything with probability attached to it, such that if it wasn’t true, the confidence of the belief would lower. This is more general and covers cases that have multiple reasons that lead to a belief.
For ex. I believe with 99% confidence that I picked the fair coin after I flipped it 10 times. Each of those 10 flips contributes to the belief, and each is a crux.
I don’t quite understand the disambiguating of the last line. Some people do interpret it as “give me a defense/good arguments for that belief“, but I don’t see how “no really, …” couldn’t also be misinterpreted the same way.
To clarify why I trust one statement and not another, I used this technique a couple of days ago with two guys. I asked the “why do you believe this?”, and didn’t get a crux. I asked if he’d be just as confident if his reason didn’t exist and he said he’d be just as confident.
After then explaining that I’m looking for reasons that contribute to the confidence of his belief, he said “oh, I get it” and we had a very productive conversation.
I think that drawing the picture, asking for their confidence, asking why, and asking if it’s a crux helps tremendously towards a productive conversation. I think this process (which takes like 1-2 minutes per iteration) disambiguates the “why” question mentioned above. (Though, if you have better phrasings that are clearer, I’d be happy to hear them)
I had in mind, when writing the cases, that the confidences could be degrees. I don’t see how the realness instead of binariness explains any disparities.
Trying to be clear about what I mean and how to effectively communicate it: I probably define both “realness” and “cruxiness” as similar continuums instead of binaries, which probably wasn’t apparent in my expression, but I was thinking of expressing it through the poles. I think that a belief can be “half-real”, where you have a mix of reasoning you don’t know whether you state in name only or whether it actually convinced/convinces you. Similarly the other aspect has a slide in it. I will reserve the name “crux” for the concept used in the post, and rename my other aspect “stateness”. I think they are the same, but I am not entirely sure.
Someone close to me pointed out that being smart can make you argumentative: non-smartasses might disagree with a claim, but if two smartasses agree with a claim they can disagree on why that position should be held. I have found it very helpful for myself to have debates over whether the reasons I find something are correct or not. To phrase it differently: if I get something right based on luck, I made an error, because I should have arrived at the answer in a systematic way that can be relied on. In more middle cases when I apply a heuristic, the heuristic can be more or less applicable to the situation, and applying a heuristic where it is not meant to be applied is an error-like thing.
Using probabilities can be very model-ambivalent, which makes different models somewhat interoperable. But it makes it so that model-sensitive aspects of arguing are going to be hidden or very hard to express in that kind of language. I still think there is a danger that any claim with 0 probability-moving power would be judged to be a free-spinning wheel. Then there is the issue of whether valuable 0-probability-moving issues exist, and whether it is productive to bring such entities into a cooperative deliberation or argument. The point of the crux method is to identify points that lead to effective resolution. Therefore it depends whether we use resolution to be more deliberative/communicative, or view the agreement as the higher goal, so that deliberation and communication are tools to get to agreement.
In the coin flip example, say that you flip the coins and somebody says “there are as many tails as heads”. When you start to think about what you would believe if one of the flips had a different result, there are multiple paths: 1) coin #1 is different, and the bystander doesn’t say that the amounts are equal; 2) coin #1 is different, the bystander does say the amounts are equal, and coin #2 is different also; 3) all coins have the opposite result, and the bystander says the amounts are equal. It seems strange to me that every single one of your cruxes could be different yet you would still maintain that the coin was fair. That doesn’t sound like a move in confidence. That doesn’t seem to fulfill the definition of a crux.
I kinda get that you want to express that the result of the coin flips are materially connected to the belief that the coin is fair. But the definition of the crux as stated doesn’t really express that kind of thing. Or in the alternative if the definition is supposed to cover that kind of case then the case where the belief is materially connected but doesn’t push the conclusion confidence in any direction ought to also be covered.
I also furthermore started to doubt whether there are more hidden conceptual disagreements. To me, “why” is about the past and the causal history. However, the “why” asked here seems to concern the future, as in “what keeps you believing that thing” as opposed to “what made you adopt that belief”. I also realised that I think it should be easy for participants to reveal if their beliefs formed under questionable circumstances (and that this is not trivial and not everybody has policies in this direction). Answering with the attitude of “what keeps you believing” makes one endorse those standards differently and might lead to endorsements that would not exist without asking.
Disambiguations are hard to make exhaustive. I guess the focus of that disambiguation would be the “why” part. It can just get confusing whether we are talking about the level of what the speaker intends or what is offered for the hearer to interpret. I realised that a big part of the crux approach can be that the YOU is emphasised (there is no crux for objective facts; it requires subjective judgement). It could also make sense to emphasise the WOULD (we are about to actually change opinions and not just wave flags for our sides) or BELIEF (we don’t care about professing or side-picking, just what you think is the case). But it muddies the waters that there are contexts where any of the crucial parts here are de-emphasised in perfectly sensible activities (YOU: why would the general population have sympathy for your views; WOULD: for AIs, which we don’t know whether they exist, their other attributes, or how they would take that; BELIEF: winning a debate where your position is picked at random at the start; WHY: decision by vote, possibly by irrational or populist voters).
I don’t understand the point you’re making in your first two paragraphs, could you explicitly relate it to finding cruxes and what you specifically disagree/agree with?
I did understand the Al part though! I never claimed that he was being purposely misleading, but I did want it to come across as “Al is giving a reason for his belief that only accounts for <10% of his confidence”, or “Al isn’t giving the main reason for his belief that accounts for the most probability”.
I agree it can account for a smaller probability, and this is mentioned in Ex. 1 as what’s subtly wrong with my phrasing.
“Do you believe in the Bible now?” asks for a positive belief in the Bible, while “You were wrong to dismiss the Bible as impossible” does not entail that you ought to believe in the Bible.
I guess there are multiple facets to the “process” point. The situation requires that familiarity with the claims is overwhelming, so that only the judgement outcome is a matter of opinion. There are multiple processes where our early opinion shapes how, and how much, information we glean from the object of interest. If you close a book early, you don’t know what the back half contains. If there is a trial and a witness is wrongly not heard, you cannot effectively cure this mistake without cross-examining the witness.
The second aspect of the “process” point is comparable cognitive work. Agents are not typically logically omniscient, and to the extent they have not processed part of logic space, they can’t really be blamed for it. Thus if we argue, we want to argue over judgement actually exercised, not judgement that could have been exercised. The situation can be read so that it implicitly assumes or endorses that Al should have formed an opinion on the plausibility of every single claim that the Bible contains (in order to be sure that the unexamined claims do not contain a point of higher disagreement).
Say that there is a freedom of speech case and the court must first decide whether the speech was First Amendment protected speech, and if it was, whether there is a compelling state interest. If the court finds that the speech was not protected, they don’t have to take an opinion on whether a compelling state interest exists; the question would be moot. If a higher court decides that actually the speech is protected, then the further question is no longer moot. If you asked whether not being a protected kind of speech is a true reason for the verdict, the logic presented here would ask whether the orders of the court would have been different if this facet were different. The answer is not “yes, the court would have provided protection” but rather “it depends on further facts”, and it is not the case that it would be up to chance. But just because the decision doesn’t flip on that fact doesn’t make it an untrue reason (and it is in fact 100% dominant in that it doesn’t “share” the weight with other factors).
If Al was following a decision procedure like
Then if he makes a judgement error in evaluating a text unit’s plausibility, the correct cure would be to unmoot the rest of the sentences rather than arrive at the opposite conclusion (intelligence is not reversed stupidity).
I think you’re saying: Someone might not know all the relevant information, or all the logical implications, and it might be good to encourage them to read more information or think through more implications(?)
Regardless, I think using the recursive finding cruxes algorithm given in this post solves any of these issues in a real life conversation. Are you claiming that it doesn’t?
Trying to follow the algorithm would lead to dismissal if the answer to 2 was negative which would often be destructive (or more constructive paths would be followed without trying to adhere to the algorithm)
Oftentimes just being curious about how the other thinks gets the ball rolling. The strategy outlined tries to avoid touching the foreign mental machinery as much as possible while still changing the stance. It can be a problem if you get bogged down in irrelevant curiosities. But often the sidetracks can be more valuable than the starting main objective.
The strategy wants the other to tell a story of how they would arrive at the new stance. But the inferential steps to get to that kind of story could be a lot. It works well when a point change in one belief has clearly seen consequences for other beliefs. But it becomes increasingly inapplicable when it is hard to imagine the consequences or when the consequences are hard to predict. The onus of doing the cognitive work and adding details on adopting new stances should be on those who suggest them. Doing work only on the condition that it can be guaranteed beforehand that it will lead to progress makes people keep their minds far away from fields where guarantees can’t be given.
Thanks for trying to repair communications and confirming how much sense I am making
Of course, I’ve really appreciated your input.
I like using this formula as a guideline for introspection, and the overall purpose is understanding the other person (which is related to curiosity, but not my purpose).
An answer after step 2 of “I would still be just as confident” helps focus the conversation on actual cruxes. However, I did have a guy, having already understood I was asking for cruxes, say that the reason was a crux, but it wasn’t the complete reason (“yes, if God didn’t give grace I wouldn’t believe in him at all, but the Bible and 2000 years of history are also important”).
Maybe if I was talking to someone else, they wouldn’t be able to say the extra reasons, being more timid or less introspective. But I’m pretty good at noticing when someone doesn’t react like “Oh, I’ve been 100% convinced and nothing is wrong with this logic”. This skill is very useful, and isn’t mentioned in the post.
The best use of this method is definitely drawing the picture and using it to keep track of all the reasons and the reasons for those reasons. It makes it so much easier for both of us to stay on track and remember what was said.
Okay, so those are the benefits and caveats of the method, though I'm confused by your "the strategy wants the other to tell a story on how they'd arrive at a new stance".
I don't understand this. If I believe in ghosts and you use this method, the story would be how I would arrive at not believing in ghosts? Like just the negation of the original belief, nothing else new, right?
If so, then I don't think that story is very hard to tell if, after introspecting using this method, I figure out that my reasons for believing in ghosts are flawed.
But maybe if the belief is very important, like a religious one, properly setting someone's expectations would be good. Like I might need to tell them "yes, you can still be a good person, be happy, have great friends, find love, etc." if they change their belief.
Was that your point?
You need to answer the question before introspection, so you don't have time to doubt your stance. You would need to assume or guess that the other implications would not be so out of whack as to make it implausible or impossible to adopt the new stance. If I declare that there is no prospect of my position moving, the matter is declared moot and we don't discuss it.
I think there can be a big gap between the participants in how embedded a proposition is. Somebody who doesn't believe in ghosts can treat it like a stand-alone fact. But somebody who does believe will (might) have it entangled with other beliefs. This effect is more pronounced the less anticipated the question is and the deeper it cuts. A ghost belief can be entangled with memories of fear of death. Those associations can be hard to articulate, yet they can have real effects on the positions held.
It is ambiguous what you mean by "telling them". Offering reassurances without reasons would amount to a kind of "we are just separately doing intellectual stuff, there won't be any discussion-breaking forces invoked". The other option would be to argue that belief in the important things can be justified even after changing the stance. That kind of guarantee can probably fail. One could argue that the other could just adopt your belief system verbatim and be at least as prosperous as you are. But that would come with having to adopt your positions on everything. If holding a different opinion in some other field creates cognitive dissonance with the new stance, that could be a psychological problem they would have to deal with that you do not. That is, there is a chance of a legitimate crisis of worldview after the discussion.
I guess the contrast in my mind is that argumentation takes the form of very small steps that are very well founded, where all doubt is resolved as soon as there is the smallest hint of it. In a mathematical proof, as you follow along, you should be convinced that each line is warranted by the previous line. Sometimes when somebody assumes a lot of mathematical competency they use fewer intermediate steps. Then you can say "I don't see how that follows from that" and the other person can expand the one step into multiple smaller steps. The question here, "Would you adopt X if Y were not the case?", does not seem particularly amenable to going into closer detail about how it is answered in the positive or negative. But I think there are a lot of hard/laborious cases where a lot of judgement needs to happen, and it happens not in the interactive space but hidden in the private space of one person's head.
I don't feel that was my point, but I think it cuts close to the same space. I think the method gets lucky in that treating the question as a short story or an isolated fact is commonly easy, but it has no guarantees that it will be easy and no tools to tackle things when they are hard. Does the method offer any advice when there is no quick or clear "yes" or "no" answer to "would you believe X if Y were the case"?