Set your baseline to always assume you are rationalizing, and then have to prove that you aren’t, rather than vice versa.
Something I do, that I’m surprised I don’t see mentioned here, is to just assume that any point I am trying to make, or anything I think, is a rationalization (which is pretty likely).
So instead of having to think “Am I rationalizing?” (which can be hard to figure out), I change my baseline, to ALWAYS assume “I am probably rationalizing. How am I rationalizing?” and go from there. Sort of a quick run-through of what biases and semi-hidden desires could be influencing my decisions or statements at any given time. From there I can either accept or reject these “rationalizations”.
This also ends up leading to many disclaimers in conversations, as mentioned in the OP. (e.g. “Well, I can’t know for sure what I HAD thought, because by now hindsight bias has taken hold....” or “Well, I’m completely anchored off your estimate now, but...”) I see “Conversation Disclaimers” themselves as a major skill. Maybe an exercise could be made out of that?
Quick idea: Have people pair up and hold a conversation about a debatable subject. Every sentence they say has to be prefaced with a disclaimer.
(Note: This is my immediate reaction to this post. I’ll give it more thought later.)
Exercise Idea- Rationalization Listing
Ask leading questions, or have everyone come up with a list of premises they think are true. (Examples: It’s good to be vegetarian/paleo/omni; Political Belief System X is strongest; I enjoy Activity B; Continuing grad school is a good/bad idea; etc.)
Once they have developed this list, have them list as many reasons as they can that they are actually just rationalizing (i.e. have them ASSUME they are rationalizing, and then have them list ways this could be possible). The person who comes up with the MOST wins.
Example (this is a real one for me):
Premise: It’s good to be vegetarian.
Think: “I am probably rationalizing. How? Why? Opposing evidence?”
Possible rationalizations:
1) I’ve been a vegetarian for a long time, so consistency bias wants me to continue to be one.
2) Admitting I’m wrong would mean that I’ve been wrong for the past 8 years.
3) Being a vegetarian allows me to hold the moral high ground, regardless of the actual morality, or lack thereof, of the choice.
4) Deciding that vegetarianism is NOT a moral choice would mean that I have no reason to remain one. It’s easier for me at this point to maintain the status quo than to switch my diet back to omnivorous.
5) Social pressure: people I like and respect think that vegetarianism is a good choice, even if they themselves aren’t vegetarian.
Arguments against:
1) Animals’ lives in nature are also completely horrendous. Factory-farmed animals may actually be better off than the average animal in nature.
2) Animals are not sentient enough that I should care overly much about their well-being.
3) Meat is yummy (or at least it was at one point... now I don’t much like the smell anymore).
At the end, they can pair up and try to add MORE rationalization possibilities to their partner’s list.
I’m concerned that this technique will just train people to come up with lots of bad reasons to do things, thus making them better at rationalization. I feel like we would be better off encouraging people to come up with good reasons, and then perhaps comparing them to bad reasons.