I think we are speaking about this scenario:
Alice says: “X is true.”
Bob: “No, X is false, because of Z.”
Alice: “But Z is irrelevant with respect to X’, which is what I actually mean.”
Now, Bob agrees with X’. What will Bob say?
(1) “Fine, we agree after all.”
(2) “Yes, but remember that X is problematic and not entirely equivalent to X’.”
(3) “You should openly admit that you were wrong with X.”
If I were in Alice’s place, (1) would cause me to abandon X and believe X’ instead. For some time I would deny that X and X’ aren’t equivalent, or tell myself that saying X was merely a poor formulation on my part and that I had always believed X’. Later, once I stopped identifying so strongly with my past self, I would admit (at least to myself) that I had changed my opinion. (2) would have similar effects, with more resentment directed at Bob. In the case of (3), I would perhaps keep debating to win the lost points back by pointing out weak spots in Bob’s opinions or debating style; after calming down, I would conclude that Bob is a jerk and search hard for reasons why Z is a bad argument. Eventually I would (hopefully) move to X’ too (I don’t like holding beliefs that are easily attacked), but it would take longer. I would certainly not admit my error on the spot.
(The above is based on memories of my reactions in several past debates, especially before I read about cognitive biases and such.)
Now, to tell how generalisable our personal anecdotes are, we should organise an experiment. Do you have any idea how to do that easily?
I think the default is that people change specific opinions more in response to the tactful debate style you’re identifying, but are less likely to ever notice that they have in fact changed their opinion. I think explicitly noticing one’s wrongness on specific issues can be really beneficial in making a person less convinced of their rightness more globally, and therefore more willing to change their mind in general. My question is how we ought to balance these twin goals.
It would be much easier to get at the first effect by experiment than the second, since the latter is a much more long-term investment in noticing one’s biases more generally. And if we could get at both, we would still have to decide how much we care about one versus the other, on LW.
Personally I am becoming inclined to give up the second goal.
Since here on LW changing one’s opinion is considered a supreme virtue, I would even suspect that long-term users confabulate having changed their opinions when they actually haven’t. Anyway, a technique that might be useful is keeping a detailed diary of what one thinks and reviewing it after a few years (or, for that matter, looking at what one wrote on the internet a few years ago). The downside is, of course, that writing beliefs down may make their holders even more entrenched.
Entirely plausible—cognitive dissonance, public commitment, backfire effect, etc. Do you think this possibility negates the value, or are there effective counter-measures?
I don’t think I have any idea how strong the relevant effects and counter-measures are.