To me, this comment basically concedes that you’re wrong but attempts to disguise it in a face-saving way.
It seems that you are trying to score points for winning the debate. If your interlocutor indeed concedes something in a face-saving way, forcing him to admit it is useless from the truth-seeking point of view.
prase, I really sympathize with that comment. I will be the first to admit that forcing people to concede their incorrectness is typically not the best way of getting them to agree on the truth. See for example this comment.
BUT! On this site we sort of have TWO goals when we argue, truth-seeking and meta-truth-seeking. Yes, we are trying to get closer to the truth on particular topics. But we’re also trying to make ourselves better at arguing and reasoning in general. We are trying to step back and notice what we’re doing, and correct flaws when they are exposed to our scrutiny.
If you look back over this debate, you will see me at several points deliberately stepping back and trying to be extremely clear about what I think is transpiring in the debate itself. I think that’s worth doing, on LessWrong.
To defend the particular sentence you quote: I know that when I was younger, it was entirely possible for me to “escape” from a debate in a face-saving way without realizing I had actually been wrong. I’m sure this still happens from time to time...and I want to know if it’s happening! I hope that LWers will point it out. On LW I think we ought to prioritize killing biases over saving faces.
The key question is: would you believe it if it were your opponent in a heated debate who told you?
I’d like to say yes, but I don’t really know. Am I way off-base here?
Probably the most realistic answer is that I would sometimes believe it, and sometimes not. If that happened too rarely, it wouldn’t be worth it. It’s too bad there aren’t more people weighing in on these comments, because I’d like to know how the community thinks my priorities should be set. In any case, you’ve been around longer, so you probably know better than I do.
I think we are speaking about this scenario:
Alice says: “X is true.”
Bob: “No, X is false, because of Z.”
Alice: “But Z is irrelevant with respect to X’, which is what I actually mean.”
Now, Bob agrees with X’. What will Bob say?
1. “Fine, we agree after all.”
2. “Yes, but remember that X is problematic and not entirely equivalent to X’.”
3. “You should openly admit that you were wrong with X.”
If I were in the place of Alice, (1) would cause me to abandon X and believe X’ instead. For some time I would insist that the two are equivalent, or that my saying X was only a poor formulation on my part and that I have always believed X’. Later, once I stopped identifying so strongly with my past self, I would admit (at least to myself) that I had changed my opinion. (2) would have similar effects, with more resentment directed at Bob. In case of (3) I would perhaps try to continue debating to win the lost points back by pointing out weak points in Bob’s opinions or debating style, and after calming down I would believe that Bob is a jerk and search hard for reasons why Z is a bad argument. Eventually I would (hopefully) move to X’ too (I don’t like to believe things which are easily attacked), but it would take longer. I would certainly not admit my error on the spot.
(The above is based on memories of my reactions in several past debates, especially before I read about cognitive biases and such.)
Now, to tell how generalisable our personal anecdotes are, we should organise an experiment. Do you have any idea how to do it easily?
I think the default is that people change specific opinions more in response to the tactful debate style you’re identifying, but are less likely to ever notice that they have in fact changed their opinion. I think explicitly noticing one’s wrongness on specific issues can be really beneficial in making a person less convinced of their rightness more globally, and therefore more willing to change their mind in general. My question is how we ought to balance these twin goals.
It would be much easier to get at the first effect by experiment than the second, since the latter is a much more long-term investment in noticing one’s biases more generally. And if we could get at both, we would still have to decide how much we care about one versus the other, on LW.
Personally I am becoming inclined to give up the second goal.
Since here on LW changing one’s opinion is considered a supreme virtue, I would even suspect that long-term users confabulate having changed their opinions when actually they haven’t. Anyway, a technique that might be useful is keeping a detailed diary of what one thinks and reviewing it after a few years (or, for that matter, looking at what one has written on the internet a few years ago). The downside is, of course, that writing beliefs down may make their holders even more entrenched.
Entirely plausible—cognitive dissonance, public commitment, backfire effect, etc. Do you think this possibility negates the value, or are there effective counter-measures?
I don’t think I have any idea how strong all the relevant effects and counter-measures are.
There’s a big difference between:
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will do my best to notice and acknowledge when I’m wrong”
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will upvote, praise, and otherwise reinforce such acknowledgements when I notice them”
and
“it’s best if we notice and acknowledge when we’re wrong, and therefore I will downvote, criticize, and otherwise punish failure to do so.”
True in the immediate sense, but I disagree with the global implication that we should encourage face-saving on LW, since doing so will IMO penalize truth-seeking in general. Scoring points for winning the debate is a valid and important mechanism for reinforcing behaviors that lead to debate-winning, and should be allowed in situations where debate-winning correlates with truth-establishment in general, not just for the arguing parties.
This is also true in the immediate sense, but it seems to imply that debate-winning behaviours are a net positive with respect to truth-seeking in at least some possible (non-negligibly frequent) circumstances. I find that claim dubious. Can you specify in what circumstances the debate-winning argumentation style is superior to leaving a line of retreat?
Line of retreat is superior for convincing your debate partner, but debate-winning behavior may be superior for convincing uninvolved readers, because it encourages verbal admissions of fault, which make it easier for a reader to discern the prevailing truth.
That isn’t actually the reason. The reason debate-winning behavior is superior for convincing bystanders is that it appeals to their natural desire to side with the status-gaining triumphant party. As such, it is a species of Dark Art.
This is what I am not sure about. I know that I will be more likely to admit being wrong when I have a chance to do it in a face-saving way (this includes simply saying “you are right” when I am doing it voluntarily and the opponent has debated in a civilised way up to that point) than when my interlocutor tries to force me to do it. I know this, but still can’t easily get rid of that bias.
There are several outcomes of a debate where one party is right and the other is wrong:
1. The wrong side admit their wrongness.
2. The wrong side don’t want to admit their wrongness but realise that they have no good arguments, and drop out of the debate.
3. The wrong side don’t want to admit their wrongness and continue debating in the hope of defeating the opponent or at least achieving an honourable draw.
4. The wrong side don’t even realise their wrongness.
The exact flavour of debate-winning behaviour I have criticised makes 2 difficult or impossible, consequently increasing the probabilities of 1, 3 and 4. Outcome 1 is superior to 2 from almost any point of view, but 2 is similarly superior to 3 and 4, and it is far from clear whether the probability of 1 increases more than the probabilities of 3 and 4 combined when 2 ceases to be an option, or whether it increases at all.
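To make that trade-off concrete, here is a toy numerical sketch in Python. Every utility and probability in it is invented purely for illustration (only the ranking of the four outcomes above is assumed); the point is just that banning outcome 2 helps or hurts depending entirely on where its probability mass migrates.

```python
# Toy model of the trade-off above: four outcomes of a debate in which
# one side is wrong. All numbers are made up for illustration; only the
# ordering u[1] > u[2] > u[3] >= u[4] mirrors the list in the comment.

u = {1: 1.0,   # wrong side openly admits their wrongness
     2: 0.6,   # wrong side silently drops out of the debate
     3: 0.2,   # wrong side keeps fighting for a draw
     4: 0.0}   # wrong side never realises they were wrong

# With a face-saving exit available (outcome 2 possible):
p_retreat = {1: 0.15, 2: 0.40, 3: 0.25, 4: 0.20}

# Debate-winning style removes outcome 2; its mass must go somewhere.
# Optimistic redistribution: most of it becomes open admissions.
p_optimistic = {1: 0.45, 2: 0.00, 3: 0.30, 4: 0.25}
# Pessimistic redistribution: most of it becomes digging in.
p_pessimistic = {1: 0.20, 2: 0.00, 3: 0.50, 4: 0.30}

def expected_value(p):
    """Expected truth-seeking value of a debate under distribution p."""
    return sum(p[k] * u[k] for k in u)

for name, p in [("face-saving allowed   ", p_retreat),
                ("no retreat, optimistic ", p_optimistic),
                ("no retreat, pessimistic", p_pessimistic)]:
    print(f"{name}: {expected_value(p):.2f}")
```

Under these made-up numbers the optimistic split comes out ahead (0.51 against 0.44 with retreat allowed) and the pessimistic split comes out behind (0.30), which is exactly the ambiguity claimed above.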
You left off all the cases where the right side admits their wrongness!
Or where both sides admit their wrongness and switch their opinions, or where a third side intervenes and bans them both for trolling. Next time I’ll try to compose a more exhaustive list.
Don’t forget the case where the two parties are talking at cross purposes (e.g. Alice means that a tree falling in a forest with no one around generates no auditory sensations, while Bob means that it does generate acoustic waves) and neither of them realizes it; it doesn’t even occur to either that the other might mean something else by “sound”. (I’m under the impression that this is relatively rare on LW, but it constitutes a sizeable fraction of all the arguments I hear elsewhere, both online and in person.)
Well reasoned.