“Rational people can’t agree to disagree” is an oversimplification. Rational people can perfectly well reach a conclusion of the form: “Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest and well informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn’t be much more likely to leave us both right than to leave us both wrong. We choose, instead, to leave the matter unresolved until either it matters more or we see better prospects of resolving it.”
Imperfectly rational people who are aware of their imperfect rationality (note: this is in fact the nearest any of us actually come to being rational people) might also reasonably reach a conclusion of this form: “Perhaps clear enough thinking on both sides would suffice to let us resolve this. However, it’s apparent that at least one of us is currently sufficiently irrational about it that trying to reach agreement poses a real danger of spoiling the good relations we currently enjoy, and while clearly that irrationality is a bad thing it doesn’t seem likely that trying to resolve our current disagreement now is the best way to address it, so let’s leave it for now.”
I suspect (with no actual evidence) that when two reasonably rational people say they're agreeing to disagree, what they mean is often approximately one of the above or a combination thereof, and that they're often wise to "agree to disagree". The fact that there are theorems saying that two perfect rationalists who care about nothing more than getting the right answer to the question they're currently disputing won't "agree to disagree" seems to me to have little bearing on this.
Eliezer, if you’re reading this: You may remember that a while back on OB you and Robin Hanson discussed the prospects of rapidly improving artificial intelligence in the nearish future. By no means did you resolve your differences in that discussion. Would it be fair to characterize the way it ended as “agreeing to disagree”? From the outside, it sure looks like that’s what it amounted to, whatever you may or may not have said to one another about it. Perhaps you and/or Robin might say “Yeah, but the other guy isn’t really rational about this”. Could be, but if the level of joint rationality required for “can’t agree to disagree” is higher than that of {Eliezer,Robin} then it’s not clear how widely applicable the principle “rational people can’t agree to disagree” really is. (Note for the avoidance of doubt: The foregoing is not intended to imply that Eliezer and Robin are equally rational; I do not intend to make any further comment on my opinions, if any, on that matter.)
Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest and well informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn’t be much more likely to leave us both right than to leave us both wrong.
You say that as if resolving a disagreement means agreeing to both choose one side or the other. The most common result of cheaply resolving a disagreement is not “both right” or “both wrong”, but “both −3 decibels.”
No; in what I wrote “resolving a disagreement” means “agreeing to hold the same position, or something very close to it”.
Deciding “cheaply” that you’ll both set p=1/2 (note: I assume that’s what you mean by −3dB here, because the other interpretations I can think of don’t amount to “agreeing to disagree”) is no more rational than (even the least rational version of) “agreeing to disagree”.
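(For readers unfamiliar with the decibel convention: I take "−3dB" to mean measuring evidence as 10·log10 of the odds, so that a −3dB shift multiplies one's odds by 10^(−3/10) ≈ 0.5, i.e. roughly halves them, and p=1/2 sits at exactly 0dB. That reading is an assumption on my part; a quick sketch of the arithmetic under it:)

```python
import math

def odds(p):
    """Convert a probability to odds in favour."""
    return p / (1 - p)

def decibels(p):
    """Evidence for a hypothesis in decibels: 10 * log10(odds)."""
    return 10 * math.log10(odds(p))

def shift_db(p, db):
    """Probability after shifting p's odds by `db` decibels."""
    new_odds = odds(p) * 10 ** (db / 10)
    return new_odds / (1 + new_odds)

# p = 1/2 corresponds to 0 dB of evidence either way:
print(decibels(0.5))      # 0.0
# a -3 dB update roughly halves one's odds (4:1 becomes ~2:1):
print(shift_db(0.8, -3))  # ~0.667
```

On this reading, "both −3 decibels" only lands both parties at p=1/2 if they started at symmetric odds; otherwise it just moves each of them partway toward the other.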
If the evidence is very evenly balanced then of course you might end up doing that not-so-cheaply, but in such cases what more often happens is that you look at lots of evidence and see—or think you see—a gradual accumulation favouring one side.
Of course you could base your position purely on the number of people on each side of the issue, and then you might be able to reach p=1/2 (or something near it) cheaply and not entirely unprincipledly. Unfortunately, that procedure also tells you that Pr(Christianity) is somewhere around 1/4, a conclusion that I think most people here agree with me in regarding as silly. You can try to fix that by weighting people’s opinions according to how well they’re informed, how clever they are, how rational they are, etc. -- but then you once again have a lengthy, difficult and subjective task that you might reasonably worry will end up giving you a confident wrong answer.
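(To spell out the point about weighting, with made-up numbers throughout: the headcount prior is just a weighted fraction, and the answer it gives moves around a lot depending on the weights you choose -- which is exactly the subjective part. A sketch:)

```python
def headcount_prior(groups):
    """groups: list of (count, weight, believes) tuples.
    Returns the fraction of total opinion-weight on the 'believes' side."""
    total = sum(count * weight for count, weight, _ in groups)
    pro = sum(count * weight for count, weight, b in groups if b)
    return pro / total

# Unweighted: every person's opinion counts equally.
# (Population figures are rough illustrative guesses, not survey data.)
unweighted = headcount_prior([(1.9e9, 1.0, True), (5.1e9, 1.0, False)])

# Weighted: suppose, purely for illustration, that you judge the
# sceptical side twice as well-informed on average.
weighted = headcount_prior([(1.9e9, 1.0, True), (5.1e9, 2.0, False)])

print(round(unweighted, 2))  # ~0.27 -- the "silly" headcount answer
print(round(weighted, 2))    # ~0.16 -- different weights, different answer
```

The weights themselves are the lengthy, difficult and subjective part, which is the whole problem.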
I should perhaps clarify that what I mean by “wouldn’t be much more likely to leave us both right than to leave us both wrong” is: for each of the two people involved, who (at the outset) have quite different opinions, Pr(reach agreement on wrong answer | reach agreement) is quite high.
And, once again for the avoidance of doubt, I am not taking “reach agreement” to mean “reach agreement that one definite position or another is almost certainly right”. I just think that empirically, in practice, when people reach agreement with one another they more often do that than agree that Pr(each) ~= 1/2: I disagree with you about “the most common result” unless “cheaply” is taken in a sense that makes it irrelevant when discussing what rational people should do.