Ordinary disagreements persist after hearing others’ estimates. A and B may start out asserting “50” and “10”, and then argue their way to “25” and “12”, then “23” and “17”. But if you want each estimate to be as accurate as possible, this is silly behavior: if A can predict that his estimate will go down over time (as he integrates more of B’s evidence), he can also predict that his current estimate is too high, and so he can improve his accuracy by lowering his estimate right now. The two parties should be as likely to overshoot as to undershoot in their disagreements, e.g.:
A: 50; B: 10
A: 18; B: 22
A: 21; B: 21.
So next time you’re in a dispute, try applying Principle 3: ask what an outside observer would say about the situation. If Alfred and Betty both apply this principle, they’ll each ask: “What would an outside observer guess about Lake L, given that Betty has studied geography and said ‘10’, while Alfred said ‘50’?” And, thus viewing the situation from the (same) outside, Betty and Alfred will both weigh Betty’s evidence about equally. Alfred may underweight Betty’s impression (e.g., because he doesn’t realize she wrote her thesis on Lake L), but he may equally overweight Betty’s opinion (e.g., because he doesn’t realize that she’s never heard of Lake L either). If he could predict that he was over- or under-weighting her opinion, he’d quit doing it.
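The outside-observer combination can be sketched numerically. This is an illustration under assumptions I’m supplying (each person’s guess is treated as an independent Gaussian observation, with variances reflecting their expertise; the specific numbers are made up, not from the exchange):

```python
# Sketch: how an outside observer might pool two noisy estimates.
# Assumes each estimate is an independent Gaussian observation of the
# truth with a known error variance; variances here are illustrative.

def pooled_estimate(estimates, variances):
    """Precision-weighted average: the posterior mean under a flat prior
    when each estimate is an independent Gaussian observation."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    return sum(e * p for e, p in zip(estimates, precisions)) / total

# Alfred knows little geography (high error variance);
# Betty wrote a thesis on the topic (low error variance).
alfred, betty = 50.0, 10.0
combined = pooled_estimate([alfred, betty], [100.0, 25.0])
print(combined)  # ~18: the pooled view sits well below the midpoint
```

With these made-up variances the pooled answer lands near 18, which is why a round like “A: 18; B: 22” (overshooting past B) is just as plausible as a cautious step toward the midpoint.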
More precisely: if you and your interlocutor can predict your direction of disagreement, at least one of you is forming needlessly inaccurate estimates.
Before I read your reply: I expect Alfred to lower his estimate a lot, and Betty to raise hers a little. I expect Betty’s estimate to still be lower than Alfred’s, though the size of these effects depends on how much more geography Betty knows than Alfred.
After reading your reply, I think you’re right about convergence, and definitely right about driving your answer towards what you think is correct as fast as possible rather than holding back for fear of seeming to give in.
It’s an interesting problem, and you’re not doing it justice.
A and B have a prior based on certain evidence. Their first guess conveys only the mean of that prior. You also posit that they have a shared belief about the (expected) amount of evidence behind their prior.
To update at each iteration, they need to infer what evidence about the world is behind the exchange of guesses so far.
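This inference step can be made concrete in a toy model. The setup below is my own simplifying assumption, not the post’s: a shared Gaussian prior, one private Gaussian signal per person, and commonly known noise variances. In this special case a stated posterior mean pins down the private signal exactly, so each side can reconstruct the other’s evidence and agree after a single exchange:

```python
# Sketch (assumptions mine): normal-normal model where A and B share a
# common prior N(mu0, v0) and each privately observes one Gaussian
# signal with commonly known noise variance. A guess (posterior mean)
# reveals the underlying signal exactly, so one round suffices.

def posterior_mean(mu0, v0, signals, noise_vars):
    """Posterior mean of a Gaussian prior after independent Gaussian signals."""
    prec = 1.0 / v0 + sum(1.0 / nv for nv in noise_vars)
    num = mu0 / v0 + sum(s / nv for s, nv in zip(signals, noise_vars))
    return num / prec

def infer_signal(mu0, v0, guess, noise_var):
    """Invert a one-signal posterior mean back to the signal it implies."""
    prec = 1.0 / v0 + 1.0 / noise_var
    return (guess * prec - mu0 / v0) * noise_var

mu0, v0 = 30.0, 400.0      # shared prior (illustrative numbers)
sig_a, nv_a = 60.0, 100.0  # A's private signal and its noise variance
sig_b, nv_b = 5.0, 25.0    # B's private signal and its noise variance

guess_a = posterior_mean(mu0, v0, [sig_a], [nv_a])
guess_b = posterior_mean(mu0, v0, [sig_b], [nv_b])

# Each side inverts the other's guess, then pools both signals identically.
a_view = posterior_mean(mu0, v0,
                        [sig_a, infer_signal(mu0, v0, guess_b, nv_b)],
                        [nv_a, nv_b])
b_view = posterior_mean(mu0, v0,
                        [infer_signal(mu0, v0, guess_a, nv_a), sig_b],
                        [nv_a, nv_b])
print(a_view, b_view)  # identical: full agreement after one round
```

In richer models a guess only partially identifies the evidence behind it, which is why real exchanges take several rounds, and why (per the post) the direction of any remaining movement should be unpredictable.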
I don’t agree with anything you’ve claimed about this scenario. I’ll grant you any simplifying assumptions you need to prove it, but let’s be clear about what those assumptions are.
If they’re only similarly rational rather than perfectly rational, they’ll probably both be biased toward their own estimates. It also depends on common-knowledge assumptions. As far as I know, two people can both be perfectly rational while each thinks the other is irrational, or while each thinks the other is rational but believes that the other considers him irrational and therefore won’t update, and so they never reach an equilibrium. So I would disagree with your statement that:
if you and your interlocutor can predict your direction of disagreement, at least one of you is forming needlessly inaccurate estimates
In general, the insights needed to answer the questions at the end of the post go beyond what one can learn from the ultra-simple “everyone can see the same evidence” example at the start of the post, I think.
Re: Problem 4: Roughly speaking: yes.