Traditional Rationalists can agree to disagree. Traditional Rationality doesn’t have the ideal that thinking is an exact art in which there is only one correct probability estimate given the evidence.
This is also true of Bayesians. The probability estimate given the evidence is a property of the map, not the territory (hence “estimate”). A single correct posterior would imply a single correct prior. What is this “Ultimate Prior”? There isn’t one.
Possibly, you meant that there’s one correct posterior given the evidence and the prior. That’s correct, but it doesn’t prevent Bayesians from disagreeing, because they do have different priors.
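As a minimal sketch of that point (all numbers invented for illustration): two Bayesians apply the very same Bayes update to the very same evidence, and still end up with different posteriors, purely because they started from different priors.

```python
# Two Bayesians update on the same evidence E for a binary hypothesis H.
# The likelihoods P(E|H) = 0.8 and P(E|~H) = 0.3 are shared; only the
# priors differ. (Illustrative numbers, not taken from the discussion.)

def posterior(prior, like_h=0.8, like_not_h=0.3):
    """P(H|E) by Bayes' rule."""
    evidence = prior * like_h + (1 - prior) * like_not_h
    return prior * like_h / evidence

p_a = posterior(0.5)   # agent A's prior: P(H) = 0.5
p_b = posterior(0.1)   # agent B's prior: P(H) = 0.1
print(round(p_a, 3), round(p_b, 3))  # -> 0.727 0.229
```

Same update rule, same evidence, different answers: the disagreement lives entirely in the priors.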
Alternatively, one can point out that the “given evidence” operator is, in expectation, always non-expansive, and contractive when the priors disagree. This means that the beliefs of Perfect Bayesians with shared observations converge (with probability 1) to a single posterior. But this convergence is too slow for humans. Agreeing to disagree is sometimes our only option.
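A toy version of that convergence, under assumed numbers (the 0.7/0.3 biases and the two priors are invented): agents with strongly disagreeing priors over a coin’s bias observe the same 100 flips. Since a Bayesian posterior here depends only on the flip counts, the result is deterministic, and the shared evidence swamps the initial disagreement.

```python
import math

# Two hypotheses about a coin: H says bias 0.7, the alternative says 0.3.
# Both agents observe the same 100 flips (70 heads, 30 tails; for Bayes
# only the counts matter, not the order).

def posterior(prior, heads, tails, bias_h=0.7, bias_alt=0.3):
    # Work in log-odds so 100 multiplicative updates stay numerically stable.
    log_odds = math.log(prior / (1 - prior))
    log_odds += heads * math.log(bias_h / bias_alt)
    log_odds += tails * math.log((1 - bias_h) / (1 - bias_alt))
    return 1 / (1 + math.exp(-log_odds))

p_a = posterior(0.90, 70, 30)   # agent A's prior: P(H) = 0.90
p_b = posterior(0.05, 70, 30)   # agent B's prior: P(H) = 0.05
print(abs(p_a - p_b))           # tiny gap: the evidence swamps both priors
```

With only a handful of shared flips instead of 100, the same computation leaves a visible gap between the two posteriors, which is the “too slow for humans” problem above.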
Incidentally, it’s Traditional Rationalists who believed they should never agree to disagree: the set of hypotheses which aren’t “ruled out” by confirmed and repeatable experiments, they argued, is a property of the territory.
http://wiki.lesswrong.com/wiki/Aumann%27s_agreement_theorem
I’m aware of this result. It specifically requires the two Bayesians to have the same prior. My point is exactly that this doesn’t have to be the case, and in reality is sometimes not the case.
EDIT: The original paper by Aumann references a paper by Harsanyi which supposedly addresses my point. Aumann himself is careful in interpreting his result, in a way that supports my point (evidently, there are people who disagree despite trusting each other). I’ll report my understanding of the Harsanyi paper here once I get past the paywall.
The Harsanyi paper is very enlightening, but he’s not really arguing that people have shared priors. Rather, he’s making the following points (section 14):
1. It is worthwhile for an agent to analyze the game as if all agents have the same prior, because it simplifies the analysis. In particular, the game (from that agent’s point of view) then becomes equivalent to a Bayesian complete-information game with private observations.
2. The same-prior assumption is less restrictive than it may seem, because agents can still have private observations.
3. A wide family of hypothetical scenarios can be analyzed as if all agents have the same prior. Other scenarios can be easily approximated by a member of this family (though the quality of the approximation is not studied).
All of this is mathematically very pleasing, but it doesn’t change my point. That’s mainly because in the context of the Harsanyi paper “prior” means before any observation, whereas in the context of this post it means before the shared observation (but possibly after private observations).
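A small illustration of that terminological gap, with made-up numbers (reusing a coin with hypothesized bias 0.7 versus 0.3): two agents share the same prior in Harsanyi’s sense, yet their private observations leave them with different “priors” in this post’s sense before any shared evidence arrives.

```python
# Both agents start from the same ur-prior P(H) = 0.5 -- Harsanyi's
# "prior", held before ANY observation. H says the coin's bias is 0.7,
# the alternative says 0.3. (Illustrative numbers only.)

def update(p_h, heads, bias_h=0.7, bias_alt=0.3):
    like_h = bias_h if heads else 1 - bias_h
    like_alt = bias_alt if heads else 1 - bias_alt
    return p_h * like_h / (p_h * like_h + (1 - p_h) * like_alt)

p_a = p_b = 0.5                      # shared ur-prior
for heads in [True, True, True]:     # A privately sees three heads
    p_a = update(p_a, heads)
for heads in [False, False, False]:  # B privately sees three tails
    p_b = update(p_b, heads)

# These are the "priors" in this post's sense: the beliefs held just
# before the shared observation -- and they already disagree.
print(round(p_a, 3), round(p_b, 3))  # -> 0.927 0.073
```

So a shared-prior assumption in Harsanyi’s sense is perfectly compatible with the disagreeing “priors” that this post is about.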