Joshua, the thought had occurred to me, but, with all due respect to universities, that’s the same sort of training-in-passing that you get from reading “Surely You’re Joking, Mr. Feynman” as a kid. It’s not systematic, and it’s not grounded in the recent advances in cognitive psychology or probability theory. If we continue with the muscle metaphor, then I would say that—judged by the sole criterion of improving personal skills of rationality—studying physics is the equivalent of playing tennis. If you actually want to do physics, of course, that’s a whole separate issue. But if you want to study rationality, you really have to study rationality; just as, if you wanted to study physics, it wouldn’t do to go off and study psychology instead.
Eliezer Yudkowsky
An Intuitive Explanation of Bayes’s Theorem
A Technical Explanation of Technical Explanation
Twelve Virtues of Rationality
The Martial Art of Rationality
Why Truth?
What’s a Bias?
The Proper Use of Humility
Robin, I’m not sure why you think the difference between “abstract” (?) and non-abstract beliefs is germane to the proper use of humility. It does seem germane to Dennett’s distinction between professing and believing, but that is not the main topic of the essay.
Either I’m missing something, or all of these comments pertain to the general question of why one wants to be rational, with no specialization for the particular question of how to use humility in the service of rationality (assuming from the start that you want to be rational, on which the essay is obviously premised).
The Modesty Argument
Hal, I changed the lead to say “When two or more human beings have common knowledge that they disagree”, which covers your counterexample.
pdf23ds, the problem is how to decide when the person you are conversing with is at least as rational as yourself. What if you disagree about that? Then you have a disagreement about a new variable, your respective degrees of rationality. Do you both believe yourself to be more meta-rational than the other? And so on. See Hanson and Cowen’s “Are Disagreements Honest?”, http://hanson.gmu.edu/deceive.pdf.
Hal, that’s why I specified human beings. Human beings often find themselves with common knowledge that they disagree about a question of fact. And indeed, genuine Bayesians would not find themselves in such a pickle to begin with, which is why I question that we can clean up the mess by imitating the surface features of Bayesians (mutual agreement) while departing from their causal mechanisms (instituting an explicit internal drive to agreement, which is not present in ideal Bayesians).
The reason my addition fixes the problem is that in your scenario, the disagreement only holds while the two observers do not have common knowledge of their own probability estimates—this can easily happen to Bayesians; all they need to do is observe a piece of evidence they haven’t had the opportunity to communicate. So they disagree at first, but only while they don’t have common knowledge.
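To make that concrete, here is a minimal sketch in Python; the scenario and all the numbers are invented for illustration, and it assumes the two observations are conditionally independent given the hypothesis. Two Bayesians share a prior, each privately observes one piece of evidence, and they disagree exactly until the evidence is communicated:

    # Toy illustration (invented numbers): two Bayesians with a shared
    # prior disagree only while their private evidence is uncommunicated.
    # Assumes the two observations are conditionally independent given H.

    def posterior_odds(prior_odds, *likelihood_ratios):
        """Odds form of Bayes's Theorem: multiply prior odds by each LR."""
        odds = prior_odds
        for lr in likelihood_ratios:
            odds *= lr
        return odds

    def odds_to_prob(odds):
        return odds / (1 + odds)

    prior = 1.0    # shared prior: 1:1 odds on hypothesis H
    lr_a = 4.0     # observer A privately sees evidence favoring H at 4:1
    lr_b = 0.5     # observer B privately sees evidence favoring ~H at 2:1

    # Before communicating, their posteriors differ:
    print(odds_to_prob(posterior_odds(prior, lr_a)))  # 0.8
    print(odds_to_prob(posterior_odds(prior, lr_b)))  # ~0.333

    # Once each reports their likelihood ratio, both condition on the
    # pooled evidence and arrive at the same posterior:
    print(odds_to_prob(posterior_odds(prior, lr_a, lr_b)))  # ~0.667

Notice that no explicit drive toward agreement appears anywhere in the sketch; the agreement falls out of both parties conditioning on the same total evidence.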
Hal, I’m not really the best person to explain the Modesty Argument because I don’t believe in it! You should ask a theory’s advocates, not its detractors, to explain it. You, yourself, have advocated that people should agree to agree—how do you think that people should go about it? If your preferred procedure differs from the Modesty Argument as I’ve presented it, it probably means that I got it wrong.
What I mean by the Modesty Argument is: You sit down at a table with someone else who disagrees with you, you each present your first-order arguments about the immediate issue—on the object level, as it were—and then you discover that you still seem to have a disagreement. Then at this point (I consider the Modesty Argument to say), you should consider as evidence the second-order, meta-level fact that the other person isn’t persuaded, and you should take that evidence into account by adjusting your estimate in his direction. And he should do likewise. Keep doing that until you agree.
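As a toy rendering of that procedure (the adjustment weight and the stopping threshold below are my own illustrative assumptions, not numbers that Modesty advocates have pinned down):

    # Toy sketch of the Modesty procedure as described above. The
    # adjustment weight and the convergence threshold are illustrative
    # assumptions, not anything Modesty advocates have specified.

    def modest_update(p_mine, p_theirs, weight=0.3):
        """Shift my estimate toward the other person's, treating their
        persisting disagreement as evidence in its own right."""
        return p_mine + weight * (p_theirs - p_mine)

    def run_modesty(p_a, p_b, tolerance=1e-6):
        """Both parties keep adjusting toward each other until they agree."""
        while abs(p_a - p_b) > tolerance:
            p_a, p_b = modest_update(p_a, p_b), modest_update(p_b, p_a)
        return p_a, p_b

    print(run_modesty(0.9, 0.1))  # both converge to ~0.5

Note that with two symmetric disputants the procedure drags both to the midpoint, regardless of which one actually occupies the better epistemic position.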
As to how this fits into Aumann’s original theorem—I’m the wrong person to ask about that, because I don’t think it does fit! But in terms of real-world procedure, I think that’s what Modesty advocates are advocating, more or less. When we’re critiquing van Inwagen for failing to agree with Lewis, this is the sort of thing we think he ought to do instead—right?
There are times when I’m happy enough to follow Modest procedure, but the Verizon case, and the creationist case, aren’t on my list. I exercise my individual discretion, and judge based on particular cases. I feel free to not regard a creationist’s beliefs as evidence, despite the apparent symmetry of my belief that he’s the fool and his belief that I’m the fool. Thus I don’t concede that the Modesty Argument holds in general, while Robin Hanson seems (in “Are Disagreements Honest?”) to hold that it should be universal.
Okay, so what are Robin and Hal advocating, procedurally speaking? Let’s hear it from them. I defined the Modesty Argument because I had to say what I thought I was arguing against, but, as I said, I’m not an advocate and therefore I’m not the first person to ask. Where do you think van Inwagen went wrong in disagreeing with Lewis—what choice did he make that he should not have made? What should he have done instead? The procedure I laid out looks to me like the obvious one—it’s the one I’d follow with a perceived equal. It’s in applying the Modest procedure to disputes about rationality or meta-rationality that I’m likely to start wondering if the other guy is in the same reference class. But if I’ve invented a strawman, I’m willing to hear about it—just tell me the non-strawman version.
“I don’t know.”
Hal, you have to bet at scalar odds. You’ve got to use a scalar quantity to weight the force of your subjective anticipations, and their associated utilities. Giving just the probability, just the betting odds, just the degree of subjective anticipation, does throw away information. More than one set of possible worlds, more than one set of admissible hypotheses, more than one sequence of observable evidence, can yield the final summarized judgment that a certain probability is 1⁄6.
The amount of previously observed evidence can determine how easy it is for additional evidence to shift our beliefs, which in turn determines the expected utility of looking for more information. I think this is what you’re looking for.
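For instance (a standard Beta-Bernoulli setup, used here purely for illustration; none of these numbers come from your comment): two belief states can both summarize to a probability of 1⁄6, yet encode very different amounts of prior evidence, and therefore respond very differently to the same new observation.

    # Two belief states that both summarize to p = 1/6 but rest on very
    # different amounts of evidence (illustrative Beta-Bernoulli setup).

    def beta_mean(a, b):
        """Mean of a Beta(a, b) distribution."""
        return a / (a + b)

    weak = (1, 5)        # roughly 6 observations' worth of evidence
    strong = (100, 500)  # roughly 600 observations' worth

    print(beta_mean(*weak), beta_mean(*strong))  # both ~0.1667

    # Observe one success and update each belief state:
    print(beta_mean(weak[0] + 1, weak[1]))      # ~0.286: shifts a lot
    print(beta_mean(strong[0] + 1, strong[1]))  # ~0.168: barely moves

Both states bet at the same 1:5 odds today, but the expected value of gathering further information differs sharply between them, which is the sense in which the bare 1⁄6 throws information away.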
But when you have to actually bet, you still bet at 1:5 odds. If that sounds strange to you, look up “ambiguity aversion”—considered a bias—as demonstrated at e.g. http://en.wikipedia.org/wiki/Ellsberg_paradox
PS: Personally I’d bet a lot lower than 1⁄6 on ancient Mars life. And Tom, you’re right that 0 is a safer estimate than 10, but so is 9, and I was assuming the tree was known to be an apple tree in bloom.
Robin, I agree that the main difficulty is figuring out how to pay off the bets, but it seems to me that—given such a measure—playing a prediction market around the measure makes the game more complex, and hopefully more of a lesson, and more socially involving and personally intriguing. In other words, it’s the difference between “Guess whether it will rain tomorrow?” and “Bob is smiling evilly; are you willing to bet $50 that his probability estimate of 36.3% is too low?” Or to look at it another way, fewer people would play poker if the whole theme was just “Estimate the probability that you can fill an inside straight.” I think Anissimov has a valid fun-amplifying suggestion here.