Argument screens off authority, and I’m interested in hearing arguments. While that information should be incorporated into your prior, I don’t see why it’s worth mentioning as a counterargument. (To be sure, I’m not claiming that jacobjacob isn’t a good predictor in general.)
It’s reasonable to be unsure whether the “people don’t ship things” consideration is stronger than the “people are excited” consideration. If you knew that the person who deployed the “people don’t ship things” consideration was generally a better predictor (which you don’t quite know here, but let’s simplify a bit), then that would suggest that the “people don’t ship things” consideration is in fact stronger.
(Actually downvoted Daniel for reasons similar to what TurnTrout mentions. Aumannian updating is so boring, even though it’s profitable when you’re betting all-things-considered… I also did give arguments above, but people mostly made jokes about my punctuation! #grumpy )
This is a timeless part of the LessWrong experience, my friend.
Aumann updating involves trying to inhabit the inside perspective of somebody else and guess what they saw that made them believe what they do—hardly seems boring to me! Also the thing I was doing was ranking my friends at skills, which I think is one of the classic interesting things.
I’m associating it with doing exactly not that—just using outside variables like “what do they believe” and “how generally competent do I expect them to be”. (I often see people going “but this great forecaster said 70%” and updating marginally closer, without even trying to build a model of that forecaster’s inside view.)
Your version sounds fun.
I guess I’m really making a bid for ‘Aumanning’ to refer to the thing that Aumann’s agreement theorem describes, rather than just partially deferring to somebody else.