Honest rational agents should never agree to disagree.
There are no such agents. On many topics, NOBODY, including you and including me, is sufficiently honest or sufficiently rational for Aumann’s theorem to apply.
The other problem with Aumann’s agreement theorem is that it’s often applied too broadly. It should say, “Honest rational agents should never agree to disagree on matters of fact.” What to do about those facts is definitely up for disagreement, insofar as two honest, rational agents may value wildly different things.
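The fact/value split can be made concrete with a toy Bayesian calculation (a minimal sketch, not the theorem’s actual common-knowledge machinery; all the names and numbers here are illustrative assumptions): two agents share a common prior over a binary fact, each observes an independent noisy signal, and once they honestly pool their evidence they necessarily land on the same posterior about the fact.

```python
# Toy sketch: two agents with a common prior over a binary fact H,
# each observing an independent noisy signal. Once both signals are
# honestly shared, both agents compute the identical posterior.
# (Simplification: Aumann's theorem only requires the posteriors to be
# common knowledge, not that the raw evidence itself be exchanged.)

def posterior(prior, signals, accuracy):
    """P(H | signals), where each signal reports H correctly with prob `accuracy`."""
    like_h = 1.0      # likelihood of the signals if H is true
    like_not_h = 1.0  # likelihood of the signals if H is false
    for s in signals:
        like_h *= accuracy if s else (1 - accuracy)
        like_not_h *= (1 - accuracy) if s else accuracy
    return like_h * prior / (like_h * prior + like_not_h * (1 - prior))

common_prior = 0.5
accuracy = 0.8
signal_a = True  # agent A's private evidence points toward H
signal_b = True  # agent B's private evidence also points toward H

# After honest disclosure, both agents condition on the pooled evidence:
p_a = posterior(common_prior, [signal_a, signal_b], accuracy)
p_b = posterior(common_prior, [signal_b, signal_a], accuracy)
assert p_a == p_b  # agreement on the matter of fact
print(round(p_a, 4))  # → 0.9412
```

Pooling raw evidence is stronger than what the theorem requires, but it makes the point sharp: the agents must end up with the same probability for the fact, yet nothing in the calculation forces them to want the same thing done about it.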
An earlier draft actually specified “… on questions of fact”, but I deleted that phrase because I didn’t think it was making the exposition stronger. (Omit needless words!) People who understand the fact/value distinction, instrumental goals, &c. usually don’t have trouble “relativizing” policy beliefs. (Even if I don’t want to maximize paperclips, I can still have a lawful discussion about what the paperclip-maximizing thing to do would be.)
I understand the point about omitting needless words, but I think the words are needed in this case. There’s a danger of Aumann’s agreement theorem being misused to prolong disagreements when those disagreements are about values and future actions rather than about the present state of the world. This is especially true in “hot” topics (like politics, religion, etc.) where matters of fact and matters of value are closely intertwined.
A slightly different frame on this (I think less pessimistic) is something like “honesty hasn’t been invented yet”. Or, rather, explicit knowledge of how to implement honesty does not exist in a way that can be easily transferred. (Tacit knowledge of it may exist, but it’s hard to validate and share.)
(I’m referring, I think, to the same sort of honesty Zack is getting at here, although there are aspects of it relevant to doublecrux that didn’t come up in that previous blogpost.)
I think, obviously, that there have been massive strides (across human history, and yes on LW in particular) in how to implement “Idealized Honesty” (for lack of a better term for now). So, the problem seems pretty tractable. But it does not feel like a thing within spitting distance.
Go one step further.
The kind of honesty Zack is talking about is desirable, but it’s unclear whether it’s sufficient for Aumann’s theorem to apply.