# The Mechanics of Disagreement

Two ideal Bayesians cannot have common knowledge of disagreement; this is a theorem. If two rationalist-wannabes have common knowledge of a disagreement between them, what could be going wrong?

The obvious interpretation of these theorems is that if you know that a cognitive machine is a rational processor of evidence, its beliefs become evidence themselves.

If you design an AI and the AI says “This fair coin came up heads with 80% probability”, then you know that the AI has accumulated evidence with a likelihood ratio of 4:1 favoring heads—because the AI only emits that statement under those circumstances.

It’s not a matter of charity; it’s just that this is how you think the other cognitive machine works.

And if you tell an ideal rationalist, “I think this fair coin came up heads with 80% probability”, and they reply, “I now think this fair coin came up heads with 25% probability”, and your sources of evidence are independent of each other, then you should accept this verdict, reasoning that (before you spoke) the other mind must have encountered evidence with a likelihood of 1:12 favoring tails.
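In odds form this arithmetic is easy to check: independent likelihood ratios simply multiply. A minimal sketch of the numbers above (the helper function is my own, not anything from the post):

```python
from fractions import Fraction

# Independent evidence combines by multiplying likelihood ratios
# (the odds form of Bayes' theorem). The coin is fair, so prior odds are 1:1.

def odds_to_prob(odds_for_heads):
    """Convert odds favoring heads (e.g. 4:1) into a probability."""
    return odds_for_heads / (1 + odds_for_heads)

your_evidence = Fraction(4, 1)    # "80% heads" corresponds to 4:1 odds
their_implied = Fraction(1, 12)   # what their reply of 25% implies they saw

combined = your_evidence * their_implied   # 4:12 = 1:3
print(odds_to_prob(combined))              # 1/4, i.e. 25% heads
```

The prior drops out because 1:1 is the multiplicative identity; only the two independent likelihood ratios matter.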

But this assumes that the other mind also thinks that you’re processing evidence correctly, so that, by the time it says “I now think this fair coin came up heads, p=.25”, it has already taken into account the full impact of all the evidence you know about, before adding more evidence of its own.

If, on the other hand, the other mind doesn’t trust your rationality, then it won’t accept your evidence at face value, and the estimate that it gives won’t integrate the full impact of the evidence you observed.

So does this mean that when two rationalists trust each other’s rationality less than completely, then they can agree to disagree?

It’s not that simple. Rationalists should not trust themselves entirely, either.

So when the other mind accepts your evidence at less than face value, this doesn’t say “You are less than a perfect rationalist”; it says, “I trust you less than you trust yourself; I think that you are discounting your own evidence too little.”

Maybe your raw arguments seemed to you to have a strength of 40:1, and you discounted for your own irrationality to a strength of 4:1; but the other mind thinks you still overestimate yourself, and so it assumes that the actual force of the argument was only 2:1.

And if you believe that the other mind is discounting you in this way, and is unjustified in doing so, then when it says “I now think this fair coin came up heads with 25% probability”, you might bet on the coin at odds of 40% in favor of heads—combining your own evidence of 4:1 with the implied evidence of 1:6 that the other mind must have seen (since 2:1 times 1:6 yields its stated odds of 1:3) to give final odds of 4:6—if you even fully trust the other mind’s further evidence of 1:6.
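Running this discounting scenario through the same odds arithmetic (a sketch only; it assumes you stand by your own 4:1 self-assessment while the other mind credits you with just 2:1):

```python
from fractions import Fraction

# The other mind credits your evidence at only 2:1, yet announces 25%
# heads (odds 1:3). Working backward, the independent evidence it must
# have seen is 1:3 divided by 2:1, i.e. 1:6.
their_discount_of_you = Fraction(2, 1)
their_announced_odds = Fraction(1, 3)
their_implied = their_announced_odds / their_discount_of_you   # 1:6

# Standing by your own 4:1 assessment, you combine 4:1 with 1:6.
your_evidence = Fraction(4, 1)
your_odds = your_evidence * their_implied                      # 4:6 = 2:3
print(your_odds / (1 + your_odds))   # 2/5, i.e. you'd bet 40% on heads
```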

I think we have to be very careful to avoid interpreting this situation in terms of anything like a reciprocal trade, like two sides making equal concessions in order to reach agreement on a business deal.

Shifting beliefs is not a concession that you make for the sake of others, expecting something in return; it is an advantage you take for your own benefit, to improve your own map of the world. I am, generally speaking, a Millie-style altruist; but when it comes to belief shifts I espouse a pure and principled selfishness: don’t believe you’re doing it for anyone’s sake but your own.

Still, I once read that there’s a principle among con artists that the main thing is to get the mark to believe that you trust them, so that they’ll feel obligated to trust you in turn.

And—even if it’s for completely different theoretical reasons—if you want to persuade a rationalist to shift belief to match yours, you either need to persuade them that you have all of the same evidence they do and have already taken it into account, or that you already fully trust their opinions as evidence, or that you know better than they do how much they themselves can be trusted.

It’s that last one that’s the really sticky point, for obvious reasons of asymmetry of introspective access and asymmetry of motives for overconfidence—how do you resolve that conflict? (And if you started arguing about it, then the question wouldn’t be which of these were more important as a factor, but rather, which of these factors the Other had under- or over-discounted in forming their estimate of a given person’s rationality...)

If I had to name a single reason why two wannabe rationalists wouldn’t actually be able to agree in practice, it would be that, once you trace the argument to the meta-level where theoretically everything can be and must be resolved, the argument trails off into psychoanalysis and noise.

And if you look at what goes on in practice between two arguing rationalists, it would probably mostly be trading object-level arguments; and the most meta it would get is trying to convince the other person that you’ve already taken their object-level arguments into account.

Still, this does leave us with three clear reasons that someone might point to, to justify a persistent disagreement—even though the frame of mind of justification and having clear reasons to point to in front of others, is itself antithetical to the spirit of resolving disagreements—but even so:

• Clearly, the Other’s object-level arguments are flawed; no amount of trust that I can have for another person will make me believe that rocks fall upward.

• Clearly, the Other is not taking my arguments into account; there’s an obvious asymmetry in how well I understand them and have integrated their evidence, versus how much they understand me and have integrated mine.

• Clearly, the Other is completely biased in how much they trust themselves over others, versus how I humbly and evenhandedly discount my own beliefs alongside theirs.

Since we don’t want to go around encouraging disagreement, one might do well to ponder how all three of these arguments are used by creationists to justify their persistent disagreements with scientists.

That’s one reason I say “clearly”—if it isn’t obvious even to outside onlookers, maybe you shouldn’t be confident of resolving the disagreement there. Failure at any of these levels implies failure at the meta-levels above it, but the higher-order failures might not be clear.

• Perhaps there does exist a route towards resolving this situation.

Suppose Eliezer has a coin for one week, during which he flips it from time to time. He doesn’t write down the results, record how many times he flips it, or even keep a running mental tally. Instead, at the end of the week, relying purely upon his direct memory of particular flips he can remember, he makes an estimate: “Hmm, I think I can remember about 20 of those flips fairly accurately and, of those 20 flips, I have 90% confidence that 15 of them came up heads.”

The coin is then passed to Robin, who does the same exercise the following week. At the end of that week, Robin thinks to himself, “I think I can remember doing about 40 flips, and I have 80% confidence that 10 of them came up heads.”

They then meet up and have the following conversation:

• Eliezer: 75% chance of a head.

• Robin: 25% chance of a head—not taking your data into account yet, just mine.

• Eliezer: OK, so the first level of complexity is that we could just average those to get 50%. But can we improve upon that?

• Robin: My sample size was 40.

• Eliezer: My sample size was 20. So, second level of complexity, we could add them together to get 25 heads out of 60 flips, giving a 42% chance of a head.

• Robin: Third level of complexity: how confident are you about your numbers? I’m 80% confident of mine.

• Eliezer: I’m 90% confident of mine. So using that as a weighting would give us (0.9×15+0.8×10)/(0.9×20+0.8×40), which is 21.5 out of 50, or a 43% chance of a head.

• Robin: But Eliezer, you always overestimate how confident you are about your memory, whereas I’m conservative. I don’t think your memory is any better than mine. I think 42% is the right answer.

• Eliezer: OK, let’s go to level 4. Can we find some objective evidence? Did you do any of your flips in the presence of a third party? I can remember 5 incidents where someone else saw the flip I did. We could take a random sampling of my shared flips and then go ask the relevant third parties for confirmation, then do the same for a random sample of your shared flips, and see if your theory about our memories is borne out.

In the end, as long as you can trace back at least some (a random sampling) of the facts people are basing their estimates upon to things that can be checked against reality, you should have some basis to move forwards.
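The three levels of pooling in the dialogue above can be sketched directly (the function names are my own):

```python
# Three ways of combining Eliezer's report (15 heads / 20 flips, 90%
# confident) with Robin's (10 heads / 40 flips, 80% confident).

def naive_average(p1, p2):
    """Level 1: just average the two stated probabilities."""
    return (p1 + p2) / 2

def pooled(h1, n1, h2, n2):
    """Level 2: pool the raw counts, weighting by sample size."""
    return (h1 + h2) / (n1 + n2)

def confidence_weighted(h1, n1, c1, h2, n2, c2):
    """Level 3: additionally weight each sample by stated confidence."""
    return (c1 * h1 + c2 * h2) / (c1 * n1 + c2 * n2)

print(naive_average(0.75, 0.25))                      # 0.5
print(pooled(15, 20, 10, 40))                         # 25/60, the "42%"
print(confidence_weighted(15, 20, 0.9, 10, 40, 0.8))  # 21.5/50 = 0.43
```

Level 4, checking a random sample of flips against third-party memories, is the step that grounds the confidence weights themselves in something verifiable.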

• If this looks like a reciprocal trade, you’re doing it wrong. Done right, the change in your belief after finding out how much the other person’s belief changed would average out to zero. They might change their belief more than you expected, leading you to realize that they’re less sure of themselves than you thought.

• Two ideal Bayesians cannot have common knowledge of disagreement; this is a theorem.

To quote from “Agreeing to Disagree”, by Robert J. Aumann:

If two people have the same priors, and their posteriors for a given event A are common knowledge, then these posteriors must be equal. This is so even though they may base their posteriors on quite different information. In brief, people with the same priors cannot agree to disagree. [...]
The key notion is that of ‘common knowledge.’ Call the two people 1 and 2. When we say that an event is “common knowledge,” we mean more than just that both 1 and 2 know it; we require also that 1 knows that 2 knows it, 2 knows that 1 knows it, 1 knows that 2 knows that 1 knows it, and so on. For example, if 1 and 2 are both present when the event happens and see each other there, then the event becomes common knowledge. In our case, if 1 and 2 tell each other their posteriors and trust each other, then the posteriors are common knowledge. The result is not true if we merely assume that the persons know each other’s posteriors.

So: the “two ideal Bayesians” also need to have “the same priors”—and the term “common knowledge” is being used in an esoteric technical sense. The implication is that both participants need to be motivated to create a pool of shared knowledge. That effectively means they need to want to believe the truth, and to purvey the truth to others. If they have other goals, “common knowledge” is much less likely to be reached. We know from evolutionary biology that such goals are not the top priority for most organisms. Organisms of the same species often have conflicting goals—in that each wants to propagate their own genes, at the expense of those of their competitors—and in the case of conflicting goals, the situation is particularly bad.

So: both parties being Bayesians is not enough to invoke Aumann’s result. The parties also need common priors and a special type of motivation which it is reasonable to expect to be rare.

• The Aumann results require that the two of you are honest, truth-seeking Bayesian wannabes, to first approximation, and that you see each other that way.

You also have to have the time, motivation, and energy to share your knowledge. If some brat comes up to you and tells you that he’s a card-carrying Bayesian initiate—and that p(rise(tomorrow,sun)) < 0.0000000001—and challenges you to prove him wrong, you would probably just think he had acquired an odd prior somehow—and ignore him.

• Um… since we’re on the subject of disagreement mechanics, is there any way for Robin or Eliezer to concede points/arguments/details without losing status? If that could be solved somehow, then I suspect the discussion would be much more productive.

• “How about if it were an issue that you were not too heavily invested in [...]”

Hal, the sort of thing you suggest has already been tried a few times over at Black Belt Bayesian; check it out.

• Interesting essay—this is my favorite topic right now. I am very happy to see that you clearly say, “Shifting beliefs is not a concession that you make for the sake of others, expecting something in return; it is an advantage you take for your own benefit, to improve your own map of the world.” That is the key idea here. However, I am not so happy about some other comments:

“if you want to persuade a rationalist to shift belief to match yours”

You should never want this, not if you are a truth-seeker! I hope you mean this to be a desire of con artists and other criminals. Persuasion is evil; it is in direct opposition to the goal of overcoming bias and reaching the truth. Do you agree?

“the frame of mind of justification and having clear reasons to point to in front of others, is itself antithetical to the spirit of resolving disagreements”

Such an attitude is not merely opposed to the spirit of resolving disagreements; it is an overwhelming obstacle to your own truth-seeking. You must seek out and overcome this frame of mind at all costs. Agreed?

And what do you think would happen if you were forced to resolve a disagreement without making any arguments, object-level or meta, but merely by taking turns reciting your quantitative estimates of likelihood? Do you think you could reach an agreement in that case, or would it be hopeless?

How about if it were an issue that you were not too heavily invested in—say, which of a couple of upcoming movies will have greater box office receipts? Suppose you and a rationalist-wannabe like Robin had a difference of opinion on this, and you merely recited your estimates. Remember, your only goal is to reach the truth (perhaps you will be rewarded if you guess right). Do you think you would reach agreement, or fail?

• You’ll find the whole thing pretty interesting, although it concerns decision theory more than the rationality of belief—though these are deeply connected (the connection is an interesting topic for speculation in itself). Here’s a brief summary of the book. I’m pretty partial to it.

Thinking about Acting: Logical Foundations for Rational Decision Making (Oxford University Press, 2006).

The objective of this book is to produce a theory of rational decision making for realistically resource-bounded agents. My interest is not in “What should I do if I were an ideal agent?”, but rather, “What should I do given that I am who I am, with all my actual cognitive limitations?”

The book has three parts. Part One addresses the question of where the values come from that agents use in rational decision making. The most common view among philosophers is that they are based on preferences, but I argue that this is computationally impossible. I propose an alternative theory somewhat reminiscent of Bentham, and explore how human beings actually arrive at values and how they use them in decision making.

Part Two investigates the knowledge of probability that is required for decision-theoretic reasoning. I argue that subjective probability makes no sense as applied to realistic agents. I sketch a theory of objective probability to put in its place. Then I use that to define a variety of causal probability and argue that this is the kind of probability presupposed by rational decision making. So what is to be defended is a variety of causal decision theory.

Part Three explores how these values and probabilities are to be used in decision making. In chapter eight, it is argued first that actions cannot be evaluated in terms of their expected values as ordinarily defined, because that does not take account of the fact that a cognizer may be unable to perform an action, and may even be unable to try to perform it. An alternative notion of “expected utility” is defined to be used in place of expected values. In chapter nine it is argued that individual actions cannot be the proper objects of decision-theoretic evaluation. We must instead choose plans, and select actions indirectly on the grounds that they are prescribed by the plans we adopt. However, our objective cannot be to find plans with maximal expected utilities. Plans cannot be meaningfully compared in that way. An alternative, called “locally global planning”, is proposed. According to locally global planning, individual plans are to be assessed in terms of their contribution to the cognizer’s “master plan”. Again, the objective cannot be to find master plans with maximal expected utilities, because there may be none, and even if there are, finding them is not a computationally feasible task for real agents. Instead, the objective must be to find good master plans, and improve them as better ones come along. It is argued that there are computationally feasible ways of doing this, based on defeasible reasoning about values and probabilities.

• Of course, if you knew that your disputant would only disagree with you when one of these three conditions clearly held, you would take their persistent disagreement as showing that one of these conditions held, and then back off and stop disagreeing. So to apply these conditions you need the additional implicit condition that they do not believe that you could only disagree under one of these conditions.

• Hal, it also requires that you see each other as seeing each other that way, that you see each other as seeing each other as seeing each other that way, that you see each other as seeing each other as seeing each other as seeing each other that way, and so on.

• Don’t you think it’s possible to consider someone irrational or non-truth-seeking enough to maintain disagreement on one issue, but still respect them on the whole?

If you regard persistent disagreement as disrespectful, and disrespecting someone as bad, this is likely to bias you towards agreeing.

• Let me break down these “justifications” a little:

“Clearly, the Other’s object-level arguments are flawed; no amount of trust that I can have for another person will make me believe that rocks fall upward.”
This points to the fact that the other is irrational. It is perfectly reasonable for two people to disagree when at least one of them is irrational. (It might be enough to argue that at least one of the two of you is irrational, since it is possible that your own reasoning apparatus is badly broken.)

“Clearly, the Other is not taking my arguments into account; there’s an obvious asymmetry in how well I understand them and have integrated their evidence, versus how much they understand me and have integrated mine.”
This would not actually explain the disagreement. Even an Other who refused to study your arguments (say, he didn’t have time), but who nevertheless maintains his position, should be evidence that he has good reason for his views. Otherwise, why would your own greater understanding of the arguments on both sides (not to mention your own persistence in your position) not persuade him? Assuming he is rational (and thinks you are, etc.), the only possible explanation is that he has good reasons—something you are not seeing. And that should persuade you to start changing your mind.

“Clearly, the Other is completely biased in how much they trust themselves over others, versus how I humbly and evenhandedly discount my own beliefs alongside theirs.”
Again, this is basically evidence that he is irrational, and reduces to case 1.

The Aumann results require that the two of you are honest, truth-seeking Bayesian wannabes, to first approximation, and that you see each other that way. The key idea is not whether the two of you can understand each other’s arguments, but that refusal to change position sends a very strong signal about the strength of the evidence.

If the two of you are wrapping things up by preparing to agree to disagree, you have to bite the bullet and say that the other is being irrational, or is lying, or is not truth-seeking. There is no respectful way to agree to disagree. You must either be extremely rude, or reach agreement.

• “If the two of you are wrapping things up by preparing to agree to disagree, you have to bite the bullet and say that the other is being irrational, or is lying, or is not truth-seeking. There is no respectful way to agree to disagree. You must either be extremely rude, or reach agreement.”

Hal, is this still your position, a year later? If so, I’d like to argue against it. Robin Hanson wrote in http://hanson.gmu.edu/disagree.pdf (page 9):

Since Bayesians with a common prior cannot agree to disagree, to what can we attribute persistent human disagreement? We can generalize the concept of a Bayesian to that of a Bayesian wannabe, who makes computational errors while attempting to be Bayesian. Agreements to disagree can then arise from pure differences in priors, or from pure differences in computation, but it is not clear how rational these disagreements are. Disagreements due to differing information seem more rational, but for Bayesians disagreements cannot arise due to differing information alone.

Robin argues in another paper that differences in priors really are irrational. I presume that he believes that differences in computation are also irrational, although I don’t know if he made a detailed case for it somewhere.

Suppose we grant that these differences are irrational. It seems to me that disagreements can still be “reasonable”, if we don’t know how to resolve these differences, even in principle. Because we are products of evolution, we probably have random differences in priors and computation, and since at this point we don’t seem to know how to resolve these differences, many disagreements may be both honest and reasonable. Therefore, there is no need to conclude that the other disagreer must be irrational (as an individual), or is lying, or is not truth-seeking.

Assuming that the above is correct, I think the role of a debate between two Bayesian wannabes should be to pinpoint the exact differences in priors and computation that caused the disagreement, not to reach immediate agreement. Once those differences are identified, we can try to find or invent new tools for resolving them, perhaps tools specific to the difference at hand.

• My Bayesian wannabe paper is an argument against disagreement based on computation differences. You can “resolve” a disagreement by moving your opinion in the direction of the other opinion. If failing to do this reduces your average accuracy, I feel I can call that failure “irrational”.

• It would be clearer if you said “epistemically irrational”. Instrumental rationality can be consistent with sticking to your guns—especially if your aim involves appearing to be exceptionally confident in your own views.

• “You can ‘resolve’ a disagreement by moving your opinion in the direction of the other opinion. If failing to do this reduces your average accuracy, I feel I can call that failure ‘irrational’.”

Do you have a suggestion for how much one should move one’s opinion in the direction of the other opinion, and an argument that doing so would improve average accuracy?

If you don’t have time for that, can you just explain what you mean by “average”? Average over what, using what distribution, and according to whose computation?

• How confident are you? How confident do you think your opponent is? Use those estimates to derive the distance you move.
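One simple reading of this reply (my own gloss, not anything the commenter specifies) is linear opinion pooling with each side’s confidence as its weight:

```python
# Move toward the other opinion in proportion to the two confidences:
# a confidence-weighted average of the two stated probabilities.

def pooled_opinion(p_mine, conf_mine, p_theirs, conf_theirs):
    total = conf_mine + conf_theirs
    return (conf_mine * p_mine + conf_theirs * p_theirs) / total

# With the coin-memory numbers from the earlier comment thread:
# 75% heads held at 0.9 confidence versus 25% heads held at 0.8.
print(pooled_opinion(0.75, 0.9, 0.25, 0.8))  # ~0.51, just past the midpoint
```

Under this reading, equal confidences reduce to simple averaging, and whoever is more confident pulls the pooled estimate toward their side.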

• PK: Unfortunately, no. Arguing isn’t about being informed. If they both actually ‘Overcame Bias’, we’d supposedly lose all respect for them. They have to trade that off against the fact that if they stick to stupid details in the face of overwhelming evidence, we also lose respect.

Of the ‘12 virtues’ Eliezer mentions, that ‘argument’ one is the least appealing. The quality of the independent posts around here is far higher than the argumentative ones. Still, it does quite clearly demonstrate the difficulties of Aumann’s ideas in practice.

• Shouldn’t your updating also depend on the relative number of trials (experience)?

Part of this disagreement seems to be about what kinds of evidence are relevant to the object-level predictions.

• Should be ‘a serious challenge’.

• There’s also an assumption that ideal rationality is coherent (and even rational) for bounded agents like ourselves. Probability theorist and epistemologist John Pollock has launched a serious challenge to this model of decision making in his recent 2006 book Thinking about Acting.

• Coming from a background in scientific instruments, I always find this kind of analysis a bit jarring, with its infinite regress involving the rational, self-interested actor at the core.

Of course two instruments will agree if they share the same nature, within the same environment, measuring the same object. You can map onto that a model of priors, likelihood function, and observed evidence if you wish. Translated to agreement between two agents, the only thing remaining is an effective model of the relationship of the observer to the observed.

• The crucial difference here is that the two “instruments” share the same nature, but they are “measuring” different objects—that is, the hypothetical rationalists do not have access to the same observed evidence about the world. But by virtue of “measuring”, among other things, one another, they are supposed to come into agreement.

• Ergh, yeah, I modified it away from 90% and 9:1 and it came out silly, I guess. See, now there’s a justified example of object-level disagreement—if not, perhaps, common knowledge of disagreement.

• Silly typo: I’m sure you meant 4:1, not 8:1.