The Modesty Argument

The Modesty Argument states that when two or more human beings have common knowledge that they disagree about a question of simple fact, they should each adjust their probability estimates in the direction of the others’. (For example, they might adopt the common mean of their probability distributions. If we use the logarithmic scoring rule, then the score of the average of a set of probability distributions is better than the average of the scores of the individual distributions, by Jensen’s inequality.)
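To make that Jensen’s-inequality point concrete, here is a minimal sketch in Python with made-up probability estimates for a proposition that happens to be true; since the logarithm is concave, the log score of the pooled estimate is at least the mean of the individual log scores:

```python
import math

# Two disputants' (hypothetical) probabilities for a proposition that is in fact true.
p1, p2 = 0.9, 0.3
pooled = (p1 + p2) / 2

score_of_pooled = math.log(pooled)                   # log score of the averaged estimate
mean_of_scores  = (math.log(p1) + math.log(p2)) / 2  # average of the individual log scores

# Jensen's inequality: log is concave, so pooling can't do worse than averaging the scores.
assert score_of_pooled >= mean_of_scores
print(round(score_of_pooled, 3), round(mean_of_scores, 3))   # -0.511 vs -0.655
```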

Put more simply: When you disagree with someone, even after talking over your reasons, the Modesty Argument claims that you should each adjust your probability estimates toward the other’s, and keep doing this until you agree. The Modesty Argument is inspired by Aumann’s Agreement Theorem, a very famous and oft-generalized result which shows that genuine Bayesians literally cannot agree to disagree; if genuine Bayesians have common knowledge of their individual probability estimates, they must all have the same probability estimate. (“Common knowledge” means that I know you disagree, you know I know you disagree, etc.)

I’ve always been suspicious of the Modesty Argument. It’s been a long-running debate between myself and Robin Hanson.

Robin seems to endorse the Modesty Argument in papers such as Are Disagreements Honest? I, on the other hand, have held that it can be rational for an individual to not adjust their own probability estimate in the direction of someone else who disagrees with them.

How can I maintain this position in the face of Aumann’s Agreement Theorem, which proves that genuine Bayesians cannot have common knowledge of a dispute about probability estimates? If genuine Bayesians will always agree with each other once they’ve exchanged probability estimates, shouldn’t we Bayesian wannabes do the same?

To explain my reply, I begin with a metaphor: If I have five different accurate maps of a city, they will all be consistent with each other. Some philosophers, inspired by this, have held that “rationality” consists of having beliefs that are consistent among themselves. But, although accuracy necessarily implies consistency, consistency does not necessarily imply accuracy. If I sit in my living room with the curtains drawn, and make up five maps that are consistent with each other, but I don’t actually walk around the city and make lines on paper that correspond to what I see, then my maps will be consistent but not accurate. When genuine Bayesians agree in their probability estimates, it’s not because they’re trying to be consistent—Aumann’s Agreement Theorem doesn’t invoke any explicit drive on the Bayesians’ part to be consistent. That’s what makes AAT surprising! Bayesians only try to be accurate; in the course of seeking to be accurate, they end up consistent. The Modesty Argument, that we can end up accurate in the course of seeking to be consistent, does not necessarily follow.

How can I maintain my position in the face of my admission that disputants will always improve their average score if they average together their individual probability distributions?

Suppose a creationist comes to me and offers: “You believe that natural selection is true, and I believe that it is false. Let us both agree to assign 50% probability to the proposition.” And suppose that by drugs or hypnosis it was actually possible for both of us to contract to adjust our probability estimates in this way. This unquestionably improves our combined log-score, and our combined squared error. If, as a matter of altruism, I value the creationist’s accuracy as much as my own—if my loss function is symmetrical around the two of us—then I should agree. But what if I’m trying to maximize only my own individual accuracy? In the former case, the question is absolutely clear, and in the latter case it is not absolutely clear, to me at least, which opens up the possibility that they are different questions.
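A small sketch with invented numbers shows how the two questions can come apart: averaging with the creationist improves the combined log score, but it worsens the score of the party whose original estimate was closer to the truth.

```python
import math

# Hypothetical probability estimates that natural selection is true (which, here, it is).
p_me, p_creationist = 0.999, 0.01
pooled = (p_me + p_creationist) / 2          # both parties adopt the average, ~0.50

log_score = math.log                         # log score when the proposition is true

before_group = log_score(p_me) + log_score(p_creationist)   # about -4.61
after_group  = 2 * log_score(pooled)                         # about -1.37 (better)

before_me = log_score(p_me)                                  # about -0.001
after_me  = log_score(pooled)                                # about -0.68 (worse)

print(after_group > before_group)   # True: the combined score improves
print(after_me > before_me)         # False: my individual score gets worse
```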

If I agree to a contract with the creationist in which we both use drugs or hypnosis to adjust our probability estimates, because I know that the group estimate must be improved thereby, I regard that as pursuing the goal of social altruism. It doesn’t make creationism actually true, and it doesn’t mean that I think creationism is true when I agree to the contract. If I thought creationism was 50% probable, I wouldn’t need to sign a contract—I would have already updated my beliefs! It is tempting but false to regard adopting someone else’s beliefs as a favor to them, and rationality as a matter of fairness, of equal compromise. Therefore it is written: “Do not believe you do others a favor if you accept their arguments; the favor is to you.” Am I really doing myself a favor by agreeing with the creationist to take the average of our probability distributions?

I regard rationality in its purest form as an individual thing—not because rationalists have only selfish interests, but because of the form of the only admissible question: “Is it actually true?” Other considerations, such as the collective accuracy of a group that includes yourself, may be legitimate goals, and an important part of human existence—but they differ from that single pure question.

In Aumann’s Agreement Theorem, all the individual Bayesians are trying to be accurate as individuals. If their explicit goal were to maximize group accuracy, AAT would not be surprising. So the improvement of group score is not a knockdown argument as to what an individual should do if they are trying purely to maximize their own accuracy, and it is that last quest which I identify as rationality. It is written: “Every step of your reasoning must cut through to the correct answer in the same movement. More than anything, you must think of carrying your map through to reflecting the territory. If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.” From the standpoint of social altruism, someone may wish to be Modest, and enter a drug-or-hypnosis-enforced contract of Modesty, even if they fail to achieve a correct answer thereby.

The central argument for Modesty proposes something like a Rawlsian veil of ignorance—how can you know which of you is the honest truthseeker, and which the stubborn self-deceiver? The creationist believes that he is the sane one and you are the fool. Doesn’t this make the situation symmetric around the two of you? If you average your estimates together, one of you must gain, and one of you must lose, since the shifts are in opposite directions; but by Jensen’s inequality it is a positive-sum game. And since, by something like a Rawlsian veil of ignorance, you don’t know which of you is really the fool, you ought to take the gamble. This argues that the socially altruistic move is also always the individually rational move.

And there’s also the obvious reply: “But I know perfectly well who the fool is. It’s the other guy. It doesn’t matter that he says the same thing—he’s still the fool.”

This reply sounds bald and unconvincing when you consider it abstractly. But if you actually face a creationist, then it certainly feels like the correct answer—you’re right, he’s wrong, and you have valid evidence to know that, even if the creationist can recite exactly the same claim in front of a TV audience.

Robin Hanson sides with symmetry—this is clearest in his paper Uncommon Priors Require Origin Disputes—and therefore endorses the Modesty Argument. (Though I haven’t seen him analyze the particular case of the creationist.)

I respond: Those who dream do not know they dream; but when you wake you know you are awake. Dreaming, you may think you are awake. You may even be convinced of it. But right now, when you really are awake, there isn’t any doubt in your mind—nor should there be. If you, persuaded by the clever argument, decided to start doubting right now that you’re really awake, then your Bayesian score would go down and you’d become that much less accurate. If you seriously tried to make yourself doubt that you were awake—in the sense of wondering if you might be in the midst of an ordinary human REM cycle—then you would probably do so because you wished to appear to yourself as rational, or because it was how you conceived of “rationality” as a matter of moral duty. Because you wanted to act with propriety. Not because you felt genuinely curious as to whether you were awake or asleep. Not because you felt you might really and truly be asleep. But because you didn’t have an answer to the clever argument, just an (ahem) incommunicable insight that you were awake.

Russell Wallace put it thusly: “That we can postulate a mind of sufficiently low (dreaming) or distorted (insane) consciousness as to genuinely not know whether it’s Russell or Napoleon doesn’t mean I (the entity currently thinking these thoughts) could have been Napoleon, any more than the number 3 could have been the number 7. If you doubt this, consider the extreme case: a rock doesn’t know whether it’s me or a rock. That doesn’t mean I could have been a rock.”

There are other problems I see with the Modesty Argument, pragmatic matters of human rationality—if a fallible human tries to follow the Modesty Argument in practice, does this improve or disimprove personal rationality? To me it seems that the adherents of the Modesty Argument tend to profess Modesty but not actually practice it.

For example, let’s say you’re a scientist with a controversial belief—like the Modesty Argument itself, which is hardly a matter of common accord—and you spend some substantial amount of time and effort trying to prove, argue, examine, and generally forward this belief. Then one day you encounter the Modesty Argument, and it occurs to you that you should adjust your belief toward the modal belief of the scientific field. But then you’d have to give up your cherished hypothesis. So you do the obvious thing—I’ve seen at least two people do this on two different occasions—and say: “Pursuing my personal hypothesis has a net expected utility to Science. Even if I don’t really believe that my theory is correct, I can still pursue it because of the categorical imperative: Science as a whole will be better off if scientists go on pursuing their own hypotheses.” And then they continue exactly as before.

I am skeptical, to say the least. Integrating the Modesty Argument as new evidence ought to produce a large effect on someone’s life and plans. If it’s really being integrated, that is, rather than flushed down a black hole. Your personal anticipation of success, the bright emotion with which you anticipate the confirmation of your theory, should diminish by literally orders of magnitude after accepting the Modesty Argument. The reason people buy lottery tickets is that the bright anticipation of winning ten million dollars, the dancing visions of speedboats and mansions, is not sufficiently diminished—as a strength of emotion—by the probability factor, the odds of a hundred million to one. The ticket buyer may even profess that the odds are a hundred million to one, but they don’t anticipate it properly—they haven’t integrated the mere verbal phrase “hundred million to one” on an emotional level.

So, when a scientist integrates the Modesty Argument as new evidence, should the resulting nearly total loss of hope have no effect on real-world plans originally formed in blessed ignorance and joyous anticipation of triumph? Especially when you consider that the scientist knew about the social utility to start with, while making the original plans? I think that’s around as plausible as maintaining your exact original investment profile after the expected returns on some stocks change by a factor of a hundred. What’s actually happening, one naturally suspects, is that the scientist finds that the Modesty Argument has uncomfortable implications; so they reach for an excuse, and invent the argument from social utility on the fly, as a way of exactly cancelling out the Modesty Argument and preserving all their original plans.

But of course if I say that this is an argument against the Modesty Argument, that is pure ad hominem tu quoque. If its adherents fail to use the Modesty Argument properly, that does not imply it has any less force as logic.

Rather than go into more detail on the manifold ramifications of the Modesty Argument, I’m going to close with the thought experiment that initially convinced me of the falsity of the Modesty Argument. In the beginning it seemed to me reasonable that if feelings of 99% certainty were associated with a 70% frequency of true statements, on average across the global population, then the state of 99% certainty was like a “pointer” to 70% probability. But at one point I thought: “What should an (AI) superintelligence say in the same situation? Should it treat its 99% probability estimates as 70% probability estimates because so many human beings make the same mistake?” In particular, it occurred to me that, on the day the first true superintelligence was born, it would be undeniably true that—across the whole of Earth’s history—the enormously vast majority of entities who had believed themselves superintelligent would be wrong. The majority of the referents of the pointer “I am a superintelligence” would be schizophrenics who believed they were God.

A superintelligence doesn’t just believe the bald statement that it is a superintelligence—it presumably possesses a very detailed, very accurate self-model of its own cognitive systems, tracks in detail its own calibration, and so on. But if you tell this to a mental patient, the mental patient can immediately respond: “Ah, but I too possess a very detailed, very accurate self-model!” The mental patient may even come to sincerely believe this, in the moment of the reply. Does that mean the superintelligence should wonder if it is a mental patient? This is the opposite extreme of Russell Wallace asking if a rock could have been you, since it doesn’t know if it’s you or the rock.

One obvious reply is that human beings and superintelligences occupy different classes—we do not have the same ur-priors, or we are not part of the same anthropic reference class; some sharp distinction renders it impossible to group together superintelligences and schizophrenics in probability arguments. But one would then like to know exactly what this “sharp distinction” is, and how it is justified relative to the Modesty Argument. Can an evolutionist and a creationist also occupy different reference classes? It sounds astoundingly arrogant; but when I consider the actual, pragmatic situation, it seems to me that this is genuinely the case.

Or here’s a more recent example—one that inspired me to write today’s blog post, in fact. It’s the true story of a customer struggling through five levels of Verizon customer support, all the way up to floor manager, in an ultimately futile quest to find someone who could understand the difference between .002 dollars per kilobyte and .002 cents per kilobyte. Audio [27 minutes], Transcript. It has to be heard to be believed. Sample of conversation: “Do you recognize that there’s a difference between point zero zero two dollars and point zero zero two cents?” “No.”
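The dispute is a plain factor-of-100 arithmetic error; a short sketch, with an illustrative usage figure of my own choosing, shows how far apart the two readings of the quoted rate are:

```python
kilobytes = 35_893                      # illustrative data usage, for the arithmetic only

rate_quoted_in_dollars = 0.002          # ".002 dollars per kilobyte" = $0.002/KB
rate_quoted_in_cents   = 0.002 / 100    # ".002 cents per kilobyte"  = $0.00002/KB

print(f"${kilobytes * rate_quoted_in_dollars:.2f}")  # $71.79 -- what the bill assumes
print(f"${kilobytes * rate_quoted_in_cents:.2f}")    # $0.72  -- what the quoted rate implies
```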

The key phrase that caught my attention and inspired me to write today’s blog post is from the floor manager: “You already talked to a few different people here, and they’ve all explained to you that you’re being billed .002 cents, and if you take it and put it on your calculator… we take the .002 as everybody has told you that you’ve called in and spoken to, and as our system bills accordingly, is correct.”

Should George—the customer—have started doubting his arithmetic, because five levels of Verizon customer support, some of whom cited multiple years of experience, told him he was wrong? Should he have adjusted his probability estimate in their direction? A straightforward extension of Aumann’s Agreement Theorem to impossible possible worlds, that is, uncertainty about the results of computations, proves that, had all parties been genuine Bayesians with common knowledge of each other’s estimates, they would have had the same estimate. Jensen’s inequality proves even more straightforwardly that, if George and the five levels of tech support had averaged together their probability estimates, they would have improved their average log score. If such arguments fail in this case, why do they succeed in other cases? And if you claim the Modesty Argument carries in this case, are you really telling me that if George had wanted only to find the truth for himself, he would have been wise to adjust his estimate in Verizon’s direction? I know this is an argument from personal incredulity, but I think it’s a good one.

On the whole, and in practice, it seems to me like Modesty is sometimes a good idea, and sometimes not. I exercise my individual discretion and judgment to decide, even knowing that I might be biased or self-favoring in doing so, because the alternative of being Modest in every case seems to me much worse.

But the question also seems to have a definite anthropic flavor. Anthropic probabilities still confuse me; I’ve read arguments but I have been unable to resolve them to my own satisfaction. Therefore, I confess, I am not able to give a full account of how the Modesty Argument is resolved.

Modest, aren’t I?