Principles of Disagreement

Followup to: The Rhythm of Disagreement

At the age of 15, a year before I knew what a “Singularity” was, I had learned about evolutionary psychology. Even from that beginning, it was apparent to me that people talked about “disagreement” as a matter of tribal status, processing it with the part of their brain that assessed people’s standing in the tribe. The peculiar indignation of “How dare you disagree with Einstein?” has its origins here: Even if the disagreer is wrong, we wouldn’t apply the same emotions to an ordinary math error like “How dare you write a formula that makes e equal to 1.718?”

At the age of 15, being a Traditional Rationalist, and never having heard of Aumann or Bayes, I thought the obvious answer was, “Entirely disregard people’s authority and pay attention to the arguments. Only arguments count.”

Ha ha! How naive.

I can’t say that this principle never served my younger self wrong.

I can’t even say that the principle gets you as close as possible to the truth.

I doubt I ever really clung to that principle in practice. In real life, I judged my authorities with care then, just as I do now...

But my efforts to follow that principle made me stronger. They focused my attention upon arguments; believing in authority does not make you stronger. The principle gave me freedom to find a better way, which I eventually did, though I wandered at first.

Yet both of these benefits were pragmatic and long-term, not immediate and epistemic. And you cannot say, “I will disagree today, even though I’m probably wrong, because it will help me find the truth later.” Then you are trying to doublethink. If you know today that you are probably wrong, you must abandon the belief today. Period. No cleverness. Always use your truth-finding skills at their full immediate strength, or you have abandoned something more important than any other benefit you will be offered; you have abandoned the truth.

So today, I sometimes accept things on authority, because my best guess is that they are really truly true in real life, and no other criterion gets a vote.

But always in the back of my mind is that childhood principle, directing my attention to the arguments as well, reminding me that you gain no strength from authority; that you may not even know anything, just be repeating it back.

Earlier I described how I disagreed with a math book and looked for proof, disagreed humbly with Judea Pearl and was proven (half) right, disagreed immodestly with Sebastian Thrun and was proven wrong, had a couple of quick exchanges with Steve Omohundro in which modesty-reasoning would just have slowed us down, respectfully disagreed with Daniel Dennett and disrespectfully disagreed with Steven Pinker, disagreed with Robert Aumann without a second thought, disagreed with Nick Bostrom with second thoughts...

What kind of rule am I using, that covers all these cases?

Er… “try to get the actual issue really right”? I mean, there are other rules, but that’s the important one. It’s why I disagree with Aumann about Orthodox Judaism, and blindly accept Judea Pearl’s word about the revised version of his analysis. Any argument that says I should take Aumann seriously is wasting my time; any argument that says I should disagree with Pearl is wasting my truth.

There are all sorts of general reasons not to argue with physicists about physics, but the rules are all there to help you get the issue right, so in the case of Many-Worlds you have to ignore them.

Yes, I know that’s not helpful as a general principle. But dammit, wavefunctions don’t collapse! It’s a massively stupid idea that sticks around due to sheer historical contingency! I’m more confident of that than of any principle I would dare to generalize about disagreement.

Notions of “disagreement” are psychology-dependent pragmatic philosophy. Physics and Occam’s razor are much simpler. Object-level stuff is often much clearer than meta-level stuff, even though this itself is a meta-level principle.

In theory, you have to make a prior decision whether to trust your own assessment of how obvious it is that wavefunctions don’t collapse, before you can assess whether wavefunctions don’t collapse. In practice, it’s much more obvious that wavefunctions don’t collapse, than that I should trust my disagreement. Much more obvious. So I just go with that.

I trust any given level of meta as far as I can throw it, but no further.

There’s a rhythm to disagreement. And oversimplified rules about when to disagree can distract from that rhythm. Even “Follow arguments, not people” can distract from the rhythm, because no one, including my past self, really uses that rule in practice.

The way it works in real life is that I just do the standard first-order disagreement analysis: Okay, in real life, how likely is it that this person knows stuff that I don’t?

Not, Okay, how much of the stuff that I know that they don’t, have they already taken into account in a revised estimate, given that they know I disagree with them, and have formed guesses about what I might know that they don’t, based on their assessment of my and their relative rationality...

Why don’t I try the higher-order analyses? Because I’ve never seen a case where, even in retrospect, it seems like I could have gotten real-life mileage out of it. Too complicated, too much of a tendency to collapse to tribal status, too distracting from the object-level arguments.

I have previously observed that those who genuinely reach upward as rationalists have usually been broken of their core trust in the sanity of the people around them. In this world, we have to figure out who to trust, and who we have reasons to trust, and who might be right even when we believe they’re wrong. But I’m kinda skeptical that we can—in this world of mostly crazy people and a few slightly-more-sane people who’ve spent their whole lives surrounded by crazy people who claim they’re saner than average—get real-world mileage out of complicated reasoning that involves sane people assessing each other’s meta-sanity. We’ve been broken of that trust, you see.

Does Robin Hanson really trust, deep down, that I trust him enough, that I would not dare to disagree with him, unless he were really wrong? I can’t trust that he does… so I don’t trust him so much… so he shouldn’t trust that I wouldn’t dare disagree...

It would be an interesting experiment: but I cannot literally commit to walking into a room with Robin Hanson and not walking out until we have the same opinion about the Singularity. So that if I give him all my reasons and hear all his reasons, and Hanson tells me, “I still think you’re wrong,” I must then agree (or disagree in a net direction Robin can’t predict). I trust Robin but I don’t trust him THAT MUCH. Even if I tried to promise, I couldn’t make myself believe it was really true—and that tells me I can’t make the promise.
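For readers who haven’t met it, the result looming behind this hypothetical commitment is Aumann’s agreement theorem (Aumann 1976, “Agreeing to Disagree”). A minimal statement, in notation of my own choosing:

    If two agents share a common prior $P$, and their posterior probabilities
    $q_1 = P(E \mid \mathcal{I}_1)$ and $q_2 = P(E \mid \mathcal{I}_2)$ for an
    event $E$ are common knowledge between them, then $q_1 = q_2$.

Common knowledge is the load-bearing hypothesis: it is precisely the kind of mutual trust whose regress the last two paragraphs describe, which is why the theorem is so hard to apply between real humans.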

When I think about who I would be willing to try this with, the name that comes to mind is Michael Vassar—which surprised me, and I asked my mind why. The answer that came back was, “Because Michael Vassar knows viscerally what’s at stake if he makes you update the wrong way; he wouldn’t use the power lightly.” I’m not going anywhere in particular with this; but it points in an interesting direction—that a primary reason I don’t always update when people disagree with me is that I don’t think they’re taking that disagreement with the extraordinary gravity that would be required, on both sides, for two people to trust each other in an Aumann cage match.

Yesterday, Robin asked me why I disagree with Roger Schank about whether AI will be general in the foreseeable future.

Well, first, be it said that I am no hypocrite; I have been explicitly defending immodesty against modesty since long before this blog began.

Roger Schank is a famous old AI researcher who I learned about as the pioneer of yet another false idol, “scripts”. He used suggestively named LISP tokens, and I’d never heard it said of him that he had seen the light of Bayes.

So I noted that the warriors of old are often more formidable intellectually than those who venture into the Dungeon of General AI today, but their arms and armor are obsolete. And I pointed out that Schank’s prediction with its stated reasons seemed more like an emotional reaction to discouragement than a painstakingly crafted general model of the future of AI research that had happened to yield a firm prediction in this case.

Ah, said Robin, so it is good for the young to disagree with the old.

No, but if the old guy is Roger Schank, and the young guy is me, and we are disagreeing about Artificial General Intelligence, then sure.

If the old guy is, I don’t know, Murray Gell-Mann, and we’re disagreeing about, like, particle masses or something, I’d have to ask what I was even doing in that conversation.

If the old fogey is Murray Gell-Mann and the young upstart is Scott Aaronson, I’d probably stare at them helplessly like a deer caught in the headlights. I’ve listed out the pros and cons here, and they balance as far as I can tell:

  • Murray Gell-Mann won a Nobel Prize back in the eighteenth century for work he did when he was four hundred years younger, or something like that.

  • Scott Aaronson has more recent training.

  • ...but physics may not have changed all that much since Gell-Mann’s reign of applicability, sad to say.

  • Aaronson still has most of his neurons left.

  • I know Aaronson is smart, but Gell-Mann doesn’t have any idea who Aaronson is. Aaronson knows Gell-Mann is a Nobel Laureate and wouldn’t disagree lightly.

  • Gell-Mann is a strong proponent of many-worlds and Aaronson is not, which is one of the acid tests of a physicist’s ability to choose correctly amid controversy.

It is traditional—not Bayesian, not even remotely realistic, but traditional—that when some uppity young scientist is pushing their chosen field as far as they possibly can, going past the frontier, they have a right to eat any old scientists they come across, for nutrition.

I think there’s more than a grain of truth in that ideal. It’s not completely true. It’s certainly not upheld in practice. But it’s not wrong, either.

It’s not that the young have a generic right to disagree with the old, but yes, when the young are pushing the frontiers they often end up leaving the old behind. Everyone knows that, and what’s more, I think it’s true.

If someday I get eaten, great.

I still agree with my fifteen-year-old self about some things: The tribal-status part of our minds, which asks, “How dare you disagree?”, is just a hindrance. The real issues of rational disagreement have nothing to do with that part of us; it exists for other reasons and works by other rhythms. “How dare you disagree with Roger Schank?” ends up as a no-win question if you try to approach it on the meta-level and think in terms of generic trustworthiness: it forces you to argue that you yourself are generically above Schank and of higher tribal status; or alternatively, accept conclusions that do not seem, er, carefully reasoned. In such a case there is a great deal to be said for simply focusing on the object-level arguments.

But if there are no simple rules that forbid disagreement, can’t people always make up whatever excuse for disagreement they like, so they can cling to precious beliefs?

Look… it’s never hard to shoot off your own foot, in this art of rationality. And the more of the art you learn, the more potential excuses you have. If you insist on disagreeing with Gell-Mann about physics, BLAM it goes. There is no set of rules you can follow to be safe. You will always have the opportunity to shoot your own foot off.

I want to push my era further than the previous ones: create an advanced art of rationality, to advise people who are trying to reach as high as they can in real life. They will sometimes have to disagree with others. If they are pushing the frontiers of their science they may have to disagree with their elders. They will have to develop the skill—learning from practice—of when to disagree and when not to. “Don’t” is the wrong answer.

If others take that as a welcome excuse to shoot their own feet off, that doesn’t change what’s really the truly true truth.

I once gave a talk on rationality at Peter Thiel’s Clarium Capital. I did not want anything bad to happen to Clarium Capital. So I ended my talk by saying, “And above all, if any of these reasonable-sounding principles turn out not to work, don’t use them.”

In retrospect, I could have given a different caution: “And be careful to follow these principles consistently, instead of making special exceptions when it seems tempting.” But it would not be a good thing for the Singularity Institute if anything bad happened to Clarium Capital.

That’s as close as I’ve ever come to betting on my high-minded advice about rationality in a prediction market—putting my skin in a game with near-term financial consequences. I considered just staying home—Clarium was trading successfully; did I want to disturb their rhythm with Centipede’s Dilemmas? But because past success is no guarantee of future success in finance, I went, and offered what help I could give, emphasizing above all the problem of motivated skepticism—when I had skin in the game. Yet at the end I said: “Don’t trust principles until you see them working,” not “Be wary of the temptation to make exceptions.”

I conclude with one last tale of disagreement:

Nick Bostrom and I once took a taxi and split the fare. When we counted the money we’d assembled to pay the driver, we found an extra twenty there.

“I’m pretty sure this twenty isn’t mine,” said Nick.

“I’d have been sure that it wasn’t mine either,” I said.

“You just take it,” said Nick.

“No, you just take it,” I said.

We looked at each other, and we knew what we had to do.

“To the best of your ability to say at this point, what would have been your initial probability that the bill was yours?” I said.

“Fifteen percent,” said Nick.

“I would have said twenty percent,” I said.

So we split it $8.57 / $11.43, and went happily on our way, guilt-free.
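For concreteness: those numbers come from splitting the twenty in proportion to our stated probabilities, 0.15/(0.15+0.20) and 0.20/(0.15+0.20) of $20. A minimal sketch of that rule in Python (the function name and the two-decimal rounding are my own framing, not anything we actually computed aloud):

    # Split a windfall in proportion to each party's stated prior
    # probability that the money was theirs (normalized to sum to 1).
    def split_by_stated_probability(amount, p_a, p_b):
        total = p_a + p_b
        return (round(amount * p_a / total, 2),
                round(amount * p_b / total, 2))

    # Nick said 15%; I said 20%.
    print(split_by_stated_probability(20.00, 0.15, 0.20))  # -> (8.57, 11.43)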

I think that’s the only time I’ve ever seen an Aumann-inspired algorithm used in real-world practice.