Principles of Disagreement

Followup to: The Rhythm of Disagreement

At the age of 15, a year before I knew what a “Singularity” was, I had learned about evolutionary psychology. Even from that beginning, it was apparent to me that people talked about “disagreement” as a matter of tribal status, processing it with the part of their brain that assessed people’s standing in the tribe. The peculiar indignation of “How dare you disagree with Einstein?” has its origins here: Even if the disagreer is wrong, we wouldn’t apply the same emotions to an ordinary math error like “How dare you write a formula that makes e equal to 1.718?”

At the age of 15, being a Traditional Rationalist, and never having heard of Aumann or Bayes, I thought the obvious answer was, “Entirely disregard people’s authority and pay attention to the arguments. Only arguments count.”

Ha ha! How naive.

I can’t say that this principle never steered my younger self wrong.

I can’t even say that the principle gets you as close as possible to the truth.

I doubt I ever really clung to that principle in practice. In real life, I judged my authorities with care then, just as I do now...

But my efforts to follow that principle made me stronger. They focused my attention upon arguments; believing in authority does not make you stronger. The principle gave me freedom to find a better way, which I eventually did, though I wandered at first.

Yet both of these benefits were pragmatic and long-term, not immediate and epistemic. And you cannot say, “I will disagree today, even though I’m probably wrong, because it will help me find the truth later.” Then you are trying to doublethink. If you know today that you are probably wrong, you must abandon the belief today. Period. No cleverness. Always use your truth-finding skills at their full immediate strength, or you have abandoned something more important than any other benefit you will be offered; you have abandoned the truth.

So today, I sometimes accept things on authority, because my best guess is that they are really truly true in real life, and no other criterion gets a vote.

But always in the back of my mind is that childhood principle, directing my attention to the arguments as well, reminding me that you gain no strength from authority; that you may not even know anything, just be repeating it back.

Earlier I described how I disagreed with a math book and looked for proof, disagreed humbly with Judea Pearl and was proven (half) right, disagreed immodestly with Sebastian Thrun and was proven wrong, had a couple of quick exchanges with Steve Omohundro in which modesty-reasoning would just have slowed us down, respectfully disagreed with Daniel Dennett and disrespectfully disagreed with Steven Pinker, disagreed with Robert Aumann without a second thought, disagreed with Nick Bostrom with second thoughts...

What kind of rule am I using, that covers all these cases?

Er… “try to get the actual issue really right”? I mean, there are other rules but that’s the important one. It’s why I disagree with Aumann about Orthodox Judaism, and blindly accept Judea Pearl’s word about the revised version of his analysis. Any argument that says I should take Aumann seriously is wasting my time; any argument that says I should disagree with Pearl is wasting my truth.

There are all sorts of general reasons not to argue with physicists about physics, but the rules are all there to help you get the issue right, so in the case of Many-Worlds you have to ignore them.

Yes, I know that’s not helpful as a general principle. But dammit, wavefunctions don’t collapse! It’s a massively stupid idea that sticks around due to sheer historical contingency! I’m more confident of that than any principle I would dare to generalize about disagreement.

Notions of “disagreement” are psychology-dependent pragmatic philosophy. Physics and Occam’s razor are much simpler. Object-level stuff is often much clearer than meta-level stuff, even though this itself is a meta-level principle.

In theory, you have to make a prior decision whether to trust your own assessment of how obvious it is that wavefunctions don’t collapse, before you can assess whether wavefunctions don’t collapse. In practice, it’s much more obvious that wavefunctions don’t collapse, than that I should trust my disagreement. Much more obvious. So I just go with that.

I trust any given level of meta as far as I can throw it, but no further.

There’s a rhythm to disagreement. And oversimplified rules about when to disagree can distract from that rhythm. Even “Follow arguments, not people” can distract from the rhythm, because no one, including my past self, really uses that rule in practice.

The way it works in real life is that I just do the standard first-order disagreement analysis: Okay, in real life, how likely is it that this person knows stuff that I don’t?

Not, Okay, how much of the stuff that I know that they don’t, have they already taken into account in a revised estimate, given that they know I disagree with them, and have formed guesses about what I might know that they don’t, based on their assessment of my and their relative rationality...

Why don’t I try the higher-order analyses? Because I’ve never seen a case where, even in retrospect, it seems like I could have gotten real-life mileage out of it. Too complicated, too much of a tendency to collapse to tribal status, too distracting from the object-level arguments.
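To make the contrast concrete, here is a toy sketch in Python (entirely my own construction, with made-up numbers; nothing from the exchanges above). It shows the one case where pooling beliefs is actually easy: a shared prior, plus private evidence that is independent given the hypothesis, in which case two agents who fully shared their information would just add their evidential log-odds. The higher-order analysis is what you would need when those idealizations fail and the evidence cannot simply be handed over, which is exactly where it stops being this simple:

    import math

    def log_odds(p):
        return math.log(p / (1 - p))

    def prob_from_log_odds(lo):
        return 1 / (1 + math.exp(-lo))

    shared_prior = 0.5    # idealization: both agents start from the same prior
    my_posterior = 0.9    # my belief after seeing my private evidence
    your_posterior = 0.3  # your belief after seeing your private evidence

    # In log-odds form, posterior = prior + evidence, so each agent's
    # private evidence can be recovered by subtraction:
    my_evidence = log_odds(my_posterior) - log_odds(shared_prior)
    your_evidence = log_odds(your_posterior) - log_odds(shared_prior)

    # Second idealization: if the two pieces of evidence are conditionally
    # independent, pooling them is just addition in log-odds space.
    pooled = prob_from_log_odds(log_odds(shared_prior) + my_evidence + your_evidence)
    print(round(pooled, 3))  # 0.794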

I have previously observed that those who genuinely reach upward as rationalists have usually been broken of their core trust in the sanity of the people around them. In this world, we have to figure out who to trust, and who we have reason to trust, and who might be right even when we believe they’re wrong. But I’m kinda skeptical that we can—in this world of mostly crazy people and a few slightly-more-sane people who’ve spent their whole lives surrounded by crazy people who claim they’re saner than average—get real-world mileage out of complicated reasoning that involves sane people assessing each other’s meta-sanity. We’ve been broken of that trust, you see.

Does Robin Hanson really trust, deep down, that I trust him enough, that I would not dare to disagree with him, unless he were really wrong? I can’t trust that he does… so I don’t trust him so much… so he shouldn’t trust that I wouldn’t dare disagree...

It would be an interesting experiment: but I cannot literally commit to walking into a room with Robin Hanson and not walking out until we have the same opinion about the Singularity. So that if I give him all my reasons and hear all his reasons, and Hanson tells me, “I still think you’re wrong,” I must then agree (or disagree in a net direction Robin can’t predict). I trust Robin but I don’t trust him THAT MUCH. Even if I tried to promise, I couldn’t make myself believe it was really true—and that tells me I can’t make the promise.

When I think about who I would be willing to try this with, the name that comes to mind is Michael Vassar—which surprised me, and I asked my mind why. The answer that came back was, “Because Michael Vassar knows viscerally what’s at stake if he makes you update the wrong way; he wouldn’t use the power lightly.” I’m not going anywhere in particular with this; but it points in an interesting direction—that a primary reason I don’t always update when people disagree with me, is that I don’t think they’re taking that disagreement with the extraordinary gravity that would be required, on both sides, for two people to trust each other in an Aumann cage match.

Yesterday, Robin asked me why I disagree with Roger Schank about whether AI will be general in the foreseeable future.

Well, first, be it said that I am no hypocrite; I have been explicitly defending immodesty against modesty since long before this blog began.

Roger Schank is a famous old AI researcher who I learned about as the pioneer of yet another false idol, “scripts”. He used suggestively named LISP tokens, and I’d never heard it said of him that he had seen the light of Bayes.

So I noted that the warriors of old are often more formidable intellectually than those who venture into the Dungeon of General AI today, but their arms and armor are obsolete. And I pointed out that Schank’s prediction with its stated reasons seemed more like an emotional reaction to discouragement, than a painstakingly crafted general model of the future of AI research that had happened to yield a firm prediction in this case.

Ah, said Robin, so it is good for the young to disagree with the old.

No, but if the old guy is Roger Schank, and the young guy is me, and we are disagreeing about Artificial General Intelligence, then sure.

If the old guy is, I don’t know, Murray Gell-Mann, and we’re disagreeing about, like, particle masses or something, I’d have to ask what I was even doing in that conversation.

If the old fogey is Murray Gell-Mann and the young upstart is Scott Aaronson, I’d probably stare at them helplessly like a deer caught in the headlights. I’ve listed out the pros and cons here, and they balance as far as I can tell:

  • Murray Gell-Mann won a Nobel Prize back in the eighteenth century for work he did when he was four hundred years younger, or something like that.

  • Scott Aaronson has more recent training.

  • ...but physics may not have changed all that much since Gell-Mann’s reign of applicability, sad to say.

  • Aaronson still has most of his neurons left.

  • I know Aaronson is smart, but Gell-Mann doesn’t have any idea who Aaronson is. Aaronson knows Gell-Mann is a Nobel Laureate and wouldn’t disagree lightly.

  • Gell-Mann is a strong proponent of many-worlds and Aaronson is not, which is one of the acid tests of a physicist’s ability to choose correctly amid controversy.

It is traditional—not Bayesian, not even remotely realistic, but traditional—that when some uppity young scientist is pushing their chosen field as far as they possibly can, going past the frontier, they have a right to eat any old scientists they come across, for nutrition.

I think there’s more than a grain of truth in that ideal. It’s not completely true. It’s certainly not upheld in practice. But it’s not wrong, either.

It’s not that the young have a generic right to disagree with the old, but yes, when the young are pushing the frontiers they often end up leaving the old behind. Everyone knows that and what’s more, I think it’s true.

If someday I get eaten, great.

I still agree with my fifteen-year-old self about some things: The tribal-status part of our minds, the part that asks “How dare you disagree?”, is just a hindrance. The real issues of rational disagreement have nothing to do with that part of us; it exists for other reasons and works by other rhythms. “How dare you disagree with Roger Schank?” ends up as a no-win question if you try to approach it on the meta-level and think in terms of generic trustworthiness: it forces you either to argue that you yourself are generically above Schank and of higher tribal status, or to accept conclusions that do not seem, er, carefully reasoned. In such a case there is a great deal to be said for simply focusing on the object-level arguments.

But if there are no simple rules that forbid disagreement, can’t people always make up whatever excuse for disagreement they like, so they can cling to precious beliefs?

Look… it’s never hard to shoot off your own foot, in this art of rationality. And the more art you learn of rationality, the more potential excuses you have. If you insist on disagreeing with Gell-Mann about physics, BLAM it goes. There is no set of rules you can follow to be safe. You will always have the opportunity to shoot your own foot off.

I want to push my era further than the previous ones: create an advanced art of rationality, to advise people who are trying to reach as high as they can in real life. They will sometimes have to disagree with others. If they are pushing the frontiers of their science they may have to disagree with their elders. They will have to develop the skill—learning from practice—of when to disagree and when not to. “Don’t” is the wrong answer.

If others take that as a welcome excuse to shoot their own feet off, that doesn’t change what’s really the truly true truth.

I once gave a talk on rationality at Peter Thiel’s Clarium Capital. I did not want anything bad to happen to Clarium Capital. So I ended my talk by saying, “And above all, if any of these reasonable-sounding principles turn out not to work, don’t use them.”

In retrospect, I could have given a different caution: “And be careful to follow these principles consistently, instead of making special exceptions when it seems tempting.” But it would not be a good thing for the Singularity Institute if anything bad happened to Clarium Capital.

That’s as close as I’ve ever come to betting on my high-minded advice about rationality in a prediction market—putting my skin in a game with near-term financial consequences. I considered just staying home—Clarium was trading successfully; did I want to disturb their rhythm with Centipede’s Dilemmas? But because past success is no guarantee of future success in finance, I went, and offered what help I could give, emphasizing above all the problem of motivated skepticism—when I had skin in the game. Yet at the end I said: “Don’t trust principles until you see them working,” not “Be wary of the temptation to make exceptions.”

I conclude with one last tale of disagreement:

Nick Bostrom and I once took a taxi and split the fare. When we counted the money we’d assembled to pay the driver, we found an extra twenty there.

“I’m pretty sure this twenty isn’t mine,” said Nick.

“I’d have been sure that it wasn’t mine either,” I said.

“You just take it,” said Nick.

“No, you just take it,” I said.

We looked at each other, and we knew what we had to do.

“To the best of your ability to say at this point, what would have been your initial probability that the bill was yours?” I said.

“Fifteen percent,” said Nick.

“I would have said twenty percent,” I said.

So we split it $8.57 / $11.43, and went happily on our way, guilt-free.
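The arithmetic, for anyone who wants to check it: we divided the twenty in proportion to our stated probabilities, so Nick took 15/(15+20) of it and I took 20/(15+20). A minimal sketch in Python (the function and names are mine, purely for illustration):

    def split_found_money(amount, stated_priors):
        """Divide amount among claimants in proportion to their stated priors."""
        total = sum(stated_priors.values())
        return {name: round(amount * p / total, 2)
                for name, p in stated_priors.items()}

    print(split_found_money(20.00, {"Nick": 0.15, "Eliezer": 0.20}))
    # {'Nick': 8.57, 'Eliezer': 11.43}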

I think that’s the only time I’ve ever seen an Aumann-inspired algorithm used in real-world practice.