Conversation Halters

Related to: Logical Rudeness, Semantic Stopsigns

While working on my book, I found in passing that I’d developed a list of what I started out calling “stonewalls”, but have since decided to refer to as “conversation halters”. These tactics of argument are distinguished by their being attempts to cut off the flow of debate—which is rarely the wisest way to think, and should certainly rate an alarm bell.

Here’s my assembled list, on which I shall expand shortly:

  • Appeal to permanent unknowability;

  • Appeal to humility;

  • Appeal to egalitarianism;

  • Appeal to common guilt;

  • Appeal to inner privacy;

  • Appeal to personal freedom;

  • Appeal to arbitrariness;

  • Appeal to inescapable assumptions;

  • Appeal to unquestionable authority;

  • Appeal to absolute certainty.

Now all of these might seem like dodgy moves, some dodgier than others. But they become dodgier still when you take a step back, feel the flow of debate, observe the cognitive traffic signals, and view these as attempts to cut off the flow of further debate.

Hopefully, most of these are obvious, but to define terms:

Appeal to permanent unknowability—something along the lines of “Why did God allow smallpox? Well, no one can know the mind of God.” Or, “There’s no way to distinguish among interpretations of quantum mechanics, so we’ll never know.” Arguments like these can be refuted easily enough by anyone who knows the rules for reasoning under uncertainty and how they imply a correct probability estimate given a state of knowledge… but of course you’ll probably have to explain the rules to the other person, and the reason they appealed to unknowability is probably to cut off further discussion.
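To make “a correct probability estimate given a state of knowledge” concrete, here is a minimal sketch of a Bayesian update; the numbers are made-up assumptions, purely for illustration:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    joint_true = p_evidence_if_true * prior
    joint_false = p_evidence_if_false * (1 - prior)
    return joint_true / (joint_true + joint_false)

# Made-up numbers: a hypothesis we initially give 30% credence, and an
# observation we'd expect 80% of the time if the hypothesis is true but
# only 10% of the time if it is false.
print(posterior(0.30, 0.80, 0.10))  # ~0.774: a definite estimate, not "unknowable"
```

The state of knowledge (prior plus likelihoods) pins down the answer; “we can never know” is not an available output.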

Appeal to humility—much the same as above, but said with a different emphasis: “How can we know?”, where of course the speaker doesn’t much want to know, and so the real meaning is “How can you know?” Of course one may gather entangled evidence in most such cases, and Occam’s Razor or extrapolation from already-known facts takes care of the other cases. But you’re not likely to get a chance to explain it, because by continuing to speak, you are committing the sin of pride.

Appeal to egalitarianism—something along the lines of “No one’s opinion is better than anyone else’s.” Now if you keep talking you’re committing an offense against tribal equality.

Appeal to common guilt—“everyone is irrational now and then”, so if you keep talking, you’re claiming to be better than them. An implicit subspecies of appeal to egalitarianism.

Appeal to inner privacy—“you can’t possibly know how I feel!” It’s true that modern technology still encounters some slight difficulties in reading thoughts out of the brain, though work is underway as we speak. But it is rare that the exact details of how you feel are the key subject matter being disputed. Here the bony borders of the skull are being redeployed as a hard barrier to keep out further arguments.

Appeal to personal freedom—“I can define a word any way I want!” Now if you keep talking you’re infringing on their civil rights.

Appeal to arbitrariness—again, the notion that word definitions are arbitrary serves as a good example (in fact I was harvesting some of these appeals from that sequence). It’s not just that this is wrong, but that it serves to cut off further discourse. Generally, anything that people are motivated to argue about is not arbitrary. It is being controlled by invisible criteria of evaluation, it has connotations with consequences, and if that isn’t true either, the topic of discourse is probably not “arbitrary” but just “meaningless”. No map that corresponds to an external territory can be arbitrary.

Appeal to inescapable assumptions—closely related, the idea that you need some assumptions and therefore everyone is free to choose whatever assumptions they want. This again is almost never true. In the realm of physical reality, reality is one way or another and you don’t get to make it that way by choosing an opinion, and so some “assumptions” are right and others wrong. In the realm of math, once you choose enough axioms to specify the subject matter, the remaining theorems are matters of logical implication. What I want you to notice is not just that “appeal to inescapable assumptions” is a bad idea, but that it is supposed to halt further conversation.

Appeal to unquestionable authority—for example, defending a definition by appealing to the dictionary, which is supposed to be a final settlement of the argument. Of course it is very rare that whatever is really at stake is something that ought to turn out differently if a Merriam-Webster editor writes a different definition. Only in matters of the solidest, most replicable science do we have information so authoritative that there is no longer much point in considering other sources of evidence. And even then we shouldn’t expect to see strong winds of evidence blowing in an opposing direction—under the Bayesian definition of evidence, strong evidence is just that sort of evidence which you only ever expect to find on at most one side of a factual question. More usually, this argument runs something along the lines of “How dare you argue with the dictionary?” or “How dare you argue with Professor Picklepumper of Harvard University?”
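The “no opposing winds” point is the conservation of expected evidence: your current probability must equal the average of the posteriors you expect to end up with. A toy calculation, with made-up numbers, shows the constraint:

```python
# Conservation of expected evidence: P(H) = P(E)*P(H|E) + P(~E)*P(H|~E).
# Made-up numbers: suppose you claim a 30% chance of seeing an observation
# that would drop your confidence in H to 10%, while granting yourself full
# confidence on the other branch.
p_opposing = 0.30         # claimed chance of strong opposing evidence
p_h_if_opposing = 0.10    # confidence in H after seeing it
p_h_if_confirming = 1.00  # most generous possible confidence otherwise

max_prior = p_opposing * p_h_if_opposing + (1 - p_opposing) * p_h_if_confirming
print(max_prior)  # 0.73, so you cannot coherently be 99% sure of H right now
```

If you are genuinely entitled to high confidence, you can only assign a small probability to ever seeing strong evidence against it.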

Appeal to absolute certainty—if you did have some source of absolute certainty, it would do no harm to cut off debate at that point. Needless to say, this usually doesn’t happen.

And again: These appeals are all flawed in their separate ways, but what I want you to notice is the thing they have in common, the stonewall effect, the conversation-halting cognitive traffic signal.

The only time it would actually be appropriate to use such a traffic signal is when you have information so strong, or coverage so complete, that there really is no point in further debate. This condition is rarely if ever met. A truly definitive series of replicated experiments might settle an issue pending really surprising new experimental results, à la Newton’s laws of gravity versus Einstein’s general relativity. Or a gross prior improbability, combined with failure of the advocates to provide confirming evidence in the face of repeated opportunities to do so. Or you might simply run out of time.
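For the “gross prior improbability plus repeated failures to confirm” condition, a quick odds-form sketch with made-up numbers shows how fast the posterior collapses when the promised evidence keeps not showing up:

```python
# Made-up numbers: a claim with a 1% prior, whose advocates promise evidence
# that should appear 80% of the time if the claim is true, but would appear
# only 5% of the time anyway if it is false.
prior_odds = 0.01 / 0.99
p_confirm_if_true = 0.80
p_confirm_if_false = 0.05

odds = prior_odds
for _ in range(5):  # five opportunities to confirm, all of which fail
    # each failure multiplies the odds by P(no evidence | true) / P(no evidence | false)
    odds *= (1 - p_confirm_if_true) / (1 - p_confirm_if_false)

print(odds / (1 + odds))  # posterior probability: roughly 4e-6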

But then you should state the stoppage condition outright and plainly, not package it up in one of these appeals. By and large, these traffic signals are simply bad traffic signals.