Varieties Of Argumentative Experience

In 2008, Paul Graham wrote How To Disagree, ranking arguments on a scale from name-calling to explicitly refuting the other person’s central point.

And that’s why, ever since 2008, In­ter­net ar­gu­ments have gen­er­ally been civil and pro­duc­tive.

Gra­ham’s hi­er­ar­chy is use­ful for its in­tended pur­pose, but it isn’t re­ally a hi­er­ar­chy of dis­agree­ments. It’s a hi­er­ar­chy of types of re­sponse, within a dis­agree­ment. Some­times things are re­fu­ta­tions of other peo­ple’s points, but the points should never have been made at all, and re­fut­ing them doesn’t help. Some­times it’s un­clear how the ar­gu­ment even con­nects to the sorts of things that in prin­ci­ple could be proven or re­futed.

If we were to clas­sify dis­agree­ments them­selves – talk about what peo­ple are do­ing when they’re even hav­ing an ar­gu­ment – I think it would look some­thing like this:

Most peo­ple are ei­ther meta-de­bat­ing – de­bat­ing whether some par­ties in the de­bate are vi­o­lat­ing norms – or they’re just sham­ing, try­ing to push one side of the de­bate out­side the bounds of re­spectabil­ity.

If you can get past that level, you end up dis­cussing facts (blue column on the left) and/​or philoso­phiz­ing about how the ar­gu­ment has to fit to­gether be­fore one side is “right” or “wrong” (red column on the right). Either of these can be any­where from throw­ing out a one-line claim and adding “Check­mate, athe­ists” at the end of it, to co­op­er­at­ing with the other per­son to try to figure out ex­actly what con­sid­er­a­tions are rele­vant and which sources best re­solve them.

If you can get past that level, you run into re­ally high-level dis­agree­ments about over­all moral sys­tems, or which goods are more valuable than oth­ers, or what “free­dom” means, or stuff like that. Th­ese are ba­si­cally un­re­solv­able with any­thing less than a life­time of philo­soph­i­cal work, but they usu­ally al­low mu­tual un­der­stand­ing and re­spect.

I’m not say­ing ev­ery­thing fits into this model, or even that most things do. It’s just a way of think­ing that I’ve found helpful. More de­tail on what I mean by each level:

Meta-de­bate is dis­cus­sion of the de­bate it­self rather than the ideas be­ing de­bated. Is one side be­ing hyp­o­crit­i­cal? Are some of the ar­gu­ments in­volved offen­sive? Is some­one be­ing silenced? What bi­ases mo­ti­vate ei­ther side? Is some­one ig­no­rant? Is some­one a “fa­natic”? Are their be­liefs a “re­li­gion”? Is some­one defy­ing a con­sen­sus? Who is the un­der­dog? I’ve placed it in a sphinx out­side the pyra­mid to em­pha­size that it’s not a bad ar­gu­ment for the thing, it’s just an ar­gu­ment about some­thing com­pletely differ­ent.

“Gun con­trol pro­po­nents are just ter­rified of guns, and if they had more ex­pe­rience with them their fear would go away.”

“It was wrong for gun con­trol op­po­nents to pre­vent the CDC from re­search­ing gun statis­tics more thor­oughly.”

“Se­na­tors who op­pose gun con­trol are in the pocket of the NRA.”

“It’s in­sen­si­tive to start bring­ing up gun con­trol hours af­ter a mass shoot­ing.”

Sometimes meta-debate can be good, productive, or necessary. For example, I think discussing “the origins of the Trump phenomenon” is interesting and important, and not just an attempt to bulverize the question of whether Trump is a good president or not. And if you want to maintain discussion norms, sometimes you do have to have discussions about who’s violating them. I even think it can sometimes be helpful to argue about which side is the underdog.

But it’s not the debate itself, and it’s also much more fun than the debate. It’s an inherently social question, the sort of who’s-high-status and who’s-defecting-against-group-norms question that we like a little too much. If people have to choose between this and some sort of boring scientific question about when fetuses gain brain function, they’ll choose this every time; given the chance, meta-debate will crowd out everything else.

The other rea­son it’s in the sphinx is be­cause its proper func­tion is to guard the de­bate. Sure, you could spend your time writ­ing a long es­say about why cre­ation­ists’ ob­jec­tions to ra­dio­car­bon dat­ing are wrong. But the meta-de­bate is what tells you cre­ation­ists gen­er­ally aren’t good de­bate part­ners and you shouldn’t get in­volved.

So­cial sham­ing also isn’t an ar­gu­ment. It’s a de­mand for listen­ers to place some­one out­side the bound­ary of peo­ple who de­serve to be heard; to clas­sify them as so re­pug­nant that ar­gu­ing with them is only dig­nify­ing them. If it works, sup­port­ing one side of an ar­gu­ment im­poses so much rep­u­ta­tional cost that only a few weirdos dare to do it, it sinks out­side the Over­ton Win­dow, and the other side wins by de­fault.

“I can’t be­lieve it’s 2018 and we’re still let­ting trans­pho­bes on this fo­rum.”

“Just an­other pur­ple-haired SJW snowflake who thinks all dis­agree­ment is op­pres­sion.”

“Really, do con­ser­va­tives have any con­sis­tent be­liefs other than hat­ing black peo­ple and want­ing the poor to starve?”

“I see we’ve got a Sili­con Valley tech­bro STEMlord autist here.”

No­body ex­pects this to con­vince any­one. That’s why I don’t like the term “ad hominem”, which im­plies that shamers are idiots who are too stupid to re­al­ize that call­ing some­one names doesn’t re­fute their point. That’s not the prob­lem. Peo­ple who use this strat­egy know ex­actly what they’re do­ing and are of­ten quite suc­cess­ful. The goal is not to con­vince their op­po­nents, or even to hurt their op­po­nent’s feel­ings, but to demon­strate so­cial norms to by­stan­ders. “Ad hominem” has the wrong im­pli­ca­tions. “So­cial sham­ing” gets it right.

Some­times this works on a so­ciety-wide level. More of­ten, it’s an at­tempt to claim a cer­tain space, kind of like the in­tel­lec­tual equiv­a­lent of a gang sign. If the Jets can graf­fiti “FUCK THE SHARKS” on a cer­tain bridge, but the Sharks can’t get away with graf­fit­ing “NO ACTUALLY FUCK THE JETS” on the same bridge, then al­most by defi­ni­tion that bridge is in the Jets’ ter­ri­tory. This is part of the pro­cess that cre­ates po­lariza­tion and echo cham­bers. If you see an at­tempt at so­cial sham­ing and feel trig­gered, that’s the sec­ond-best re­sult from the per­spec­tive of the per­son who put it up. The best re­sult is that you never went into that space at all. This isn’t just about keep­ing con­ser­va­tives out of so­cial­ist spaces. It’s also about defin­ing what kind of so­cial­ist the so­cial­ist space is for, and what kind of ideas good so­cial­ists are or aren’t al­lowed to hold.

I think easily 90% of online discussion is of this form right now, including some long and carefully-written thinkpieces with lots of citations. The point isn’t that it literally uses the word “fuck”; the point is that the active ingredient isn’t persuasiveness, it’s the ability to make some people feel like they’re suffering social costs for their opinion. Even good, persuasive arguments can be used this way if someone links them on Facebook with “This is why I keep saying Democrats are dumb” underneath.

This is similar to meta-debate, except that meta-debate can sometimes be cooperative and productive – both Trump supporters and Trump opponents could in theory work together trying to figure out the origins of the “Trump phenomenon” – and meta-debate is at least sort of an attempt to resolve the argument, in a sense, in a way that shaming is not.

Gotchas are short claims that pur­port to be dev­as­tat­ing proof that one side can’t pos­si­bly be right.

“If you like big gov­ern­ment so much, why don’t you move to Cuba?”

“Isn’t it ironic that most pro-lifers are also against welfare and free health care? Guess they only care about ba­bies un­til they’re born.”

“When guns are out­lawed, only out­laws will have guns.”

These are snappy but almost always stupid. People may not move to Cuba because they don’t want a government that big, because governments can be big in many ways, some of which are bad, because governments can vary along dimensions other than how big they are, because countries can vary along dimensions other than what their governments are, or just because moving is hard and disruptive.

They may sometimes suggest what might, with a lot more work, be a good point. For example, the last one could be transformed into an argument like “Since it’s possible to get guns illegally with some effort, and criminals need guns to commit their crimes and are comfortable with breaking laws, it might only slightly decrease the number of guns available to criminals. And it might greatly decrease the number of guns available to law-abiding people hoping to defend themselves. So the cost of people not being able to defend themselves might be greater than the benefit of fewer criminals being able to commit crimes.” I don’t think I agree with this argument, and I might challenge assumptions like “criminals aren’t that much less likely to have guns if they’re illegal” or “law-abiding gun owners using guns in self-defense is common and an important factor to include in our calculations”. But this would be a reasonable argument and not just a gotcha. The original is a gotcha precisely because it doesn’t invite this level of analysis or even seem aware that it’s possible. It’s not saying “calculate the value of these parameters, because I think they work out in a way where this is a pretty strong argument against controlling guns”. It’s saying “gotcha!”.

Sin­gle facts are when some­one pre­sents one fact, which ad­mit­tedly does sup­port their ar­gu­ment, as if it solves the de­bate in and of it­self. It’s the same sort of situ­a­tion as one of the bet­ter gotchas – it could be changed into a de­cent ar­gu­ment, with work. But pre­sent­ing it as if it’s sup­posed to change some­one’s mind in and of it­self is naive and sort of an ag­gres­sive act.

“The UK has gun con­trol, and the mur­der rate there is only a quar­ter of ours.”

“The fe­tus has a work­ing brain as early as the first trimester.”

“Don­ald Trump is known to have cheated his em­ploy­ees and sub­con­trac­tors.”

“Hillary Clin­ton han­dled her emails in a scan­dalously in­com­pe­tent man­ner and tried to cover it up.”

These are all potentially good points, with at least two caveats. First, correlation isn’t causation – the UK’s low murder rates might not be caused by their gun control, and maybe not all communist countries inevitably end up like the USSR. Second, even things with some bad features can be overall net good. Trump could be a dishonest businessman, but still have other good qualities. Hillary Clinton may be crap at email security, but skilled at other things. Even if these facts are true and causal, they only prove that a plan has at least one bad quality. At best, they would need to be followed up by an argument for why this particular bad quality is really important.

I think the move from shaming to good argument is kind of a continuum, and this level is around the middle. At some point, saying “I can’t believe you would support someone who could do that with her emails!” is just trying to bait Hillary supporters. And any Hillary supporter who thinks it’s really important to argue the specifics of why the emails aren’t that bad, instead of focusing on the bigger picture, is taking the bait, or getting stuck in a mindset where they feel threatened if they admit there’s anything bad about Hillary, or just feeling defensive.

Single studies are better than scattered facts, since they at least prove that some competent person has looked into the issue formally.

“This pa­per from Gary Kleck shows that more guns ac­tu­ally cause less crime.”

“Th­ese peo­ple looked at the ev­i­dence and proved that sup­port for Trump is mo­ti­vated by au­thor­i­tar­i­anism.”

“I think you’ll find economists have already in­ves­ti­gated this and that the min­i­mum wage doesn’t cost jobs.”

“There’s ac­tu­ally a lot of proof by peo­ple an­a­lyz­ing many differ­ent elec­tions that money doesn’t in­fluence poli­tics.”

We’ve already dis­cussed this here be­fore. Scien­tific stud­ies are much less re­li­able guides to truth than most peo­ple think. On any con­tro­ver­sial is­sue, there are usu­ally many peer-re­viewed stud­ies sup­port­ing each side. Some­times these stud­ies are just wrong. Other times they in­ves­ti­gate a much weaker sub­prob­lem but get billed as solv­ing the larger prob­lem.

There are dozens of stud­ies prov­ing the min­i­mum wage does de­stroy jobs, and dozens of stud­ies prov­ing it doesn’t. Prob­a­bly it de­pends a lot on the par­tic­u­lar job, the size of the min­i­mum wage, how the econ­omy is do­ing oth­er­wise, etc, etc, etc. Gary Kleck does have a lot of stud­ies show­ing that more guns de­crease crime, but a lot of other crim­i­nol­o­gists dis­agree with him. Both sides will have plau­si­ble-sound­ing rea­sons for why the other’s stud­ies have been con­clu­sively de­bunked on ac­count of all sorts of bias and con­founders, but you will ac­tu­ally have to look through those rea­sons and see if they’re right.

Usu­ally the sci­en­tific con­sen­sus on sub­jects like these will be as good as you can get, but don’t trust that you know the sci­en­tific con­sen­sus un­less you have read ac­tual well-con­ducted sur­veys of sci­en­tists in the field. Your echo cham­ber tel­ling you “the sci­en­tific con­sen­sus agrees with us” is definitely not suffi­cient.

A good-faith survey of evidence is what you get when you take all of the above into account, stop trying to devastate the other person with a mountain of facts that can’t possibly be wrong, and start looking at the studies and arguments on both sides to figure out what kind of complex picture they paint.

“Of the meta-analy­ses on the min­i­mum wage, three seem to sug­gest it doesn’t cost jobs, and two seem to sug­gest it does. Look­ing at the po­ten­tial con­founders in each, I trust the ones say­ing it doesn’t cost jobs more.”

“The lat­est sur­veys say more than 97% of cli­mate sci­en­tists think the earth is warm­ing, so even though I’ve looked at your ar­gu­ments for why it might not be, I think we have to go with the con­sen­sus on this one.”

“The jus­tice sys­tem seems racially bi­ased at the sen­tenc­ing stage, but not at the ar­rest or ver­dict stages.”

“It looks like this level of gun con­trol would cause 500 fewer mur­ders a year, but also pre­vent 50 law-abid­ing gun own­ers from defend­ing them­selves. Over­all I think that would be worth it.”

Iso­lated de­mands for rigor are at­tempts to de­mand that an op­pos­ing ar­gu­ment be held to such strict in­vented-on-the-spot stan­dards that noth­ing (in­clud­ing com­mon-sense state­ments ev­ery­one agrees with) could pos­si­bly clear the bar.

“You can’t be an athe­ist if you can’t prove God doesn’t ex­ist.”

“Since you benefit from cap­i­tal­ism and all the wealth it’s made available to you, it’s hyp­o­crit­i­cal for you to op­pose it.”

“Cap­i­tal pun­ish­ment is just state-sanc­tioned mur­der.”

“When peo­ple still crit­i­cize Trump even though the econ­omy is do­ing so well, it proves they never cared about pros­per­ity and are just blindly loyal to their party.”

The first is wrong be­cause you can dis­be­lieve in Bigfoot with­out be­ing able to prove Bigfoot doesn’t ex­ist – “you can never doubt some­thing un­less you can prove it doesn’t ex­ist” is a fake rule we never ap­ply to any­thing else. The sec­ond is wrong be­cause you can be against racism even if you are a white per­son who pre­sum­ably benefits from it; “you can never op­pose some­thing that benefits you” is a fake rule we never ap­ply to any­thing else. The third is wrong be­cause eg prison is just state-sanc­tioned kid­nap­ping; “it is ex­actly as wrong for the state to do some­thing as for a ran­dom crim­i­nal to do it” is a fake rule we never ap­ply to any­thing else. The fourth is wrong be­cause Repub­li­cans have also been against lead­ers who presided over good economies and pre­sum­ably thought this was a rea­son­able thing to do; “it’s im­pos­si­ble to hon­estly op­pose some­one even when there’s a good econ­omy” is a fake rule we never ap­ply to any­thing else.

I don’t think these are necessarily badly-intentioned. We don’t have a good explicit understanding of what high-level principles we use, and we tend to make them up on the spot to fit object-level cases. But here they act to derail the argument into a stupid debate over whether it’s okay to even discuss the issue without having 100% perfect impossible rigor. The solution is exactly the sort of “proving too much” argument used in the last paragraph. Then you can agree to use normal standards of rigor for the argument and move on to your real disagreements.

Th­ese are re­lated to fully gen­eral coun­ter­ar­gu­ments like “sorry, you can’t solve ev­ery prob­lem with X”, though usu­ally these are more meta-de­bate than de­bate.

Disput­ing defi­ni­tions is when an ar­gu­ment hinges on the mean­ing of words, or whether some­thing counts as a mem­ber of a cat­e­gory or not.

“Trans­gen­der is a men­tal ill­ness.”

“The Soviet Union wasn’t re­ally com­mu­nist.”

“Want­ing English as the offi­cial lan­guage is racist.”

“Abor­tion is mur­der.”

“No­body in the US is re­ally poor, by global stan­dards.”

It might be im­por­tant on a so­cial ba­sis what we call these things; for ex­am­ple, the so­cial per­cep­tion of trans­gen­der might shift based on whether it was com­monly thought of as a men­tal ill­ness or not. But if a spe­cific ar­gu­ment be­tween two peo­ple starts hing­ing on one of these ques­tions, chances are some­thing has gone wrong; nei­ther fac­tual nor moral ques­tions should de­pend on a dis­pute over the way we use words. This Guide To Words is a long and com­pre­hen­sive re­source about these situ­a­tions and how to get past them into what­ever the real dis­agree­ment is.

Clar­ify­ing is when peo­ple try to figure out ex­actly what their op­po­nent’s po­si­tion is.

“So com­mu­nists think there shouldn’t be pri­vate own­er­ship of fac­to­ries, but there might still be pri­vate own­er­ship of things like houses and fur­ni­ture?”

“Are you op­posed to laws say­ing that con­victed felons can’t get guns? What about laws say­ing that there has to be a wait­ing pe­riod?”

“Do you think there can ever be such a thing as a just war?”

This can some­times be hos­tile and coun­ter­pro­duc­tive. I’ve seen too many ar­gu­ments de­gen­er­ate into some form of “So you’re say­ing that rape is good and we should have more of it, are you?” No. No­body is ever say­ing that. If some­one thinks the other side is say­ing that, they’ve stopped do­ing hon­est clar­ifi­ca­tion and got­ten more into the perfor­ma­tive sham­ing side.

But there are a lot of mi­s­un­der­stand­ings about peo­ple’s po­si­tions. Some of this is be­cause the space of things peo­ple can be­lieve is very wide and it’s hard to un­der­stand ex­actly what some­one is say­ing. More of it is be­cause par­ti­san echo cham­bers can de­liber­ately spread mis­rep­re­sen­ta­tions or cliched ver­sions of an op­po­nent’s ar­gu­ments in or­der to make them look stupid, and it takes some time to re­al­ize that real op­po­nents don’t always match the stereo­type. And some­times it’s be­cause peo­ple don’t always have their po­si­tions down in de­tail them­selves (eg com­mu­nists’ un­cer­tainty about what ex­actly a com­mu­nist state would look like). At its best, clar­ifi­ca­tion can help the other per­son no­tice holes in their own opinions and re­veal leaps in logic that might le­gi­t­i­mately de­serve to be ques­tioned.

Oper­a­tional­iz­ing is where both par­ties un­der­stand they’re in a co­op­er­a­tive effort to fix ex­actly what they’re ar­gu­ing about, where the goal­posts are, and what all of their terms mean.

“When I say the Soviet Union was com­mu­nist, I mean that the state con­trol­led ba­si­cally all of the econ­omy. Do you agree that’s what we’re de­bat­ing here?”

“I mean that a gun buy­back pro­gram similar to the one in Aus­tralia would prob­a­bly lead to less gun crime in the United States and hun­dreds of lives saved per year.”

“If the US were to raise the na­tional min­i­mum wage to $15, the av­er­age poor per­son would be bet­ter off.”

“I’m not in­ter­ested in de­bat­ing whether the IPCC es­ti­mates of global warm­ing might be too high, I’m in­ter­ested in whether the real es­ti­mate is still bad enough that mil­lions of peo­ple could die.”

An ar­gu­ment is op­er­a­tional­ized when ev­ery part of it has ei­ther been re­duced to a fac­tual ques­tion with a real an­swer (even if we don’t know what it is), or when it’s ob­vi­ous ex­actly what kind of non-fac­tual dis­agree­ment is go­ing on (for ex­am­ple, a differ­ence in moral sys­tems, or a differ­ence in in­tu­itions about what’s im­por­tant).

The Cen­ter for Ap­plied Ra­tion­al­ity pro­motes dou­ble-crux­ing, a spe­cific tech­nique that helps peo­ple op­er­a­tional­ize ar­gu­ments. A dou­ble-crux is a sin­gle sub­ques­tion where both sides ad­mit that if they were wrong about the sub­ques­tion, they would change their mind. For ex­am­ple, if Alice (gun con­trol op­po­nent) would sup­port gun con­trol if she knew it low­ered crime, and Bob (gun con­trol sup­porter) would op­pose gun con­trol if he knew it would make crime worse – then the only thing they have to talk about is crime. They can ig­nore whether guns are im­por­tant for re­sist­ing tyranny. They can ig­nore the role of mass shoot­ings. They can ig­nore whether the NRA spokesman made an offen­sive com­ment one time. They just have to fo­cus on crime – and that’s the sort of thing which at least in prin­ci­ple is tractable to stud­ies and statis­tics and sci­en­tific con­sen­sus.

Not ev­ery ar­gu­ment will have dou­ble-cruxes. Alice might still op­pose gun con­trol if it only low­ered crime a lit­tle, but also vastly in­creased the risk of the gov­ern­ment be­com­ing au­thor­i­tar­ian. A lot of things – like a de­ci­sion to vote for Hillary in­stead of Trump – might be based on a hun­dred lit­tle con­sid­er­a­tions rather than a sin­gle de­bat­able point.

But at the very least, you might be able to find a bunch of more limited cruxes. For example, a Trump supporter might admit he would probably vote Hillary if he learned that Trump was more likely to start a war than Hillary was. This isn’t quite as likely to end the whole disagreement in one fell swoop – but it still gives a more fruitful avenue for debate than the usual fact-scattering.

High-level generators of disagreement are what remains when everyone understands exactly what’s being argued, and agrees on what all the evidence says, but people still have vague and hard-to-define reasons for disagreeing anyway. In retrospect, these are probably why the disagreement arose in the first place, with a lot of the more specific points being downstream of them and kind of made-up justifications. These are almost impossible to resolve even in principle.

“I feel like a pop­u­lace that owns guns is free and has some level of con­trol over its own des­tiny, but that if they take away our guns we’re pretty much just sub­jects and have to hope the gov­ern­ment treats us well.”

“Yes, there are some ar­gu­ments for why this war might be just, and how it might liber­ate peo­ple who are suffer­ing ter­ribly. But I feel like we always hear this kind of thing and it never pans out. And ev­ery time we de­clare war, that re­in­forces a cul­ture where things can be solved by force. I think we need to take an un­con­di­tional stance against ag­gres­sive war, always and for­ever.”

“Even though I can’t tell you how this reg­u­la­tion would go wrong, in past ex­pe­rience a lot of well-in­ten­tioned reg­u­la­tions have ended up back­firing hor­ribly. I just think we should have a bias against solv­ing all prob­lems by reg­u­lat­ing them.”

“Cap­i­tal pun­ish­ment might de­crease crime, but I draw the line at in­ten­tion­ally kil­ling peo­ple. I don’t want to live in a so­ciety that does that, no mat­ter what its rea­sons.”

Some of these involve what social signal an action might send; for example, even a just war might have the subtle effect of legitimizing war in people’s minds. Others involve cases where we expect our information to be biased or our analysis to be inaccurate; for example, if past regulations that seemed good have gone wrong, we might expect the next one to go wrong even if we can’t think of arguments against it. Others involve differences in very vague and long-term predictions, like whether it’s reasonable to worry about the government descending into tyranny or anarchy. Others involve fundamentally different moral systems, like whether it’s okay to kill someone for a greater good. And the most frustrating involve chaotic and uncomputable situations that have to be solved by metis or phronesis or similar-sounding Greek words, where different people’s Greek words give them different opinions.

You can always try de­bat­ing these points fur­ther. But these sorts of high-level gen­er­a­tors are usu­ally formed from hun­dreds of differ­ent cases and can’t eas­ily be sim­plified or dis­proven. Maybe the best you can do is share the situ­a­tions that led to you hav­ing the gen­er­a­tors you do. Some­times good art can help.

The high-level generators of disagreement can sound a lot like really bad and stupid arguments from previous levels. “We just have fundamentally different values” can sound a lot like “You’re just an evil person”. “I’ve got a heuristic here based on a lot of other cases I’ve seen” can sound a lot like “I prefer anecdotal evidence to facts”. And “I don’t think we can trust explicit reasoning in an area as fraught as this” can sound a lot like “I hate logic and am going to do whatever my biases say”. If there’s a difference, I think it comes from having gone through all the previous steps – having confirmed that the other person knows as much as you do, and that you might be intellectual equals who are both equally concerned about doing the moral thing – and realizing that both of you alike are controlled by high-level generators. High-level generators aren’t biases in the sense of mistakes. They’re the strategies everyone uses to guide themselves in uncertain situations.

This doesn’t mean ev­ery­one is equally right and okay. You’ve reached this level when you agree that the situ­a­tion is com­pli­cated enough that a rea­son­able per­son with rea­son­able high-level gen­er­a­tors could dis­agree with you. If 100% of the ev­i­dence sup­ports your side, and there’s no rea­son­able way that any set of sane heuris­tics or caveats could make some­one dis­agree, then (un­less you’re miss­ing some­thing) your op­po­nent might just be an idiot.

Some thoughts on the over­all ar­range­ment:

1. If any­body in an ar­gu­ment is op­er­at­ing on a low level, the en­tire ar­gu­ment is now on that low level. First, be­cause peo­ple will feel com­pel­led to re­fute the low-level point be­fore con­tin­u­ing. Se­cond, be­cause we’re only hu­man, and if some­one tries to shame/​gotcha you, the nat­u­ral re­sponse is to try to shame/​gotcha them back.

2. The blue column on the left is fac­tual dis­agree­ments; the red column on the right is philo­soph­i­cal dis­agree­ments. The high­est level you’ll be able to get to is the low­est of where you are on the two columns.

3. Higher lev­els re­quire more vuln­er­a­bil­ity. If you ad­mit that the data are mixed but seem to slightly fa­vor your side, and your op­po­nent says that ev­ery good study ever has always fa­vored his side plus also you are a racist com­mu­nist – well, you kind of walked into that one. In par­tic­u­lar, ex­plor­ing high-level gen­er­a­tors of dis­agree­ment re­quires a lot of trust, since some­one who is at all hos­tile can eas­ily frame this as “See! He ad­mits that he’s bi­ased and just go­ing off his in­tu­itions!”

4. If you hold the con­ver­sa­tion in pri­vate, you’re al­most guaran­teed to avoid ev­ery­thing be­low the lower dot­ted line. Every­thing be­low that is a show put on for spec­ta­tors.

5. If you’re in­tel­li­gent, de­cent, and philo­soph­i­cally so­phis­ti­cated, you can avoid ev­ery­thing be­low the higher dot­ted line. Every­thing be­low that is ei­ther a show or some form of mis­take; ev­ery­thing above it is im­pos­si­ble to avoid no mat­ter how great you are.

6. The shorter and more pub­lic the medium, the more pres­sure there is to stick to the lower lev­els. Twit­ter is great for sham­ing, but it’s al­most im­pos­si­ble to have a good-faith sur­vey of ev­i­dence there, or use it to op­er­a­tional­ize a tricky defi­ni­tional ques­tion.

7. Some­times the high-level gen­er­a­tors of dis­agree­ment are other, even more com­pli­cated ques­tions. For ex­am­ple, a lot of peo­ple’s views come from their re­li­gion. Now you’ve got a whole differ­ent de­bate.

8. And a lot of the facts you have to agree on in a survey of the evidence are also complicated. I once saw a communism vs. capitalism argument degenerate into a discussion of whether government works better than private industry, then whether NASA was better than SpaceX, then whether some particular NASA rocket engine design was better than a corresponding SpaceX design. I never did learn whether they figured out whose rocket engine was better, or whether that helped them solve the communism vs. capitalism question. But it seems pretty clear that the degeneration into subquestions and discovery of superquestions can go on forever. This is the stage a lot of discussions get bogged down in, and one reason why pruning techniques like double-cruxes are so important.

9. Try to clas­sify ar­gu­ments you see in the wild on this sys­tem, and you find that some fit and oth­ers don’t. But the main thing you find is how few real ar­gu­ments there are. This is some­thing I tried to ham­mer in dur­ing the last elec­tion, when peo­ple were com­plain­ing “Well, we tried to de­bate Trump sup­port­ers, they didn’t change their mind, guess rea­son and democ­racy don’t work”. Ar­gu­ments above the first dot­ted line are rare; ar­gu­ments above the sec­ond ba­si­cally nonex­is­tent in pub­lic un­less you look re­ally hard.

But what’s the point? If you’re just go­ing to end up at the high-level gen­er­a­tors of dis­agree­ment, why do all the work?

First, be­cause if you do it right you’ll end up re­spect­ing the other per­son. Go­ing through all the mo­tions might not pro­duce agree­ment, but it should pro­duce the feel­ing that the other per­son came to their be­lief hon­estly, isn’t just stupid and evil, and can be rea­soned with on other sub­jects. The nat­u­ral ten­dency is to as­sume that peo­ple on the other side just don’t know (or de­liber­ately avoid know­ing) the facts, or are us­ing weird per­verse rules of rea­son­ing to en­sure they get the con­clu­sions they want. Go through the whole pro­cess, and you will find some ig­no­rance, and you will find some bias, but they’ll prob­a­bly be on both sides, and the ex­act way they work might sur­prise you.

Se­cond, be­cause – and this is to­tal con­jec­ture – this deals a tiny bit of dam­age to the high-level gen­er­a­tors of dis­agree­ment. I think of these as Bayesian pri­ors; you’ve looked at a hun­dred cases, all of them have been X, so when you see some­thing that looks like not-X, you can as­sume you’re wrong – see the ex­am­ple above where the liber­tar­ian ad­mits there is no clear ar­gu­ment against this par­tic­u­lar reg­u­la­tion, but is wary enough of reg­u­la­tions to sus­pect there’s some­thing they’re miss­ing. But in this kind of math, the prior shifts the per­cep­tion of the ev­i­dence, but the ev­i­dence also shifts the per­cep­tion of the prior.

Imagine that, throughout your life, you’ve learned that UFO stories are fakes and hoaxes. Some friend of yours sees a UFO, and you assume (based on your priors) that it’s probably fake. They try to convince you. They show you the spot in their backyard where it landed and singed the grass. They show you the mysterious metal object they took as a souvenir. It seems plausible, but you still have too much of a prior on UFOs being fake, and so you assume they made it up.

Now imagine another friend has the same experience, and also shows you good evidence. And you hear about someone the next town over who says the same thing. After ten or twenty of these, maybe you start wondering if there’s something to all of these UFO stories. Your overall skepticism of UFOs has made you dismiss each particular story, but each story has also dealt a little damage to your overall skepticism.
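The arithmetic behind this is just Bayes’ rule applied repeatedly. A minimal sketch, with entirely made-up numbers for the prior and for how convincing each story is: a very skeptical prior shrugs off any single story, but twenty independent stories multiply up and overwhelm it.

```python
def posterior_after_stories(prior, lr, n):
    """Posterior probability after n independent pieces of evidence,
    each with likelihood ratio lr = P(evidence | true) / P(evidence | false)."""
    odds = prior / (1 - prior) * lr ** n  # prior odds times the combined likelihood ratio
    return odds / (1 + odds)              # convert odds back to a probability

# Hypothetical numbers: you start out very skeptical (prior 0.1%),
# and each story is only weakly convincing (likelihood ratio 1.8).
for n in [1, 5, 10, 20]:
    print(n, round(posterior_after_stories(0.001, 1.8, n), 3))
# One story barely moves you, but after twenty the posterior is ≈0.99.
```

The same multiplication runs in both directions: the skeptical prior discounts each story, while each story quietly erodes the prior.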

I think the high-level gen­er­a­tors might work the same way. The liber­tar­ian says “Every­thing I’ve learned thus far makes me think gov­ern­ment reg­u­la­tions fail.” You demon­strate what looks like a suc­cess­ful gov­ern­ment reg­u­la­tion. The liber­tar­ian doubts, but also be­comes slightly more re­cep­tive to the pos­si­bil­ity of those reg­u­la­tions oc­ca­sion­ally be­ing use­ful. Do this a hun­dred times, and they might be more will­ing to ac­cept reg­u­la­tions in gen­eral.

As the old saying goes, “First they ignore you, then they laugh at you, then they fight you, then they fight you half-heartedly, then they’re neutral, then they grudgingly say you might have a point even though you’re annoying, then they say on balance you’re mostly right although you ignore some of the most important facets of the issue, then you win.”

I no­tice SSC com­menter John Nerst is talk­ing about a sci­ence of dis­agree­ment and has set up a sub­red­dit for dis­cussing it. I only learned about it af­ter mostly finish­ing this post, so I haven’t looked into it as much as I should, but it might make good fol­lowup read­ing.