How to offend a rationalist (who hasn’t thought about it yet): a life lesson

Usually, I don’t get offended at things that people say to me, because I can see at what points in their argument we differ, and what sort of counterargument I could make in response. I can’t get mad at people for having beliefs I think are wrong, since I myself regularly hold beliefs that I later realize were wrong. I can’t get mad at the idea, either, since an idea is either right or wrong, and if it’s wrong, I have the power to say why. And if it turns out I’m wrong, so be it; I’ll adopt new, right beliefs. And so I never got offended about anything.

Un­til one day.

One day, I encountered a belief that should have been easy to refute. Or, rather, easy to dissect, to see whether there was anything wrong with it, and, if there was, to formulate a counterargument. But for seemingly no reason at all, it frustrated me to great, great lengths. My experience was as follows:

I was asking the opinion of a socially progressive friend on what they felt were the founding axioms of social justice, because I was having trouble thinking of them on my own. (They can be derived from any set of fundamental axioms that govern morality, but I wanted something that you could specifically use to describe who is being oppressed, and why.) They seemed to have trouble understanding what I was saying, and it was hard to get an opinion out of them. They also got angry at me for dismissing Tumblr as a legitimate source of social justice. But eventually we got to the heart of the matter, and I discovered a basic disconnect between us: they asked, “Wait, you’re seriously applying a math thing to social justice?” I pondered that for a moment and explained that it isn’t restricted to math at all, and that an axiom in this context can be any belief you use as a basis for your other beliefs. However, then the true problem came to light (after a comparison of me to misguided 18th-century philosophes): “Sorry if it offends you, I just don’t think in general that you should apply this stuff to society. Like… no.”

And that did it. For the rest of the day, I wreaked physical havoc, and emotionally alienated everyone I interacted with. I even seriously contemplated suicide. I wasn’t angry at my friend in particular for having said that. For the first time, I was angry at an idea: that belief systems about certain things should not be internally consistent, should not follow logical rules. It was extremely difficult to construct an argument against, because all of my arguments had logically consistent bases, and were thus invalid in its face.

I’m glad that I encountered that belief, though, like all beliefs, since I was able to resolve it in the end, and make peace with it. I came to the following conclusions:

  1. In order to make a rationalist extremely aggravated, you can tell them that you don’t think that belief structures should be internally logically consistent. (After 12-24 hours, they acquire lifetime immunity to this trick.)

  2. Belief structures do not necessarily have to be internally logically consistent. However, consistent systems are better, for the following reason: belief systems are used for deriving actions to take. Many actions oriented towards the same goal will make progress in accomplishing that goal. Making progress in accomplishing goals is a desirable thing. An inconsistent belief system will generate actions oriented towards non-constant goals, which will interfere destructively with each other and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress. Therefore, assuming the first few statements, having an internally consistent belief system is desirable! Having reduced it to an epistemological problem (do people really desire progress? can actions actually accomplish things?), I now only have epistemological anarchism to deal with, which seems to work less well in practice than the scientific method, so I can ignore it.

  3. No matter how offended you are about something, thinking about it will still resolve the issue.

Does anyone have anything to add to this? Did I miss any sort of deeper reasons I could be using for this? Granted, my solution only works if you want to accomplish goals, and use your belief system to generate actions to accomplish goals, but I think that’s fairly universal.