Moral Anti-Epistemology

This post is a half-baked idea that I'm posting here in order to get feedback and further brainstorming. There seem to be some interesting parallels between epistemology and ethics.

Part 1: Moral Anti-Epistemology

“Anti-epistemology” refers to bad rules of reasoning that exist not because they are useful or truth-tracking, but because they are good at preserving people's cherished beliefs about the world. But cherished beliefs don't just concern factual questions; they also very much concern moral issues. Therefore, we should expect there to be a lot of moral anti-epistemology.

Tradition as a moral argument, tu quoque, opposition to the use of thought experiments, the noncentral fallacy, slogans like “morality is from humans for humans” – all these are instances of the same general phenomenon. This is trivial and doesn't add much to the already well-known fact that humans often rationalize, but it does add the memetic perspective: moral rationalizations sometimes concern more than a single instance; they can affect the entire way people reason about morality. And as with religion or pseudoscience in the epistemology of factual claims, there could be entire memeplexes centered around moral anti-epistemology.

A complication is that metaethics is complicated: it is unclear what exactly moral reasoning is, and whether everyone is trying to do the same thing when they engage in what they think of as moral reasoning. Labelling something “moral anti-epistemology” would suggest that there is a correct way to think about morality. Is there? As long as we always make sure to clarify what it is that we're trying to accomplish, it would seem possible to differentiate between valid and invalid arguments with regard to the specified goal. And this is where moral anti-epistemology might cause trouble.

Are there reasons to assume that certain popular ethical beliefs are a result of moral anti-epistemology? Deontology comes to mind (mostly because it's my usual suspect when it comes to odd reasoning in ethics), but what is it about deontology that relies on “faulty moral reasoning”, if indeed something about it does? How much of it relies on the noncentral fallacy, for instance? Is Yvain's personal opinion that “much of deontology is just an attempt to formalize and justify this fallacy” correct? The perspective of moral anti-epistemology would suggest that it is the other way around: deontology might be the by-product of people applying the noncentral fallacy, which is done because it helps protect cherished beliefs. Which beliefs would those be? Perhaps the strongly felt intuition that “some things are JUST WRONG”, which doesn't handle fuzzy concepts and boundaries well and therefore has to be combined with a dogmatic approach. This sounds somewhat plausible, but also really speculative.

Part 2: Memetics

A lot of people are skeptical of these memetic just-so stories. They argue that the points made are either too trivial or too speculative. I have the intuition that a memetic perspective often helps clarify things, and my thoughts about applying the concept of anti-epistemology to ethics seemed like an insight, but I have a hard time coming up with how my expectations about the world have changed because of it. What, if anything, is the value of the idea I just presented? Can I now form a prediction to test whether deontologists primarily want to formalize and justify the noncentral fallacy, or whether they instead want to justify something else by making use of the noncentral fallacy?

Anti-epistemology is a more general model of what is going on in the world than rationalizations are, so it should all reduce to rationalizations in the end. So it shouldn't be worrying that I don't magically find more stuff. Perhaps my expectations were too high and I should be content with having found a way to categorize moral rationalizations, the knowledge of which will make me slightly quicker at spotting or predicting them.

Thoughts?