To first order, moral realism and moral anti-realism are the same thing

I’ve taken a somewhat caricatured view of moral realism[1], describing it, essentially, as the random walk of a process defined by its “stopping” properties.

In this view, people start improving their morality according to certain criteria (self-consistency, simplicity, what they would believe if they were smarter, etc.) and continue this process until the criteria are finally met. Because there is no way of knowing how “far” this process can run before the criteria are met, it can drift very far indeed from its starting point.
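This “improve until the stopping criteria are met” picture can be sketched as a toy simulation. Everything concrete below is invented for illustration: a “morality” is just an integer, each refinement is a random unit step, and the stopping criterion (landing on a nonzero multiple of ten) is arbitrary. The point is only that the stopping rule guarantees nothing about how far the end point lies from the start.

```python
import random

def refine(morality, criteria_met, step):
    """Repeatedly 'improve' a morality until the stopping criteria are met.

    A toy stand-in for the process in the text: the representation,
    the step rule, and the criterion are all illustrative assumptions.
    """
    steps = 0
    while not criteria_met(morality):
        morality = step(morality)
        steps += 1
    return morality, steps

random.seed(0)  # reproducible run
final, n = refine(
    morality=0,                                      # starting morality
    criteria_met=lambda m: m != 0 and m % 10 == 0,   # arbitrary "stopping" property
    step=lambda m: m + random.choice([-1, 1]),       # one small "improvement"
)
# The loop guarantees the end point satisfies the criterion, but says
# nothing about how many steps that takes: at least ten here, and in
# general unboundedly many.
```

Since a symmetric random walk eventually hits any threshold, the process always terminates, but the number of steps, and hence the distance travelled from the starting morality, is unbounded in advance.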

Now I would like to be able to argue, from a very anti-realist perspective, that:

  • Argument A: I want to be able to judge that one morality is better than another, based on some personal intuition or judgement of correctness. I want to be able to judge that a morality is alien and evil, even if it is fully self-consistent according to formal criteria, while preferring one that is not fully self-consistent.

Moral realists look like moral anti-realists

Now, I maintain that this “random walk to a stopping point” is an accurate description of many (most?) moral realist systems. But it’s a terrible description of moral realists. In practice, most moral realists allow for the possibility of moral uncertainty, and hence that their preferred approach might have a small chance of being wrong.

And how would they identify that wrongness? By looking outside the formal process, and checking whether the path that the moral “self-improvement” is taking is plausible, and doesn’t lead to obviously terrible outcomes.

So, to pick one example from Wei Dai (similar examples can be found in this post on self-deception, and in the “Senator Cruz” section of Scott Alexander’s “debate questions” post):

I’m envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won’t be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.

If the moral realist approach included getting into conversations with such systems and thus getting randomly subverted, then the moral realists I know would agree that the approach had failed, no matter how internally consistent it seemed. Thus they allow, in practice, some considerations akin to Argument A: where the moral process ends up (or at least the path that it takes) can affect their belief that the moral realist conclusion is correct.

So moral realists, in practice, do have conditional meta-preferences that can override their moral realist system. Indeed, most moral realists don’t have a fully designed system yet, but rather a rough overview of what they want, with some details they expect to fill in later. From the perspective of the here and now, they have some preferences, some strong meta-preferences (on how the system should work), and some conditional meta-preferences (on how the design of the system should work, conditional on certain facts or arguments they will learn later).

Moral anti-realists look like moral realists

Enough picking on moral realists; let’s look now at moral anti-realists, which is relatively easy for me, as I’m one of them. Suppose I were to investigate an area of morality that I haven’t investigated before; say, political theory of justice.

Then I would expect that, as I investigated this area, I would start to develop better categories than I have now, with crisper and more principled boundaries. I would expect to meet arguments that would change how I feel and what I value in these areas. I would apply simplicity arguments to make the hodgepodge of half-baked ideas that I currently have in that area more elegant.

In short, I would expect to engage in moral learning. Which is a peculiar thing for a moral anti-realist to expect...

The first-order similarity

So, to generalise a bit across the two categories:

  1. Moral realists are willing to question the truth of their systems based on facts about the world that should formally be irrelevant to that truth, and to use their own private judgement in these cases.

  2. Moral anti-realists are willing to engage in something that looks like moral learning.

Note that the justifications of the two points of view are different: the moral realist can point to moral uncertainty, the moral anti-realist to personal preferences for a more consistent system. And the long-term perspectives are different: the moral realist expects that their process will likely converge to something with fantastic properties, while the moral anti-realist thinks it likely that the degree of moral learning is sharply limited, extending only a few “iterations” beyond their current morality.

Still, in practice, and to a short-term, first-order approximation, moral realists and moral anti-realists seem very similar. Which is probably why they can continue to have conversations and debates that are not immediately pointless.

  1. I apologise for my simplistic understanding and definitions of moral realism. However, my partial experience in this field has been enough to convince me that there are many incompatible definitions of moral realism, and many arguments about them, so it’s not clear there is a single simple thing to understand. So I’ve tried to define it very roughly, enough so that the gist of this post makes sense. ↩︎