Moral uncertainty vs related concepts

Overview

How important is the well-being of non-human animals compared with the well-being of humans?

How much should we spend on helping strangers in need?

How much should we care about future generations?

How should we weigh reasons of autonomy and respect against reasons of benevolence?

Few could honestly say that they are fully certain about the answers to these pressing moral questions. Part of the reason we feel less than fully certain about the answers has to do with uncertainty about empirical facts. We are uncertain about whether fish can feel pain, whether we can really help strangers far away, or what we could do for people in the far future. However, sometimes, the uncertainty is fundamentally moral. [...] Even if we were to come to know all the relevant non-normative facts, we could still waver about whether it is right to kill an animal for a very small benefit for a human, whether we have strong duties to help strangers in need, and whether future people matter as much as current ones. Fundamental moral uncertainty can also be more general as when we are uncertain about whether a certain moral theory is correct. (Bykvist; emphasis added)[1]

I consider the above quote a great starting point for understanding what moral uncertainty is; it gives clear examples of moral uncertainties, and contrasts these with related empirical uncertainties. From what I’ve seen, a lot of academic work on moral uncertainty essentially opens with something like the above, then notes that the rational approach to decision-making under empirical uncertainty is typically considered to be expected utility theory, then discusses various approaches for decision-making under moral uncertainty.

That’s fair enough, as no one article can cover everything, but it also leaves open some major questions about what moral uncertainty actually is.[2] These include:

  1. How, more precisely, can we draw lines between moral and empirical uncertainty?

  2. What are the overlaps and distinctions between moral uncertainty and other related concepts, such as normative, metanormative, decision-theoretic, and metaethical uncertainty, as well as value pluralism?

    • My prior post answers similar questions about how morality overlaps with and differs from related concepts, and may be worth reading before this one.

  3. Is what we “ought to do” under moral uncertainty an objective or subjective matter?

  4. Is what we “ought to do” under moral uncertainty a matter of rationality or morality?

  5. Are we talking about “moral risk” or about “moral (Knightian) uncertainty” (if such a distinction is truly meaningful)?

  6. What “types” of moral uncertainty are meaningful for moral antirealists and/or subjectivists?[3]

In this post, I collect and summarise ideas from academic philosophy and the LessWrong and EA communities in an attempt to answer the first two of the above questions (or to at least clarify what the questions mean, and what the most plausible answers are). My next few posts will do the same for the remaining questions.

I hope this will benefit readers by facilitating clearer thinking and discussion. For example, a better understanding of the nature and types of moral uncertainty may aid in determining how to resolve (i.e., reduce or clarify) one’s uncertainty, which I’ll discuss two posts from now. (How to make decisions given moral uncertainty is discussed later in this sequence.)

Epistemic status: The concepts covered here are broad, fuzzy, and overlap in various ways, making definitions and distinctions between them almost inevitably debatable. Additionally, I’m not an expert in these topics (though I have now spent a couple of weeks mostly reading about them). I’ve tried to mostly collect, summarise, and synthesise existing ideas. I’d appreciate feedback or comments in relation to any mistakes, unclear phrasings, etc. (and just in general!).

Empirical uncertainty

In the quote at the start of this post, Bykvist (the author) seemed to imply that it was easy to identify which uncertainties in that example were empirical and which were moral. However, in many cases, the lines aren’t so clear. This is perhaps most obvious with regards to, as Christian Tarsney puts it:

Certain cases of uncertainty about moral considerability (or moral status more generally) [which] turn on metaphysical uncertainties that resist easy classification as empirical or moral.

[For example,] In the abortion debate, uncertainty about when in the course of development the fetus/infant comes to count as a person is neither straightforwardly empirical nor straightforwardly moral. Likewise for uncertainty in Catholic moral theology about the time of ensoulment, the moment between conception and birth at which God endows the fetus with a human soul [...]. Nevertheless, it seems strange to regard these uncertainties as fundamentally different from more clearly empirical uncertainties about the moral status of the developing fetus (e.g., uncertainty about where in the gestation process complex mental activity, self-awareness, or the capacity to experience pain first emerge), or from more clearly moral uncertainties (e.g., uncertainty, given a certainty that the fetus is a person, whether it is permissible to cause the death of such a person when doing so will result in more total happiness and less total suffering).[4]

And there are also other types of cases in which it seems hard to find clear, non-arbitrary lines between moral and empirical uncertainties (some of which Tarsney [p. 140-146] also discusses).[5] Altogether, I expect that drawing such lines will quite often be difficult.

Fortunately, we may not actually need to draw such lines anyway. In fact, as I discuss in my post on making decisions under both moral and empirical uncertainty, many approaches for handling moral uncertainty were consciously designed by analogy to approaches for handling empirical uncertainty, and it seems to me that they can easily be extended to handle both moral and empirical uncertainty, without having to distinguish between those “types” of uncertainty.[6][7]

The situation is a little less clear when it comes to resolving one’s uncertainty (rather than just making decisions given uncertainty). It seems at first glance that you might need to investigate different “types” of uncertainty in different ways. For example, if I’m uncertain whether fish react to pain in a certain way, I might need to read studies about that, whereas if I’m uncertain what “moral status” fish deserve (even assuming that I know all the relevant empirical facts), then I might need to engage in moral reflection. However, it seems to me that the key difference in such examples is what the uncertainties are actually about, rather than specifically whether a given uncertainty should be classified as “moral” or “empirical”.

(It’s also worth quickly noting that the topic of “cluelessness” is only about empirical uncertainty—specifically, uncertainty regarding the consequences that one’s actions will have. Cluelessness thus won’t be addressed in my posts on moral uncertainty, although I do plan to later write about it separately.)

Normative uncertainty

As I noted in my prior post:

A normative statement is any statement related to what one should do, what one ought to do, which of two things are better, or similar. [...] Normativity is thus the overarching category (superset) of which things like morality, prudence [essentially meaning the part of normativity that has to do with one’s own self-interest, happiness, or wellbeing], and arguably rationality are just subsets.

In the same way, normative uncertainty is a broader concept, of which moral uncertainty is just one component. Other components could include:

  • prudential uncertainty

  • decision-theoretic uncertainty (covered below)

  • metaethical uncertainty (also covered below) - although perhaps it’d make more sense to see metaethical uncertainty as instead just feeding into one’s moral uncertainty

Despite this, academic sources seem to commonly either:

  • focus only on moral uncertainty, or

  • state or imply that essentially the same approaches for decision-making will work for both moral uncertainty in particular and normative uncertainty in general (which seems to me a fairly reasonable assumption).

On this matter, Tarsney writes:

Fundamentally, the topic of the coming chapters will be the problem of normative uncertainty, which can be roughly characterized as uncertainty about one’s objective reasons that is not a result of some underlying empirical uncertainty (uncertainty about the state of concretia). However, I will confine myself almost exclusively to questions about moral uncertainty: uncertainty about one’s objective moral reasons that is not a result of etc etc. This is in part merely a matter of vocabulary: “moral uncertainty” is a bit less cumbersome than “normative uncertainty,” a consideration that bears some weight when the chosen expression must occur dozens of times per chapter. It is also in part because the vast majority of the literature on normative uncertainty deals specifically with moral uncertainty, and because moral uncertainty provides more than enough difficult problems and interesting examples, so that there is no need to venture outside the moral domain.

Additionally, however, focusing on moral uncertainty is a useful simplification that allows us to avoid difficult questions about the relationship between moral and non-moral reasons (though I am hopeful that the theoretical framework I develop can be applied straightforwardly to normative uncertainties of a non-moral kind). For myself, I have no taste for the moral/non-moral distinction: To put it as crudely and polemically as possible, it seems to me that all objective reasons are moral reasons. But this view depends on substantive normative ethical commitments that it is well beyond the scope of this dissertation to defend. [...]

If one does think that all reasons are moral reasons, or that moral reasons always override non-moral reasons, then a complete account of how agents ought to act under moral uncertainty can be given without any discussion of non-moral reasons (Lockhart, 2000, p. 16). To the extent that one does not share either of these assumptions, theories of choice under moral uncertainty must generally be qualified with “insofar as there are no relevant non-moral considerations.”

Somewhat similarly, this sequence will nominally focus on moral uncertainty, even though:

  • some of the work I’m drawing on was nominally focused on normative uncertainty (e.g., Will MacAskill’s thesis)

  • I intend most of what I say to be fairly easily generalisable to normative uncertainty more broadly.

Metanormative uncertainty

In MacAskill’s thesis, he writes that metanormativism is “the view that there are second-order norms that govern action that are relative to a decision-maker’s uncertainty about first-order normative claims. [...] The central metanormative question is [...] about which option it’s appropriate to choose [when a decision-maker is uncertain about which first-order normative theory to believe in]”. MacAskill goes on to write:

A note on terminology: Metanormativism isn’t about normativity, in the way that meta-ethics is about ethics, or that a meta-language is about a language. Rather, ‘meta’ is used in the sense of ‘over’ or ‘beyond’

In essence, metanormativism focuses on what metanormative theories (or “approaches”) should be used for making decisions under normative uncertainty.

We can therefore imagine being metanormatively uncertain: uncertain about what metanormative theories to use for making decisions under normative uncertainty. For example:

  • You’re normatively uncertain if you see multiple (“first-order”) moral theories as possible and these give conflicting suggestions.

  • You’re _meta_normatively uncertain if you’re also unsure whether the best approach for deciding what to do given this uncertainty is the “My Favourite Theory” approach or the “Maximising Expected Choice-worthiness” approach (both of which are explained later in this sequence).
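
To make this concrete, here’s a minimal sketch (with entirely made-up credences and choice-worthiness numbers, and simply assuming that choice-worthiness can be compared across theories, which is itself contested) of how those two metanormative approaches can come apart on the same decision:

```python
# A toy illustration of two metanormative approaches. All numbers are invented,
# and intertheoretic comparability of choice-worthiness is simply assumed.

credences = {"utilitarianism": 0.6, "kantianism": 0.4}

# choiceworthiness[theory][option]: how choice-worthy each option is, by each theory's lights
choiceworthiness = {
    "utilitarianism": {"break promise to help strangers": 10, "keep promise": 4},
    "kantianism":     {"break promise to help strangers": -50, "keep promise": 5},
}
options = ["break promise to help strangers", "keep promise"]

# "My Favourite Theory": act on whichever theory you have the most credence in.
favourite = max(credences, key=credences.get)
mft_choice = max(options, key=lambda o: choiceworthiness[favourite][o])

# "Maximising Expected Choice-worthiness": weight each theory's verdict by your credence in it.
def expected_choiceworthiness(option):
    return sum(credences[t] * choiceworthiness[t][option] for t in credences)

mec_choice = max(options, key=expected_choiceworthiness)

print(mft_choice)  # "break promise to help strangers" (utilitarianism has the most credence)
print(mec_choice)  # "keep promise" (the Kantian downside dominates the expectation)
```

Being metanormatively uncertain, on this picture, is being unsure whether something like the first decision rule or something like the second is the one to apply.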

This leads inevitably to the following thought:

It seems that, just as we can suffer [first-order] normative uncertainty, we can suffer [second-order] metanormative uncertainty as well: we can assign positive probability to conflicting [second-order] metanormative theories. [Third-order] Metametanormative theories, then, are collections of claims about how we ought to act in the face of [second-order] metanormative uncertainty. And so on. In the end, it seems that the very existence of normative claims—the very notion that there are, in some sense or another, ways “one ought to behave”—organically gives rise to an infinite hierarchy of metanormative uncertainty, with which an agent may have to contend in the course of making a decision. (Philip Trammell)

I refer readers interested in this possibility of infinite regress—and potential solutions or reasons not to worry—to Trammell, Tarsney, and MacAskill (p. 217-219). (I won’t discuss those matters further here, and I haven’t properly read those Trammell or Tarsney papers myself.)

Decision-theoretic uncertainty

(Readers who are unfamiliar with the topic of decision theories may wish to read up on that first, or to skip this section.)

MacAskill writes:

Given the trenchant disagreement between intelligent and well-informed philosophers, it seems highly plausible that one should not be certain in either causal or evidential decision theory. In light of this fact, Robert Nozick briefly raised an interesting idea: that perhaps one should take decision-theoretic uncertainty into account in one’s decision-making.

This is precisely analogous to taking uncertainty about first-order moral theories into account in decision-making. Thus, decision-theoretic uncertainty is just another type of normative uncertainty. Furthermore, arguably, it can be handled using the same sorts of “metanormative theories” suggested for handling moral uncertainty (which are discussed later in this sequence).

Chapter 6 of MacAskill’s thesis is dedicated to discussion of this matter, and I refer interested readers there. For example, he writes:

metanormativism about decision theory [is] the idea that there is an important sense of ‘ought’ (though certainly not the only sense of ‘ought’) according to which a decision-maker ought to take decision-theoretic uncertainty into account. I call any metanormative theory that takes decision-theoretic uncertainty into account a type of meta decision theory [- in] contrast to a metanormative view according to which there are norms that are relative to moral and prudential uncertainty, but not relative to decision-theoretic uncertainty.[8]

Metaethical uncertainty

While normative ethics addresses such questions as “What should I do?”, evaluating specific practices and principles of action, meta-ethics addresses questions such as “What is goodness?” and “How can we tell what is good from what is bad?”, seeking to understand the nature of ethical properties and evaluations. (Wikipedia)

To illustrate, normative (or “first-order”) ethics involves debates such as “Consequentialist or deontological theories?”, while _meta_ethics involves debates such as “Moral realism or moral antirealism?” Thus, in just the same way we could be uncertain about first-order ethics (morally uncertain), we could be uncertain about metaethics (metaethically uncertain).

It seems that metaethical uncertainty is rarely discussed; in particular, I’ve found no detailed treatment of how to make decisions under metaethical uncertainty. However, there is one brief comment on the matter in MacAskill’s thesis:

even if one endorsed a meta-ethical view that is inconsistent with the idea that there’s value in gaining more moral information [e.g., certain types of moral antirealism], one should not be certain in that meta-ethical view. And it’s high-stakes whether that view is true — if there are moral facts out there but one thinks there aren’t, that’s a big deal! Even for this sort of antirealist, then, there’s therefore value in moral information, because there’s value in finding out for certain whether that meta-ethical view is correct.

It seems to me that, if and when we face metaethical uncertainties that are relevant to the question of what we should actually do, we could likely use basically the same approaches that are advised for decision-making under moral uncertainty (which I discuss later in this sequence).[9]

Moral pluralism

A different matter that could appear similar to moral uncertainty is moral pluralism (aka value pluralism, aka pluralistic moral theories). According to SEP:

moral pluralism [is] the view that there are many different moral values.

Commonsensically we talk about lots of different values—happiness, liberty, friendship, and so on. The question about pluralism in moral theory is whether these apparently different values are all reducible to one supervalue, or whether we should think that there really are several distinct values.

MacAskill notes that:

Someone who [takes a particular expected-value-style approach to decision-making] under uncertainty about whether only wellbeing, or both knowledge and wellbeing, are of value looks a lot like someone who is conforming with a first-order moral theory that assigns both wellbeing and knowledge value.

In fact, one may even decide to react to moral uncertainty by just no longer having any degree of belief in each of the first-order moral theories they’re uncertain over, and instead having complete belief in a new (and still first-order) moral theory that combines those previously-believed theories.[10] For example, after discussing two approaches for thinking about the “moral weight” of different animals’ experiences, Brian Tomasik writes:

Both of these approaches strike me as having merit, and not only am I not sure which one I would choose, but I might actually choose them both. In other words, more than merely having moral uncertainty between them, I might adopt a “value pluralism” approach and decide to care about both simultaneously, with some trade ratio between the two.[11]
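
As a minimal sketch of the structural difference (with invented moral weights, simplified approach names, and a made-up trade ratio), the pluralist move replaces credences over rival approaches with a single combined view:

```python
# Two hypothetical approaches to animals' "moral weight", with invented numbers.
weight_by_neuron_count = {"human": 1.0, "chicken": 0.002}
weight_by_felt_intensity = {"human": 1.0, "chicken": 0.3}

# Moral uncertainty: keep both approaches as rival hypotheses, each with a credence
# that could later be updated by further reflection or evidence.
credences = {"neuron_count": 0.5, "felt_intensity": 0.5}

# Value pluralism (roughly the move Tomasik describes): fully adopt one combined view,
# with a chosen trade ratio between the two considerations. No credences remain to update.
trade_ratio = 0.5
pluralist_weight = {
    animal: trade_ratio * weight_by_neuron_count[animal]
            + (1 - trade_ratio) * weight_by_felt_intensity[animal]
    for animal in weight_by_neuron_count
}

print(pluralist_weight["chicken"])  # 0.151 under the combined view
```

Numerically, the combined view can coincide with a credence-weighted expected value, which is exactly MacAskill’s observation above; the differences lie in what the two representations let you do afterwards, as the following points bring out.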

But it’s important to note that this really isn’t the same as moral uncertainty; the difference is not merely verbal or merely a matter of framing. For example, if Alan has complete belief in a pluralistic combination of utilitarianism and Kantianism, rather than uncertainty over the two theories:

  1. Alan has no need for a (second-order) metanormative theory for decision-making under moral uncertainty, because he no longer has any moral uncertainty.

    • If instead Alan has less than complete belief in the pluralistic theory, then the moral uncertainty that remains is between the pluralistic theory and whatever other theories he has some belief in (rather than between utilitarianism, Kantianism, and whatever other theories he has some belief in).

  2. We can’t represent the idea of Alan updating to believe more strongly in the Kantian theory, or to believe more strongly in the utilitarian theory.[12]

  3. Relatedly, we’re no longer able to straightforwardly apply the idea of value of information to things that may inform Alan’s degree of belief in each theory (the sketch below illustrates the sort of calculation this refers to).[13]
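
Here is a minimal sketch of the kind of value-of-information calculation referred to in point 3 (again with invented credences and choice-worthiness numbers, and again assuming comparability across theories). It is available when Alan is modelled as morally uncertain between the two theories, but has nothing to operate on once he fully believes the single pluralistic theory:

```python
# Value of (moral) information under moral uncertainty: how much better off, in expected
# choice-worthiness, would Alan be if he could learn which theory is correct before acting?
# All numbers are invented for illustration.

credences = {"utilitarianism": 0.5, "kantianism": 0.5}
choiceworthiness = {
    "utilitarianism": {"act A": 10, "act B": 6},
    "kantianism":     {"act A": -20, "act B": 5},
}
options = ["act A", "act B"]

def expected_choiceworthiness(option):
    return sum(credences[t] * choiceworthiness[t][option] for t in credences)

# Acting now, under uncertainty: pick the option with the highest expected choice-worthiness.
value_act_now = max(expected_choiceworthiness(o) for o in options)

# Acting after (hypothetically) learning which theory is correct: pick the best option
# under each theory, weighted by the current credence that it is the correct one.
value_with_info = sum(
    credences[t] * max(choiceworthiness[t][o] for o in options) for t in credences
)

print(value_with_info - value_act_now)  # 2.0: the expected value of resolving the uncertainty
```

Once Alan instead has complete belief in the pluralistic theory, there are no credences over rival theories for such a calculation to work with (though, as footnote 13 notes, an analogous calculation can still be run over uncertainty about how best to interpret the pluralistic theory).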

Closing remarks

I hope this post helped clarify the distinctions and overlaps between moral uncertainty and related concepts. (And as always, I’d welcome any feedback or comments!) In my next post, I’ll continue exploring what moral uncertainty actually is, this time focusing on the questions:

  1. Is what we “ought to do” under moral uncertainty an objective or subjective matter?

  2. Is what we “ought to do” under moral uncertainty a matter of rationality or morality?


  1. For another indication of why the topic of moral uncertainty as a whole matters, see this quote from Christian Tarsney’s thesis:

    The most popular method of investigation in contemporary analytic moral philosophy, the method of reflective equilibrium based on heavy appeal to intuitive judgments about cases, has come under concerted attack and is regarded by many philosophers (e.g. Singer (2005), Greene (2008)) as deeply suspect. Additionally, every major theoretical approach to moral philosophy (whether at the level of normative ethics or metaethics) is subject to important and intuitively compelling objections, and the resolution of these objections often turns on delicate and methodologically fraught questions in other areas of philosophy like the metaphysics of consciousness or personal identity (Moller, 2011, pp. 428-432). Whatever position one takes on these debates, it can hardly be denied that our understanding of morality remains on a much less sound footing than, say, our knowledge of the natural sciences. If, then, we remain deeply and justifiably uncertain about a litany of important questions in physics, astronomy, and biology, we should certainly be at least equally uncertain about moral matters, even when some particular moral judgment is widely shared and stable upon reflection.

    ↩︎
  2. In an earlier post which influenced this one, Kaj_Sotala wrote:

    I have long been slightly frustrated by the existing discussions about moral uncertainty that I’ve seen. I suspect that the reason has been that they’ve been unclear on what exactly they mean when they say that we are “uncertain about which theory is right”—what is uncertainty about moral theories? Furthermore, especially when discussing things in an FAI [Friendly AI] context, it feels like several different senses of moral uncertainty get mixed together.

    ↩︎
  3. In various places in this sequence, I’ll use language that may appear to endorse or presume moral realism (e.g., referring to “moral information” or to the probability of a particular moral theory being “correct”). But this is essentially just for convenience; I intend this sequence to be as neutral as possible on the matter of moral realism vs antirealism (except when directly focusing on such matters).

    I think that the interpretation and importance of moral uncertainty is clearest for realists, but, as I discuss in this post, I also think that moral uncertainty can still be a meaningful and important topic for many types of moral antirealist. ↩︎

  4. As another example of this sort of case, suppose I want to know whether fish are “conscious”. This may seem on the face of it an empirical question. However, I might not yet know precisely what I mean by “conscious”, and I might in fact only really want to know whether fish are “conscious in a sense I would morally care about”. In this case, the seemingly empirical question becomes hard to disentangle from the (seemingly moral) question: “What forms of consciousness are morally important?”

    And in turn, my answers to that question may be influenced by empirical discoveries. For example, I may initially believe that avoidance of painful stimuli demonstrates consciousness in a morally relevant sense, but then revise that belief when I learn that this behaviour can be displayed in a stimulus-response way by certain extremely simple organisms. ↩︎

  5. The boundaries become even fuzzier, and may lose their meaning entirely, if one assumes the metaethical view of moral naturalism, which:

    refers to any version of moral realism that is consistent with [...] general philosophical naturalism. Moral realism is the view that there are objective, mind-independent moral facts. For the moral naturalist, then, there are objective moral facts, these facts are facts concerning natural things, and we know about them using empirical methods. (SEP)

    This sounds to me like it would mean that all moral uncertainties are effectively empirical uncertainties, and that there’s no difference in how moral vs empirical uncertainties should be resolved or incorporated into decision-making. But note that that’s my own claim; I haven’t seen it made explicitly by writers on these subjects.

    That said, one quote that seems to suggest something like this claim is the following, from Tarsney’s thesis:

    Most generally, naturalistic metaethical views that treat normative ethical theorizing as continuous with natural science will see first-order moral principles as at least epistemically if not metaphysically dependent on features of the empirical world. For instance, on Railton’s (1986) view, moral value attaches (roughly) to social conditions that are stable with respect to certain kinds of feedback mechanisms (like the protest of those who object to their treatment under existing social conditions). What sort(s) of social conditions exhibit this stability, given the relevant background facts about human psychology, is an empirical question. For instance, is a social arrangement in which parents can pass down large advantages to their offspring through inheritance, education, etc, more stable or less stable than one in which the state intervenes extensively to prevent such intergenerational perpetuation of advantage? Someone who accepts a Railtonian metaethic and is therefore uncertain about the first-order normative principles that govern such problems of distributive justice, though on essentially empirical grounds, seems to occupy another sort of liminal space between empirical and moral uncertainty.

    Footnote 15 of this post discusses relevant aspects of moral naturalism, though not this specific question. ↩︎

  6. In fact, Tarsney’s (p.140-146) discussion of the difficulty of disentangling moral and empirical uncertainties is used to argue for the merits of approaching moral uncertainty analogously to how one approaches empirical uncertainty. ↩︎

  7. An alternative approach that also doesn’t require determining whether a given uncertainty is moral or empirical is the “worldview diversification” approach used by the Open Philanthropy Project. In this context, a worldview is described as representing “a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty [...]).” Open Phil “[puts] significant resources behind each worldview that [they] find highly plausible.” This doesn’t require treating moral and empirical uncertainty any differently, and thus doesn’t require drawing lines between those “types” of uncertainty. ↩︎

  8. As with metanormative uncertainty in general, this can lead to complicated regresses. For example, it’s possible to construct causal meta decision theories and evidential meta decision theories, and to be uncertain over which of those meta decision theories to endorse, and so on. As above, see Trammell, Tarsney, and MacAskill (p. 217-219) for discussion of such matters. ↩︎

  9. In a good, short post, Ikaxas writes:

    How should we deal with metaethical uncertainty? [...] One answer is this: insofar as some metaethical issue is relevant for first-order ethical issues, deal with it as you would any other normative uncertainty. And insofar as it is not relevant for first-order ethical issues, ignore it (discounting, of course, intrinsic curiosity and any value knowledge has for its own sake).

    Some people think that normative ethical issues ought to be completely independent of metaethics: “The whole idea [of my metaethical naturalism] is to hold fixed ordinary normative ideas and try to answer some further explanatory questions” (Schroeder [...]). Others [...] believe that metaethical and normative ethical theorizing should inform each other. For the first group, my suggestion in the previous paragraph recommends that they ignore metaethics entirely (again, setting aside any intrinsic motivation to study it), while for the second my suggestion recommends pursuing exclusively those areas which are likely to influence conclusions in normative ethics.

    This seems to me like a good extension/application of general ideas from work on the value of information. (I’ll apply such ideas to moral uncertainty later in this sequence.)

    Tarsney gives an example of the sort of case in which metaethical uncertainty is relevant to decision-making (though that’s not the point he’s making with the example):

    For instance, consider an agent Alex who, like Alice, divides his moral belief between two theories, a hedonistic and a pluralistic version of consequentialism. But suppose that Alex also divides his metaethical beliefs between a robust moral realism and a fairly anemic anti-realism, and that his credence in hedonistic consequentialism is mostly or entirely conditioned on his credence in robust realism while his credence in pluralism is mostly or entirely conditioned on his credence in anti-realism. (Suppose he inclines toward a hedonistic view on which certain qualia have intrinsic value or disvalue entirely independent of our beliefs, attitudes, etc, which we are morally required to maximize. But if this view turns out to be wrong, he believes, then morality can only consist in the pursuit of whatever we contingently happen to value in some distinctively moral way, which includes pleasure but also knowledge, aesthetic goods, friendship, etc.)

    ↩︎
  10. Or, more moderately, one could remove just some degree of belief in some subset of the moral theories that one had some degree of belief in, and place that amount of belief in a new moral theory that combines just that subset of moral theories. E.g., one may initially think utilitarianism, Kantianism, and virtue ethics each have a 33% chance of being “correct”, but then switch to believing that a pluralistic combination of utilitarianism and Kantianism is 67% likely to be correct, while virtue ethics is still 33% likely to be correct. ↩︎

  11. Luke Muehlhauser also appears to endorse a similar approach, though not explicitly in the context of moral uncertainty. And Kaj Sotala also seems to endorse a similar approach, though without using the term “pluralism” (I’ll discuss Kaj’s approach two posts from now). Finally, MacAskill quotes Nozick appearing to endorse a similar approach with regards to decision-theoretic uncertainty:

    I [Nozick] suggest that we go further and say not merely that we are uncertain about which one of these two principles, [CDT] and [EDT], is (all by itself) correct, but that both of these principles are legitimate and each must be given its respective due. The weights, then, are not measures of uncertainty but measures of the legitimate force of each principle. We thus have a normative theory that directs a person to choose an act with maximal decision-value.

    ↩︎
  12. The closest analog would be Alan updating his beliefs about the pluralistic theory’s contents/substance; for example, coming to believe that a more correct interpretation of the theory would lean more in a Kantian direction. (Although, if we accept that such an update is possible, it may arguably be best to represent Alan as having moral uncertainty between different versions of the pluralistic theory, rather than being certain that the pluralistic theory is “correct” but uncertain about what it says.) ↩︎

  13. That said, we can still apply value of information analysis to things like Alan reflecting on how best to interpret the pluralistic moral theory (assuming again that we represent Alan as uncertain about the theory’s contents). A post later in this sequence will be dedicated to how and why to estimate the “value of moral information”. ↩︎