Moral uncertainty: What kind of ‘should’ is involved?

This post follows on from my prior post; consider reading that post first.

We are often forced to make decisions under conditions of uncertainty. This may be empirical uncertainty (e.g., what is the likelihood that nuclear war would cause human extinction?), or it may be moral uncertainty (e.g., does the wellbeing of future generations matter morally?).

In my prior post, I discussed overlaps with and distinctions between moral uncertainty and related concepts. In this post, I continue my attempt to clarify what moral uncertainty actually is (rather than how to make decisions when morally uncertain, which is covered later in the sequence). Specifically, here I’ll discuss:

  1. Is what we “ought to do” (or “should do”) under moral uncertainty an objective or subjective (i.e., belief-relative) matter?

  2. Is what we “ought to do” (or “should do”) under moral uncertainty a matter of rationality or morality?

An important aim will be simply clarifying the questions and terms themselves. That said, to foreshadow, the tentative “answers” I’ll arrive at are:

  1. It seems both more intuitive and more action-guiding to say that the “ought” is subjective.

  2. Whether the “ought” is a rational or a moral one may be a “merely verbal” dispute with no practical significance. But I’m very confident that interpreting the “ought” as a matter of rationality works in any case (i.e., whether or not interpreting it as a matter of morality does, and whether or not the distinction really matters).

This post doesn’t explicitly address what types of moral uncertainty would be meaningful for moral antirealists and/or subjectivists, or explore why a person (or agent) might perceive themselves to be morally uncertain (as opposed to what moral uncertainty “really is”). Those matters will be the subject of a later post.[1]

Epistemic status: The concepts covered here are broad, fuzzy, and overlap in various ways, making definitions and distinctions between them almost inevitably debatable. Additionally, I’m not an expert in these topics (though I have now spent a couple weeks mostly reading about them). I’ve tried to mostly collect, summarise, and synthesise existing ideas (from academic philosophy and the LessWrong and EA communities). I’d appreciate feedback or comments in relation to any mistakes, unclear phrasings, etc. (and just in general!).

Objective or subjective?

(Note: What I discuss here is not the same as the objectivism vs subjectivism debate in metaethics.)

As I noted in a prior post:

Subjective normativity relates to what one should do based on what one believes, whereas objective normativity relates to what one “actually” should do (i.e., based on the true state of affairs).

Hilary Greaves & Owen Cotton-Barratt give an example of this distinction in the context of empirical uncertainty:

Suppose Alice packs the waterproofs but, as the day turns out, it does not rain. Does it follow that Alice made the wrong decision? In one (objective) sense of “wrong”, yes: thanks to that decision, she experienced the mild but unnecessary inconvenience of carrying bulky raingear around all day. But in a second (more subjective) sense, clearly it need not follow that the decision was wrong: if the probability of rain was sufficiently high and Alice sufficiently dislikes getting wet, her decision could easily be the appropriate one to make given her state of ignorance about how the weather would in fact turn out. Normative theories of decision-making under uncertainty aim to capture this second, more subjective, type of evaluation; the standard such account is expected utility theory.

Greaves & Cotton-Barratt then make the analogous distinction for moral uncertainty:

How should one choose, when facing relevant moral uncertainty? In one (objective) sense, of course, what one should do is simply what the true moral hypothesis says one should do. But it seems there is also a second sense of “should”, analogous to the subjective “should” for empirical uncertainty, capturing the sense in which it is appropriate for the agent facing moral uncertainty to be guided by her moral credences [i.e., beliefs], whatever the moral facts may be. (emphasis added)

(This objective vs subjective distinction seems to me somewhat similar—though not identical—to the distinction between ex post and ex ante thinking. We might say that Alice made the right decision ex ante—i.e., based on what she knew when she made her decision—even if it turned out—ex post—that the other decision would’ve worked out better.)
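
To make the subjective (expected-utility) evaluation concrete, here is a minimal sketch in Python. The rain probability and the utility numbers are purely illustrative assumptions of mine, not figures from Greaves & Cotton-Barratt:

```python
# A minimal sketch of the subjective (expected-utility) evaluation of Alice's choice.
# The probability and utility values are illustrative assumptions, not from the source.

p_rain = 0.6  # assumed probability of rain, on Alice's evidence

# Assumed utilities for each (action, weather) outcome
utilities = {
    ("pack waterproofs", "rain"): -1,     # stays dry, but carries bulky gear
    ("pack waterproofs", "no rain"): -1,  # carries the gear unnecessarily
    ("leave waterproofs", "rain"): -10,   # gets soaked (Alice strongly dislikes this)
    ("leave waterproofs", "no rain"): 0,  # the best outcome ex post
}

def expected_utility(action):
    """Credence-weighted utility of an action, given Alice's beliefs about the weather."""
    return (p_rain * utilities[(action, "rain")]
            + (1 - p_rain) * utilities[(action, "no rain")])

for action in ("pack waterproofs", "leave waterproofs"):
    print(action, expected_utility(action))

# Packing the waterproofs maximises expected utility (-1 vs -6), so it is the
# subjectively appropriate choice even if, ex post, it happens not to rain.
```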

MacAskill notes that, in both the empirical and moral contexts, “The principal argument for thinking that there must be a subjective sense of ‘ought’ is because the objective sense of ‘ought’ is not sufficiently action-guiding.” He illustrates this in the case of moral uncertainty with the following example:

Susan is a doctor, who faces three sick individuals, Greg, Harold and Harry. Greg is a human patient, whereas Harold and Harry are chimpanzees. They all suffer from the same condition. She has a vial of a drug, D. If she administers all of drug D to Greg, he will be completely cured, and if she administers all of drug D to the chimpanzees, they will both be completely cured (health 100%). If she splits the drug between the three, then Greg will be almost completely cured (health 99%), and Harold and Harry will be partially cured (health 50%). She is unsure about the value of the welfare of non-human animals: she thinks it is equally likely that chimpanzees’ welfare has no moral value and that chimpanzees’ welfare has the same moral value as human welfare. And, let us suppose, there is no way that she can improve her epistemic state with respect to the relative value of humans and chimpanzees.

[...]

Her three options are as follows:

A: Give all of the drug to Greg

B: Split the drug

C: Give all of the drug to Harold and Harry

Her decision can be represented in the following table, using numbers to represent how good each outcome would be.

Finally, suppose that, according to the true moral theory, chimpanzee welfare is of the same moral value as human welfare and that therefore, she should give all of the drug to Harold and Harry. What should she do?

Clearly, the best outcome would occur if Susan does C. But she doesn’t know that that would cause the best outcome, because she doesn’t know what the “true moral theory” is. She thus has no way to act on the advice “Just do what is objectively morally right.” Meanwhile, as MacAskill notes, “it seems it would be morally reckless for Susan not to choose option B: given what she knows, she would be risking severe wrongdoing by choosing either option A or option C” (emphasis added).
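
To see how a belief-relative evaluation handles Susan’s case, here is a minimal sketch. The outcome values are my own illustrative assumptions (MacAskill’s table isn’t reproduced above; I’ve simply summed the quoted health percentages over whichever individuals each hypothesis counts morally), and the simple credence-weighted average is just one candidate way of aggregating across moral theories, not necessarily MacAskill’s own account:

```python
# A minimal sketch of Susan's decision under moral uncertainty.
# The outcome values are illustrative assumptions, and the credence-weighted
# average is just one candidate way of aggregating across moral theories.

credences = {
    "chimpanzee welfare has no moral value": 0.5,
    "chimpanzee welfare equals human welfare": 0.5,
}

# Assumed value of each option under each moral hypothesis (health % summed over
# whichever individuals the hypothesis counts morally).
values = {
    "A: give all of the drug to Greg": {
        "chimpanzee welfare has no moral value": 100,
        "chimpanzee welfare equals human welfare": 100,
    },
    "B: split the drug": {
        "chimpanzee welfare has no moral value": 99,
        "chimpanzee welfare equals human welfare": 199,  # 99 + 50 + 50
    },
    "C: give all of the drug to Harold and Harry": {
        "chimpanzee welfare has no moral value": 0,
        "chimpanzee welfare equals human welfare": 200,
    },
}

def credence_weighted_value(option):
    """Expected moral value of an option, weighting each hypothesis by Susan's credence in it."""
    return sum(credences[theory] * values[option][theory] for theory in credences)

for option in values:
    print(option, credence_weighted_value(option))

# A and C each score 100 in expectation, while B scores 149. On this sketch, B
# hedges against the risk of severe wrongdoing, matching the intuition that
# choosing A or C would be "morally reckless" given Susan's 50/50 credences.
```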

To capture the intuition that Susan should choose option B, and to provide actually followable guidance for action, we need to accept that there is a subjective sense of “should” (or of “ought”) - a sense of “should” that depends in part on what one believes. (This could also be called a “belief-relative” or “credence-relative” sense of “should”.)[2]

An additional argument in favour of accepting that there’s a subjective “should” in relation to moral uncertainty is consistency with how we treat empirical uncertainty, where most people accept that there’s a subjective “should”.[3] This argument is made regularly, including by MacAskill and by Greaves & Cotton-Barratt, and it seems particularly compelling when one considers that it’s often difficult to draw clear lines between empirical and moral uncertainty (see my prior post). That is, if it’s often hard to say whether an uncertainty is empirical or moral, it seems strange to say we should accept a subjective “should” under empirical uncertainty but not under moral uncertainty.

Ultimately, most of what I’ve read on moral uncertainty is premised on there being a subjective sense of “should”, and much of this sequence will rest on that premise also.[4] As far as I can tell, this seems necessary if we are to come up with any meaningful, action-guiding approaches for decision-making under moral uncertainty (“metanormative theories”).

But I should note that some writers do appear to argue that there’s only an objective sense of “should” (one example, I think, is Weatherson, though he uses different language and I’ve only skimmed his paper). Furthermore, while I can’t see how this could lead to action-guiding principles for making decisions under uncertainty, it does seem to me that it’d still allow for resolving one’s uncertainty. In other words, if we do recognise only objective “oughts”:

  • We may be stuck with fairly useless principles for decision-making, such as “Just do what’s actually right, even when you don’t know what’s actually right”

  • But (as far as I can tell) we could still be guided to clarify and reduce our uncertainties, and thereby bring our beliefs more in line with what’s actually right.

Rational or moral?

There is also debate about precisely what kind of “should” is involved [in cases of moral uncertainty]: rational, moral, or something else again. (Greaves & Cotton-Barratt)

For example, in the above example of Susan the doctor, are we wondering what she rationally ought to do, given her moral uncertainty about the moral status of chimpanzees, or what she morally ought to do?

It may not matter either way

Unfortunately, even after having read up on this, it’s not actually clear to me what the distinction is meant to be. In particular, I haven’t come across a clear explanation of what it would mean for the “should” or “ought” to be moral. I suspect that what that would mean would be partly a matter of interpretation, and that some definitions of a “moral” should could be effectively the same as those for a “rational” should. (But I should note that I didn’t look exhaustively for such explanations and definitions.)

Additionally, both Greaves & Cotton-Barratt and MacAskill explicitly avoid the question of whether what one “ought to do” under moral uncertainty is a matter of rationality or morality.[5] This does not seem to hold them back at all from making valuable contributions to the literature on moral uncertainty (and, more specifically, on how to make decisions when morally uncertain).

Together, the above points make me inclined to believe (though with low confidence) that this may be a “merely verbal” debate with no real, practical implications (at least while the words involved remain as fuzzy as they are).

However, I still did come to two less-dismissive conclusions:

  1. I’m very confident that the project of working out meaningful, action-guiding principles for decision-making under moral uncertainty makes sense if we see the relevant “should” as a rational one. (Note: This doesn’t mean that I think the “should” has to be seen as a rational one.)

  2. I’m less sure whether that project would make sense if we see the relevant “should” as a moral one. (Note: This doesn’t mean I have any particular reason to believe it wouldn’t make sense if we see the “should” as a moral one.)

I provide my reasoning behind these conclusions below, though, given my sense that this debate may lack practical significance, some readers may wish to just skip to the next section.

A rational “should” likely works

Bykvist writes:

An alternative way to understand the ought relevant to moral uncertainty is in terms of rationality (MacAskill et al., forthcoming; Sepielli, 2013). Rationality, in one important sense at least, has to do with what one should do or intend, given one’s beliefs and preferences. This is the kind of rationality that decision theory often is seen as invoking. It can be spelled out in different ways. One is to see it as a matter of coherence: It is rational to do or intend what coheres with one’s beliefs and preferences (Broome, 2013; for a critic, see Arpaly, 2000). Another way to spell it out is to understand it as a matter of rational processes: it is rational to do or intend what would be the output of a rational process, which starts with one’s beliefs and preferences (Kolodny, 2007).

To apply the general idea to moral uncertainty, we do not need to take a stand on which version is correct. We only need to assume that when a conscientious moral agent faces moral uncertainty, she cares about doing right and avoiding doing wrong but is uncertain about the moral status of her actions. She prefers doing right to doing wrong and is indifferent between different right doings (at least when the right doings have the same moral value, that is, none is morally supererogatory). She also cares more about serious wrongdoings than minor wrongdoings. The idea is then to apply traditional decision-theoretical principles, according to which rational choice is some function of the agent’s preferences (utilities) and beliefs (credences). Of course, different decision theories provide different principles (and require different kinds of utility information). But the plausible ones at least agree on cases where one option dominates another.

Suppose that you are considering only two theories (which is to simplify considerably, but we only need a logically possible case): “business as usual,” according to which it is permissible to eat factory-farmed meat and permissible to eat vegetables, and “vegetarianism,” according to which it is impermissible to eat factory-farmed meat and permissible to eat vegetables. Suppose further that you have slightly more confidence in “business as usual.” The option of eating vegetables will dominate the option of eating meat in terms of your own preferences: No matter which moral theory is true, by eating vegetables, you will ensure an outcome that you weakly [prefer] to the alternative outcome: if “vegetarianism” is true, you prefer the outcome; if “business as usual” is true, you are indifferent between the outcomes. The rational thing for you to do is thus to eat vegetables, given your beliefs and preferences. (line breaks added)

It seems to me that that reasoning makes perfect sense, and that we can have valid, meaningful, action-guiding principles about what one rationally (and subjectively) should do given one’s moral uncertainty. This seems further supported by the approach Christian Tarsney takes, which seems to be useful and to also treat the relevant “should” as a rational one.
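
As a quick sketch of the dominance reasoning in the Bykvist quote above (the preference values below are purely illustrative assumptions of mine):

```python
# A minimal sketch of the dominance check Bykvist describes. The preference values
# are illustrative assumptions: 1 = the agent regards the act as permissible (right)
# under that theory, 0 = impermissible (wrong), and the agent prefers doing right.

theories = ["business as usual", "vegetarianism"]

preference = {
    ("eat factory-farmed meat", "business as usual"): 1,
    ("eat factory-farmed meat", "vegetarianism"): 0,
    ("eat vegetables", "business as usual"): 1,
    ("eat vegetables", "vegetarianism"): 1,
}

def weakly_dominates(a, b):
    """a weakly dominates b if a is at least as preferred under every theory
    and strictly preferred under at least one."""
    at_least_as_good = all(preference[(a, t)] >= preference[(b, t)] for t in theories)
    strictly_better = any(preference[(a, t)] > preference[(b, t)] for t in theories)
    return at_least_as_good and strictly_better

print(weakly_dominates("eat vegetables", "eat factory-farmed meat"))  # True

# Because eating vegetables weakly dominates eating meat, any plausible decision
# theory, whatever the exact credences, recommends eating vegetables here.
```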

Furthermore, MacAskill seems to suggest that there’s a correlation between (a) writers fully engaging with the project of working out action-guiding principles for decision-making under moral uncertainty and (b) writers considering the relevant “should” to be rational (rather than moral):

(Lockhart 2000, 24, 26), (Sepielli 2009, 10) and (Ross 2006) all take metanormative norms to be norms of rationality. (Weatherson 2014) and (Harman 2014) both understand metanormative norms as moral norms. So there is an odd situation in the literature where the defenders of metanormativism (Lockhart, Ross, and Sepielli) and the critics of the view (Weatherson and Harman) seem to be talking past one another.

A moral “should” may or may not work

I haven’t seen any writer (a) explicitly state that they understand the relevant “should” to be a moral one, and then (b) go on to fully engage with the project of working out meaningful, action-guiding principles for decision-making under moral uncertainty. Thus, I have an absence of evidence that one can engage in that project while seeing the “should” as moral, and I take this as (very weak) evidence that one can’t engage in that project while seeing the “should” that way.

Additionally, as noted above, MacAskill writes that Weatherson and Harman (who seem fairly dismissive of that project) see the relevant “should” as a moral one. Arguably, this is evidence that the project of finding such action-guiding principles won’t make sense if we see the “should” as moral (rather than rational). However, I consider this to also be very weak evidence, because:

  • It’s only two data points.

  • It’s just a correlation anyway.

  • I haven’t closely investigated the “correlation” myself. That is, I haven’t checked whether or not Weatherson and Harman’s reasons for dismissiveness seem highly related to them seeing the “should” as moral rather than rational.

Closing remarks

In this post, I’ve aimed to:

  • Clarify what is meant by the question “Is what we ‘ought to do’ under moral uncertainty an objective or subjective matter?”

  • Clarify what is meant by the question “Is that ‘ought’ a matter of rationality or of morality?”

  • Argue that it seems both more intuitive and more action-guiding to say that the “ought” is subjective.

  • Argue that whether the “ought” is a rational or a moral one may be a “merely verbal” dispute with no practical significance (but that interpreting the “ought” as a matter of rationality works in any case).

I hope this has helped give readers more clarity on the seemingly neglected matter of what we actually mean by moral uncertainty. (And as always, I’d welcome any feedback or comments!)

My next posts will continue in a similar vein, but this time building to the question of whether, when we’re talking about moral uncertainty, we’re actually talking about moral risk rather than about moral (Knightian) uncertainty—and whether such a distinction is truly meaningful. (To do so, I’ll first discuss the risk-uncertainty distinction in general, and the related matter of unknown unknowns, before applying these ideas in the context of moral risk/uncertainty in particular.)


  1. But the current post is still relevant for many types of moral antirealist. As noted in my last post, this sequence will sometimes use language that may appear to endorse or presume moral realism, but this is essentially just for convenience.

  2. We could further divide subjective normativity up into, roughly, “what one should do based on what one actually believes” and “what one should do based on what it would be reasonable for one to believe”. The following quote, while not directly addressing that exact distinction, seems relevant:

    Before moving on, we should distinguish subjective credences, that is, degrees of belief, from epistemic credences, that is, the degree of belief that one is epistemically justified in having, given one’s evidence. When I use the term ‘credence’ I refer to epistemic credences (though much of my discussion could be applied to a parallel discussion involving subjective credences); when I want to refer to subjective credences I use the term ‘degrees of belief’.

    The reason for this is that appropriateness seems to have some sort of normative force: if it is most appropriate for someone to do something, it seems that, other things being equal, they ought, in the relevant sense of ‘ought’, to do it. But people can have crazy beliefs: a psychopath might think that a killing spree is the most moral thing to do. But there’s no sense in which the psychopath ought to go on a killing spree: rather, he ought to revise his beliefs. We can only capture that idea if we talk about epistemic credences, rather than degrees of belief.

    (I found that quote in this comment, where it’s attributed to MacAskill’s BPhil thesis. Unfortunately, I can’t seem to access that thesis, including via Wayback Machine.)

  3. Though note that Greaves and Cotton-Barratt write:

    Not everyone does recognise a subjective reading of the moral ‘ought’, even in the case of empirical uncertainty. One can distinguish between objectivist, (rational-)credence-relative and pluralist views on this matter. According to objectivists (Moore, 1903; Moore, 1912; Ross, 1930, p.32; Thomson, 1986, esp. pp. 177-9; Graham, 2010; Bykvist and Olson, 2011) (respectively, credence-relativists (Prichard, 1933; Ross, 1939; Howard-Snyder, 2005; Zimmermann, 2006; Zimmerman, 2009; Mason, 2013)), the “ought” of morality is uniquely an objective (respectively, a credence-relative) one. According to pluralists, “ought” is ambiguous between these two readings (Russell, 1966; Gibbard, 2005; Parfit, 2011; Portmore, 2011; Dorsey, 2012; Olsen, 2017), or varies between the two readings according to context (Kolodny and Macfarlane, 2010).

  4. In the following quote, Bykvist provides what seems to me (if I’m interpreting it correctly) to be a different way of explaining something similar to the objective vs subjective distinction.

    One possible explanation of why so few philosophers have engaged with moral uncertainty might be serious doubt about whether it makes much sense to ask about what one ought to do when one is uncertain about what one ought to do. The obvious answer to this question might be thought to be: “you ought to do what you ought to do, no matter whether or not you are certain about it” (Weatherson, 2002, 2014). However, this assumes the same sense of “ought” throughout.

    A better option is to assume that there are different kinds of moral ought. We are asking what we morally ought to do, in one sense of ought, when we are not certain about what we morally ought to do, in another sense of ought. One way to make this idea more precise is to think about the different senses as different levels of moral ought. When we face a moral problem, we are asking what we morally ought to do, at the first level. Standard moral theories, such as utilitarianism, Kantianism, and virtue ethics, provide answers to this question. In a case of moral uncertainty, we are moving up one level and asking about what we ought to do, at the second level, when we are not sure what we ought to do at the first level. At this second level, we take into account our credence in various hypotheses about what we ought to do at the first level and what these hypotheses say about the moral value of each action (MacAskill et al., forthcoming). This second-level ought provides a way to cope with the moral uncertainty at the first level. It gives us a verdict of how to best manage the risk of doing first-order moral wrongs. That there is such a second-level moral ought of coping with first-order moral risks seems to be supported by the fact that agents are morally criticizable when they, knowing all the relevant empirical facts, do what they think is very likely to be a first-order moral wrong when there is another option that is known not to pose any risk of such wrongdoing.

    Yet another (and I think similar) way of framing this sort of distinction could make use of the following two terms: “A criterion of rightness tells us what it takes for an action to be right (if it’s actions we’re looking at). A decision procedure is something that we use when we’re thinking about what to do” (Askell).

    Specifically, we might say that the true first-order moral theory provides objective “criteria of rightness”, but that we don’t have direct access to what these are. As such, we can use a second-order “decision procedure” that attempts to lead us to take actions that are as close as possible to the best actions (according to the unknown criteria of rightness). To do so, this decision procedure must make use of our credences (beliefs) in various moral theories, and is thus subjective.

  5. Greaves & Cotton-Barratt write: “For the purpose of this article, we will [...] not take a stand on what kind of “should” [is involved in cases of moral uncertainty]. Our question is how the “should” in question behaves in purely extensional terms. Say that an answer to that question is a metanormative theory.”

    MacAskill writes: “I introduce the technical term ‘appropriateness’ in order to remain neutral on the issue of whether metanormative norms are rational norms, or some other sort of norms (though noting that they can’t be first-order norms provided by first-order normative theories, on pain of inconsistency).”