Moral uncertainty vs related concepts

Overview

How important is the well‐being of non‐human animals compared with the well‐being of humans?

How much should we spend on helping strangers in need?

How much should we care about future generations?

How should we weigh reasons of autonomy and respect against reasons of benevolence?

Few could honestly say that they are fully certain about the answers to these pressing moral questions. Part of the reason we feel less than fully certain about the answers has to do with uncertainty about empirical facts. We are uncertain about whether fish can feel pain, whether we can really help strangers far away, or what we could do for people in the far future. However, sometimes, the uncertainty is fundamentally moral. [...] Even if we were to come to know all the relevant non‐normative facts, we could still waver about whether it is right to kill an animal for a very small benefit for a human, whether we have strong duties to help strangers in need, and whether future people matter as much as current ones. Fundamental moral uncertainty can also be more general as when we are uncertain about whether a certain moral theory is correct. (Bykvist; emphasis added)[1]

I consider the above quote a great starting point for understanding what moral uncertainty is; it gives clear examples of moral uncertainties, and contrasts these with related empirical uncertainties. From what I’ve seen, a lot of academic work on moral uncertainty essentially opens with something like the above, then notes that the rational approach to decision-making under empirical uncertainty is typically considered to be expected utility theory, then discusses various approaches for decision-making under moral uncertainty.

That’s fair enough, as no one article can cover everything, but it also leaves open some major questions about what moral uncertainty actually is.[2] These include:

  1. How, more precisely, can we draw lines between moral and empirical uncertainty?

  2. What are the overlaps and distinctions between moral uncertainty and other related concepts, such as normative, metanormative, decision-theoretic, and metaethical uncertainty, as well as value pluralism?

    • My prior post answers similar questions about how morality overlaps with and differs from related concepts, and may be worth reading before this one.

  3. Is what we “ought to do” under moral uncertainty an objective or subjective matter?

  4. Is what we “ought to do” under moral uncertainty a matter of rationality or morality?

  5. Are we talking about “moral risk” or about “moral (Knightian) uncertainty” (if such a distinction is truly meaningful)?

  6. What “types” of moral uncertainty are meaningful for moral antirealists and/or subjectivists?[3]

In this post, I collect and summarise ideas from academic philosophy and the LessWrong and EA communities in an attempt to answer the first two of the above questions (or to at least clarify what the questions mean, and what the most plausible answers are). My next few posts will do the same for the remaining questions.

I hope this will benefit readers by facilitating clearer thinking and discussion. For example, a better understanding of the nature and types of moral uncertainty may aid in determining how to resolve (i.e., reduce or clarify) one’s uncertainty, which I’ll discuss two posts from now. (How to make decisions given moral uncertainty is discussed later in this sequence.)

Epistemic status: The concepts covered here are broad, fuzzy, and overlap in various ways, making definitions and distinctions between them almost inevitably debatable. Additionally, I’m not an expert in these topics (though I have now spent a couple weeks mostly reading about them). I’ve tried to mostly collect, summarise, and synthesise existing ideas. I’d appreciate feedback or comments in relation to any mistakes, unclear phrasings, etc. (and just in general!).

Empirical uncertainty

In the quote at the start of this post, Bykvist (the author) seemed to imply that it was easy to identify which uncertainties in that example were empirical and which were moral. However, in many cases, the lines aren’t so clear. This is perhaps most obvious with regards to, as Christian Tarsney puts it:

Certain cases of uncertainty about moral considerability (or moral status more generally) [which] turn on metaphysical uncertainties that resist easy classification as empirical or moral.

[For example,] In the abortion debate, uncertainty about when in the course of development the fetus/infant comes to count as a person is neither straightforwardly empirical nor straightforwardly moral. Likewise for uncertainty in Catholic moral theology about the time of ensoulment, the moment between conception and birth at which God endows the fetus with a human soul [...]. Nevertheless, it seems strange to regard these uncertainties as fundamentally different from more clearly empirical uncertainties about the moral status of the developing fetus (e.g., uncertainty about where in the gestation process complex mental activity, self-awareness, or the capacity to experience pain first emerge), or from more clearly moral uncertainties (e.g., uncertainty, given a certainty that the fetus is a person, whether it is permissible to cause the death of such a person when doing so will result in more total happiness and less total suffering).[4]

And there are also other types of cases in which it seems hard to find clear, non-arbitrary lines between moral and empirical uncertainties (some of which Tarsney [p. 140-146] also discusses).[5] Altogether, I expect drawing such lines will quite often be difficult.

Fortunately, we may not actually need to draw such lines anyway. In fact, as I discuss in my post on making decisions under both moral and empirical uncertainty, many approaches for handling moral uncertainty were consciously designed by analogy to approaches for handling empirical uncertainty, and it seems to me that they can easily be extended to handle both moral and empirical uncertainty, without having to distinguish between those “types” of uncertainty.[6][7]

The situation is a little less clear when it comes to resolving one’s uncertainty (rather than just making decisions given uncertainty). It seems at first glance that you might need to investigate different “types” of uncertainty in different ways. For example, if I’m uncertain whether fish react to pain in a certain way, I might need to read studies about that, whereas if I’m uncertain what “moral status” fish deserve (even assuming that I know all the relevant empirical facts), then I might need to engage in moral reflection. However, it seems to me that the key difference in such examples is what the uncertainties are actually about, rather than specifically whether a given uncertainty should be classified as “moral” or “empirical”.

(It’s also worth quickly noting that the topic of “cluelessness” is only about empirical uncertainty—specifically, uncertainty regarding the consequences that one’s actions will have. Cluelessness thus won’t be addressed in my posts on moral uncertainty, although I do plan to later write about it separately.)

Normative uncertainty

As I noted in my prior post:

A normative statement is any statement related to what one should do, what one ought to do, which of two things are better, or similar. [...] Normativity is thus the overarching category (superset) of which things like morality, prudence [essentially meaning the part of normativity that has to do with one’s own self-interest, happiness, or wellbeing], and arguably rationality are just subsets.

In the same way, normative uncertainty is a broader concept, of which moral uncertainty is just one component. Other components could include:

  • prudential uncertainty

  • decision-theoretic uncertainty (covered below)

  • metaethical uncertainty (also covered below) - although perhaps it’d make more sense to see metaethical uncertainty as instead just feeding into one’s moral uncertainty

Despite this, academic sources seem to commonly either:

  • focus only on moral uncertainty, or

  • state or imply that essentially the same approaches for decision-making will work for both moral uncertainty in particular and normative uncertainty in general (which seems to me a fairly reasonable assumption).

On this matter, Tarsney writes:

Fundamentally, the topic of the coming chapters will be the problem of normative uncertainty, which can be roughly characterized as uncertainty about one’s objective reasons that is not a result of some underlying empirical uncertainty (uncertainty about the state of concretia). However, I will confine myself almost exclusively to questions about moral uncertainty: uncertainty about one’s objective moral reasons that is not a result of etc etc. This is in part merely a matter of vocabulary: “moral uncertainty” is a bit less cumbersome than “normative uncertainty,” a consideration that bears some weight when the chosen expression must occur dozens of times per chapter. It is also in part because the vast majority of the literature on normative uncertainty deals specifically with moral uncertainty, and because moral uncertainty provides more than enough difficult problems and interesting examples, so that there is no need to venture outside the moral domain.

Additionally, however, focusing on moral uncertainty is a useful simplification that allows us to avoid difficult questions about the relationship between moral and non-moral reasons (though I am hopeful that the theoretical framework I develop can be applied straightforwardly to normative uncertainties of a non-moral kind). For myself, I have no taste for the moral/​non-moral distinction: To put it as crudely and polemically as possible, it seems to me that all objective reasons are moral reasons. But this view depends on substantive normative ethical commitments that it is well beyond the scope of this dissertation to defend. [...]

If one does think that all reasons are moral reasons, or that moral reasons always override non-moral reasons, then a complete account of how agents ought to act under moral uncertainty can be given without any discussion of non-moral reasons (Lockhart, 2000, p. 16). To the extent that one does not share either of these assumptions, theories of choice under moral uncertainty must generally be qualified with “insofar as there are no relevant non-moral considerations.”

Somewhat similarly, this sequence will nominally focus on moral uncertainty, even though:

  • some of the work I’m drawing on was nominally focused on normative uncertainty (e.g., Will MacAskill’s thesis)

  • I intend most of what I say to be fairly easily generalisable to normative uncertainty more broadly.

Metanormative uncertainty

In MacAskill’s thesis, he writes that metanormativism is “the view that there are second-order norms that govern action that are relative to a decision-maker’s uncertainty about first-order normative claims. [...] The central metanormative question is [...] about which option it’s appropriate to choose [when a decision-maker is uncertain about which first-order normative theory to believe in]”. MacAskill goes on to write:

A note on terminology: Metanormativism isn’t about normativity, in the way that meta-ethics is about ethics, or that a meta-language is about a language. Rather, ‘meta’ is used in the sense of ‘over’ or ‘beyond’

In essence, metanormativism focuses on what metanormative theories (or “approaches”) should be used for making decisions under normative uncertainty.

We can therefore imagine being metanormatively uncertain: uncertain about what metanormative theories to use for making decisions under normative uncertainty. For example:

  • You’re normatively uncertain if you see multiple (“first-order”) moral theories as possible and these give conflicting suggestions.

  • You’re _meta_normatively uncertain if you’re also unsure whether the best approach for deciding what to do given this uncertainty is the “My Favourite Theory” approach or the “Maximising Expected Choice-worthiness” approach (both of which are explained later in this sequence; the sketch just below previews how they can come apart).
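To make that contrast concrete, here is a minimal, purely illustrative sketch. The theories, options, numbers, and function names are my own toy inventions (not anything from MacAskill or from later in this sequence), and the MEC calculation assumes that choiceworthiness scores are cardinal and comparable across theories, which is itself a contested assumption:

```python
# Toy sketch: two metanormative approaches applied to one morally uncertain choice.

credences = {"utilitarianism": 0.6, "kantianism": 0.4}  # hypothetical credences

# Hypothetical choiceworthiness each theory assigns to two options.
choiceworthiness = {
    "utilitarianism": {"lie": 10, "tell_truth": 4},
    "kantianism": {"lie": -100, "tell_truth": 5},
}

def my_favourite_theory(credences, choiceworthiness):
    """Act on whichever first-order theory you find most probable, ignoring the rest."""
    favourite = max(credences, key=credences.get)
    scores = choiceworthiness[favourite]
    return max(scores, key=scores.get)

def maximise_expected_choiceworthiness(credences, choiceworthiness):
    """Weight each theory's choiceworthiness by your credence in it, then maximise."""
    options = next(iter(choiceworthiness.values()))
    expected = {
        option: sum(credences[t] * choiceworthiness[t][option] for t in credences)
        for option in options
    }
    return max(expected, key=expected.get)

print(my_favourite_theory(credences, choiceworthiness))                 # lie
print(maximise_expected_choiceworthiness(credences, choiceworthiness))  # tell_truth
```

Being metanormatively uncertain would then mean assigning some credence to each of these (and other) decision procedures, even while they recommend different options.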

This leads inevitably to the following thought:

It seems that, just as we can suffer [first-order] normative uncertainty, we can suffer [second-order] metanormative uncertainty as well: we can assign positive probability to conflicting [second-order] metanormative theories. [Third-order] Metametanormative theories, then, are collections of claims about how we ought to act in the face of [second-order] metanormative uncertainty. And so on. In the end, it seems that the very existence of normative claims—the very notion that there are, in some sense or another, ways “one ought to behave”—organically gives rise to an infinite hierarchy of metanormative uncertainty, with which an agent may have to contend in the course of making a decision. (Philip Trammell)

I refer readers interested in this possibility of infinite regress—and potential solutions or reasons not to worry—to Trammell, Tarsney, and MacAskill (p. 217-219). (I won’t discuss those matters further here, and I haven’t properly read those Trammell or Tarsney papers myself.)

Decision-theoretic uncertainty

(Readers who are unfamiliar with the topic of decision theories may wish to read up on that first, or to skip this section.)

MacAskill writes:

Given the trenchant disagreement between intelligent and well-informed philosophers, it seems highly plausible that one should not be certain in either causal or evidential decision theory. In light of this fact, Robert Nozick briefly raised an interesting idea: that perhaps one should take decision-theoretic uncertainty into account in one’s decision-making.

This is precisely analogous to taking uncertainty about first-order moral theories into account in decision-making. Thus, decision-theoretic uncertainty is just another type of normative uncertainty. Furthermore, arguably, it can be handled using the same sorts of “metanormative theories” suggested for handling moral uncertainty (which are discussed later in this sequence).

Chapter 6 of MacAskill’s thesis is dedicated to discussion of this matter, and I refer interested readers there. For example, he writes:

metanormativism about decision theory [is] the idea that there is an important sense of ‘ought’ (though certainly not the only sense of ‘ought’) according to which a decision-maker ought to take decision-theoretic uncertainty into account. I call any metanormative theory that takes decision-theoretic uncertainty into account a type of meta decision theory [- in] contrast to a metanormative view according to which there are norms that are relative to moral and prudential uncertainty, but not relative to decision-theoretic uncertainty.[8]

Metaethical uncertainty

While normative ethics addresses such questions as “What should I do?”, evaluating specific practices and principles of action, meta-ethics addresses questions such as “What is goodness?” and “How can we tell what is good from what is bad?”, seeking to understand the nature of ethical properties and evaluations. (Wikipedia)

To illustrate, normative (or “first-order”) ethics involves debates such as “Consequentialist or deontological theories?”, while _meta_ethics involves debates such as “Moral realism or moral antirealism?” Thus, in just the same way we could be uncertain about first-order ethics (morally uncertain), we could be uncertain about metaethics (metaethically uncertain).

It seems that metaethical uncertainty is rarely discussed; in particular, I’ve found no detailed treatment of how to make decisions under metaethical uncertainty. However, there is one brief comment on the matter in MacAskill’s thesis:

even if one endorsed a meta-ethical view that is inconsistent with the idea that there’s value in gaining more moral information [e.g., certain types of moral antirealism], one should not be certain in that meta-ethical view. And it’s high-stakes whether that view is true — if there are moral facts out there but one thinks there aren’t, that’s a big deal! Even for this sort of antirealist, then, there’s therefore value in moral information, because there’s value in finding out for certain whether that meta-ethical view is correct.

It seems to me that, if and when we face metaethical uncertainties that are relevant to the question of what we should actually do, we could likely use basically the same approaches that are advised for decision-making under moral uncertainty (which I discuss later in this sequence).[9]

Moral pluralism

A different matter that could appear similar to moral uncertainty is moral pluralism (aka value pluralism, aka pluralistic moral theories). According to SEP:

moral pluralism [is] the view that there are many different moral values.

Commonsensically we talk about lots of different values—happiness, liberty, friendship, and so on. The question about pluralism in moral theory is whether these apparently different values are all reducible to one supervalue, or whether we should think that there really are several distinct values.

MacAskill notes that:

Someone who [takes a particular expected-value-style approach to decision-making] under uncertainty about whether only wellbeing, or both knowledge and wellbeing, are of value looks a lot like someone who is conforming with a first-order moral theory that assigns both wellbeing and knowledge value.

In fact, one may even react to moral uncertainty by abandoning one’s degrees of belief in the first-order moral theories one was uncertain over, and instead placing complete belief in a new (and still first-order) moral theory that combines those previously-believed theories.[10] For example, after discussing two approaches for thinking about the “moral weight” of different animals’ experiences, Brian Tomasik writes:

Both of these approaches strike me as having merit, and not only am I not sure which one I would choose, but I might actually choose them both. In other words, more than merely having moral uncertainty between them, I might adopt a “value pluralism” approach and decide to care about both simultaneously, with some trade ratio between the two.[11]
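To illustrate the kind of “trade ratio” approach Tomasik describes, here is a minimal sketch. The value functions, outcomes, and numbers are made up purely for illustration; it simply contrasts folding two values into one first-order theory with keeping separate credences in two single-value theories:

```python
# Toy sketch: value pluralism (one theory, fixed trade ratio) vs moral
# uncertainty (credences over two single-value theories).

def hedonic_value(outcome):
    return outcome["pleasure"]                 # hypothetical measure

def preference_value(outcome):
    return outcome["preferences_satisfied"]    # hypothetical measure

TRADE_RATIO = 0.5  # the pluralist's fixed rate: 1 unit of preference-value
                   # counts the same as 0.5 units of hedonic value

def pluralist_choiceworthiness(outcome):
    """A single first-order theory in which both values count, at a fixed ratio."""
    return hedonic_value(outcome) + TRADE_RATIO * preference_value(outcome)

# Moral uncertainty instead keeps credences in each single-value theory,
# and needs some metanormative rule (here, expected choiceworthiness) on top.
credences = {"hedonism": 2 / 3, "preference_theory": 1 / 3}

def expected_choiceworthiness(outcome):
    return (credences["hedonism"] * hedonic_value(outcome)
            + credences["preference_theory"] * preference_value(outcome))

outcome_x = {"pleasure": 10, "preferences_satisfied": 2}
outcome_y = {"pleasure": 6, "preferences_satisfied": 12}

for rank in (pluralist_choiceworthiness, expected_choiceworthiness):
    print(rank.__name__, rank(outcome_x), rank(outcome_y))  # both prefer outcome_y
```

With these particular numbers, the pluralist’s trade ratio of 0.5 ranks options exactly as the expected-choiceworthiness calculation with credences of 2/3 and 1/3 does, which illustrates MacAskill’s point about how similar the two can look in any single decision.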

But it’s important to note that this really isn’t the same as moral uncertainty; the difference is not merely verbal or merely a matter of framing. For example, if Alan has complete belief in a pluralistic combination of utilitarianism and Kantianism, rather than uncertainty over the two theories:

  1. Alan has no need for a (second-order) metanormative theory for decision-making under moral uncertainty, because he no longer has any moral uncertainty.

    • If instead Alan has less than complete belief in the pluralistic theory, then the moral uncertainty that remains is between the pluralistic theory and whatever other theories he has some belief in (rather than between utilitarianism, Kantianism, and whatever other theories he has some belief in).

  2. We can’t represent the idea of Alan updating to believe more strongly in the Kantian theory, or to believe more strongly in the utilitarian theory.[12]

  3. Relatedly, we’re no longer able to straightforwardly apply the idea of value of information to things that may inform Alan’s degree of belief in each theory.[13]

Closing remarks

I hope this post helped clarify the distinctions and overlaps between moral uncertainty and related concepts. (And as always, I’d welcome any feedback or comments!) In my next post, I’ll continue exploring what moral uncertainty actually is, this time focusing on the questions:

  1. Is what we “ought to do” under moral uncertainty an objective or subjective matter?

  2. Is what we “ought to do” under moral uncertainty a matter of rationality or morality?


  1. ↩︎

    For another indication of why the topic of moral uncertainty as a whole matters, see this quote from Christian Tarsney’s thesis:

    The most popular method of investigation in contemporary analytic moral philosophy, the method of reflective equilibrium based on heavy appeal to intuitive judgments about cases, has come under concerted attack and is regarded by many philosophers (e.g. Singer (2005), Greene (2008)) as deeply suspect. Additionally, every major theoretical approach to moral philosophy (whether at the level of normative ethics or metaethics) is subject to important and intuitively compelling objections, and the resolution of these objections often turns on delicate and methodologically fraught questions in other areas of philosophy like the metaphysics of consciousness or personal identity (Moller, 2011, pp. 428-432). Whatever position one takes on these debates, it can hardly be denied that our understanding of morality remains on a much less sound footing than, say, our knowledge of the natural sciences. If, then, we remain deeply and justifiably uncertain about a litany of important questions in physics, astronomy, and biology, we should certainly be at least equally uncertain about moral matters, even when some particular moral judgment is widely shared and stable upon reflection.

  2. ↩︎

    In an earlier post which influenced this one, Kaj_Sotala wrote:

    I have long been slightly frustrated by the existing discussions about moral uncertainty that I’ve seen. I suspect that the reason has been that they’ve been unclear on what exactly they mean when they say that we are “uncertain about which theory is right”—what is uncertainty about moral theories? Furthermore, especially when discussing things in an FAI [Friendly AI] context, it feels like several different senses of moral uncertainty get mixed together.

  3. ↩︎

    In various places in this sequence, I’ll use language that may appear to endorse or presume moral realism (e.g., referring to “moral information” or to probability of a particular moral theory being “correct”). But this is essentially just for convenience; I intend this sequence to be as neutral as possible on the matter of moral realism vs antirealism (except when directly focusing on such matters).

    I think that the interpretation and importance of moral uncertainty is clearest for realists, but, as I discuss in this post, I also think that moral uncertainty can still be a meaningful and important topic for many types of moral antirealist.

  4. ↩︎

    As another example of this sort of case, suppose I want to know whether fish are “conscious”. This may seem on the face of it an empirical question. However, I might not yet know precisely what I mean by “conscious”, and I might in fact only really want to know whether fish are “conscious in a sense I would morally care about”. In this case, the seemingly empirical question becomes hard to disentangle from the (seemingly moral) question: “What forms of consciousness are morally important?”

    And in turn, my answers to that question may be influenced by empirical discoveries. For example, I may initially believe that avoidance of painful stimuli demonstrates consciousness in a morally relevant sense, but then revise that belief when I learn that this behaviour can be displayed in a stimulus-response way by certain extremely simple organisms.

  5. ↩︎

    The boundaries become even fuzzier, and may lose their meaning entirely, if one assumes the metaethical view of moral naturalism, which:

    refers to any version of moral realism that is consistent with [...] general philosophical naturalism. Moral realism is the view that there are objective, mind-independent moral facts. For the moral naturalist, then, there are objective moral facts, these facts are facts concerning natural things, and we know about them using empirical methods. (SEP)

    This sounds to me like it would mean that all moral uncertainties are effectively empirical uncertainties, and that there’s no difference in how moral vs empirical uncertainties should be resolved or incorporated into decision-making. But note that that’s my own claim; I haven’t seen it made explicitly by writers on these subjects.

    That said, one quote that seems to suggest something like this claim is the following, from Tarsney’s thesis:

    Most generally, naturalistic metaethical views that treat normative ethical theorizing as continuous with natural science will see first-order moral principles as at least epistemically if not metaphysically dependent on features of the empirical world. For instance, on Railton’s (1986) view, moral value attaches (roughly) to social conditions that are stable with respect to certain kinds of feedback mechanisms (like the protest of those who object to their treatment under existing social conditions). What sort(s) of social conditions exhibit this stability, given the relevant background facts about human psychology, is an empirical question. For instance, is a social arrangement in which parents can pass down large advantages to their offspring through inheritance, education, etc, more stable or less stable than one in which the state intervenes extensively to prevent such intergenerational perpetuation of advantage? Someone who accepts a Railtonian metaethic and is therefore uncertain about the first-order normative principles that govern such problems of distributive justice, though on essentially empirical grounds, seems to occupy another sort of liminal space between empirical and moral uncertainty.

    Footnote 15 of this post discusses relevant aspects of moral naturalism, though not this specific question.

  6. ↩︎

    In fact, Tarsney (p. 140-146) uses his discussion of the difficulty of disentangling moral and empirical uncertainties to argue for the merits of approaching moral uncertainty analogously to how one approaches empirical uncertainty.

  7. ↩︎

    An alternative approach that also doesn’t require determining whether a given uncertainty is moral or empirical is the “worldview diversification” approach used by the Open Philanthropy Project. In this context, a worldview is described as representing “a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty [...]).” Open Phil “[puts] significant resources behind each worldview that [they] find highly plausible.” This doesn’t require treating moral and empirical uncertainty any differently, and thus doesn’t require drawing lines between those “types” of uncertainty.

  8. ↩︎

    As with metanormative uncertainty in general, this can lead to complicated regresses. For example, there’s the possibility to construct causal meta decision theories and evidential meta decision theories, and to be uncertain over which of those meta decision theories to endorse, and so on. As above, see Trammell, Tarsney, and MacAskill (p. 217-219) for discussion of such matters.

  9. ↩︎

    In a good, short post, Ikaxas writes:

    How should we deal with metaethical uncertainty? [...] One answer is this: insofar as some metaethical issue is relevant for first-order ethical issues, deal with it as you would any other normative uncertainty. And insofar as it is not relevant for first-order ethical issues, ignore it (discounting, of course, intrinsic curiosity and any value knowledge has for its own sake).

    Some people think that normative ethical issues ought to be completely independent of metaethics: “The whole idea [of my metaethical naturalism] is to hold fixed ordinary normative ideas and try to answer some further explanatory questions” (Schroeder [...]). Others [...] believe that metaethical and normative ethical theorizing should inform each other. For the first group, my suggestion in the previous paragraph recommends that they ignore metaethics entirely (again, setting aside any intrinsic motivation to study it), while for the second my suggestion recommends pursuing exclusively those areas which are likely to influence conclusions in normative ethics.

    This seems to me like a good extension/application of general ideas from work on the value of information. (I’ll apply such ideas to moral uncertainty later in this sequence.)

    Tarsney gives an example of the sort of case in which metaethical uncertainty is relevant to decision-making (though that’s not the point he’s making with the example):

    For instance, consider an agent Alex who, like Alice, divides his moral belief between two theories, a hedonistic and a pluralistic version of consequentialism. But suppose that Alex also divides his metaethical beliefs between a robust moral realism and a fairly anemic anti-realism, and that his credence in hedonistic consequentialism is mostly or entirely conditioned on his credence in robust realism while his credence in pluralism is mostly or entirely conditioned on his credence in anti-realism. (Suppose he inclines toward a hedonistic view on which certain qualia have intrinsic value or disvalue entirely independent of our beliefs, attitudes, etc, which we are morally required to maximize. But if this view turns out to be wrong, he believes, then morality can only consist in the pursuit of whatever we contingently happen to value in some distinctively moral way, which includes pleasure but also knowledge, aesthetic goods, friendship, etc.)

  10. ↩︎

    Or, more moderately, one could withdraw one’s belief from just a subset of the moral theories one had some degree of belief in, and place that amount of belief in a new moral theory that combines just that subset of theories. E.g., one may initially think utilitarianism, Kantianism, and virtue ethics each have a 1/3 chance of being “correct”, but then switch to believing that a pluralistic combination of utilitarianism and Kantianism is 2/3 likely to be correct, while virtue ethics is still 1/3 likely to be correct.

  11. ↩︎

    Luke Muehlhauser also appears to endorse a similar approach, though not explicitly in the context of moral uncertainty. And Kaj Sotala also seems to endorse a similar approach, though without using the term “pluralism” (I’ll discuss Kaj’s approach two posts from now). Finally, MacAskill quotes Nozick appearing to endorse a similar approach with regard to decision-theoretic uncertainty:

    I [Nozick] suggest that we go further and say not merely that we are uncertain about which one of these two principles, [CDT] and [EDT], is (all by itself) correct, but that both of these principles are legitimate and each must be given its respective due. The weights, then, are not measures of uncertainty but measures of the legitimate force of each principle. We thus have a normative theory that directs a person to choose an act with maximal decision-value.
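    One natural way to write down what Nozick seems to be suggesting here (my own gloss and notation, not his) is:

    $$\mathrm{DV}(A) = w_{\mathrm{CDT}} \cdot \mathrm{EU}_{\mathrm{CDT}}(A) + w_{\mathrm{EDT}} \cdot \mathrm{EU}_{\mathrm{EDT}}(A)$$

    where the weights w reflect each principle’s “legitimate force” rather than one’s credence in it, and the recommendation is to choose the act A with maximal decision-value DV(A).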

  12. ↩︎

    The closest analog would be Alan updating his beliefs about the pluralistic theory’s contents/substance; for example, coming to believe that a more correct interpretation of the theory would lean more in a Kantian direction. (Although, if we accept that such an update is possible, it may arguably be best to represent Alan as having moral uncertainty between different versions of the pluralistic theory, rather than being certain that the pluralistic theory is “correct” but uncertain about what it says.)

  13. ↩︎

    That said, we can still apply value of information analysis to things like Alan reflecting on how best to interpret the pluralistic moral theory (assuming again that we represent Alan as uncertain about the theory’s contents). A post later in this sequence will be dedicated to how and why to estimate the “value of moral information”.