Moral Uncertainty

Last edit: 12 Feb 2021 10:11 UTC by Yoav Ravid

Moral uncertainty (or normative uncertainty) is uncertainty about how to act given the diversity of moral doctrines. For example, suppose that we knew for certain that a new technology would enable more humans to live on another planet, but with slightly less well-being than on Earth.1 An average utilitarian would consider these consequences bad, while a total utilitarian would endorse such a technology. If we are uncertain about which of these two theories is right, what should we do?
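To make the disagreement concrete, here is a toy sketch in Python. All population sizes and well-being numbers are hypothetical:

```python
# Hypothetical numbers: an Earth population, plus a colony whose
# inhabitants are slightly worse off than people on Earth.
earth = [10.0] * 1000   # 1,000 people at well-being 10
colony = [9.0] * 500    # 500 colonists at well-being 9

def total_utility(population):
    return sum(population)

def average_utility(population):
    return sum(population) / len(population)

status_quo = earth
with_colony = earth + colony

# Total view: adding lives worth living increases value.
assert total_utility(with_colony) > total_utility(status_quo)

# Average view: the colonists pull the average down.
assert average_utility(with_colony) < average_utility(status_quo)
```

The same technology comes out good on one theory and bad on the other, which is exactly the kind of conflict moral uncertainty is about.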

Moral uncertainty adds a level of uncertainty above the more familiar uncertainty about what to do given incomplete information, since it also concerns which moral theory is right. Even with complete information about the world, this kind of uncertainty would still remain.1 At the first level, one can be unsure how to act because the relevant empirical information isn't available: for example, choosing whether or not to implement a new technology (e.g., AGI, biological cognitive enhancement, mind uploading) without fully knowing its consequences and nature. But even if we came to know every consequence of a new technology, we would still need to know which ethical perspective is the right one for evaluating those consequences.

One approach is to follow only the most probable theory. This has its own problems: for example, what if the most probable theory points only weakly in one direction, while other theories point strongly the other way? A better approach is to “perform the action with the highest expected moral value. We get the expected moral value of an action by multiplying the subjective probability that some theory is true by the value of that action if it is true, doing the same for all of the other theories, and adding up the results.”2 However, we would still need a method of comparing value across theories: a utilon in one theory may not be the same as a utilon in another. And outside consequentialism, many ethical theories don't use utilons, or even any quantifiable values at all. This remains an open problem.
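The quoted procedure can be sketched directly, assuming (and this assumption is exactly the open problem just described) that the theories' values already sit on a common scale. The theories, probabilities, and action values below are hypothetical:

```python
def expected_moral_value(action, theories):
    """Sum over theories of P(theory is true) * value of the action under it."""
    return sum(p * value(action) for p, value in theories)

# Theory A (p = 0.6) weakly favors deploying a new technology;
# theory B (p = 0.4) strongly opposes it.
theories = [
    (0.6, lambda a: {"deploy": 2.0, "wait": 1.0}[a]),
    (0.4, lambda a: {"deploy": -10.0, "wait": 0.0}[a]),
]

best = max(["deploy", "wait"], key=lambda a: expected_moral_value(a, theories))
# deploy: 0.6*2 + 0.4*(-10) = -2.8;  wait: 0.6*1 + 0.4*0 = 0.6
assert best == "wait"
```

Note that "follow only the most probable theory" would pick "deploy" here, while maximizing expected moral value picks "wait": the weak preference of the probable theory is outweighed by the strong preference of the less probable one.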

Nick Bostrom and Toby Ord have proposed a parliamentary model. In this model, each theory sends a number of delegates to a parliament in proportion to its probability. The theories then bargain for support as if the probability of each action were proportional to its votes. However, the actual output is always the action with the most votes. Bostrom and Ord’s proposal lets probable theories determine most actions, but still gives less probable theories influence on issues they consider unusually important.
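The first step of the model, allocating delegates in proportion to probability, can be sketched as follows. The bargaining stage, which does most of the real work, is not modeled here, and the theory names and seat count are hypothetical:

```python
def allocate_delegates(theory_probs, seats=100):
    """Assign parliament seats in proportion to each theory's probability,
    using largest-remainder rounding so that every seat is filled."""
    quotas = {t: p * seats for t, p in theory_probs.items()}
    # floor each quota (with a small guard against float error)
    allocation = {t: int(q + 1e-9) for t, q in quotas.items()}
    leftover = seats - sum(allocation.values())
    # hand any leftover seats to the largest fractional remainders
    by_remainder = sorted(quotas, key=lambda t: quotas[t] - allocation[t], reverse=True)
    for t in by_remainder[:leftover]:
        allocation[t] += 1
    return allocation

parliament = allocate_delegates(
    {"total_util": 0.5, "average_util": 0.3, "deontology": 0.2}
)
assert sum(parliament.values()) == 100
assert parliament["total_util"] == 50
```

In the full model, a theory holding only a few seats can still trade away its votes on issues it cares little about in exchange for support on the issues it considers unusually important.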

Even under a high degree of moral uncertainty, with a wide range of possible moral theories, certain actions still seem highly valuable on almost any theory. Bostrom argues that existential risk reduction is among them, showing not only that it is the most important task on most versions of consequentialism, but also that it is strongly recommended by many other widely accepted moral theories.3

  1. Crouch, William (2010). “Moral Uncertainty and Intertheoretic Comparisons of Value”. BPhil Thesis, p. 6. Available at: http://WilliamCrouch/Papers/873903/Moral_Uncertainty_and_Intertheoretic_Comparisons_of_Value

  2. Sepielli, Andrew (2008). “Moral Uncertainty and the Principle of Equity among Moral Theories”. ISUS-X, Tenth Conference of the International Society for Utilitarian Studies, Kadish Center for Morality, Law and Public Affairs, UC Berkeley. Available at: http://uc/item/7h5852rr.pdf

  3. Bostrom, Nick (2012). “Existential Risk Reduction as the Most Important Task for Humanity”. Global Policy, forthcoming, p. 22. Available at: http://concept.pdf


abramdemski, 18 Nov 2020 16:52 UTC
46 points, 11 comments, 9 min read, LW link

Morality vs related concepts
MichaelA, 7 Jan 2020 10:47 UTC
26 points, 17 comments, 8 min read, LW link

Three kinds of moral uncertainty
Kaj_Sotala, 30 Dec 2012 10:43 UTC
55 points, 15 comments, 2 min read, LW link

Moral uncertainty vs related concepts
MichaelA, 11 Jan 2020 10:03 UTC
26 points, 13 comments, 16 min read, LW link

Moral uncertainty: What kind of ‘should’ is involved?
MichaelA, 13 Jan 2020 12:13 UTC
14 points, 11 comments, 13 min read, LW link

Value uncertainty
MichaelA, 29 Jan 2020 20:16 UTC
16 points, 3 comments, 14 min read, LW link

Making decisions under moral uncertainty
MichaelA, 30 Dec 2019 1:49 UTC
15 points, 26 comments, 17 min read, LW link

Making decisions when both morally and empirically uncertain
MichaelA, 2 Jan 2020 7:20 UTC
13 points, 14 comments, 20 min read, LW link

Value Uncertainty and the Singleton Scenario
Wei_Dai, 24 Jan 2010 5:03 UTC
10 points, 31 comments, 3 min read, LW link

Nick Bostrom: Moral uncertainty – towards a solution? [link, 2009]
Kevin, 8 Mar 2012 11:07 UTC
−10 points, 8 comments, 1 min read, LW link

Polymath-style attack on the Parliamentary Model for moral uncertainty
danieldewey, 26 Sep 2014 13:51 UTC
35 points, 74 comments, 4 min read, LW link

2018 AI Alignment Literature Review and Charity Comparison
Larks, 18 Dec 2018 4:46 UTC
190 points, 26 comments, 62 min read, LW link

2019 AI Alignment Literature Review and Charity Comparison
Larks, 19 Dec 2019 3:00 UTC
130 points, 18 comments, 62 min read, LW link

Preliminary thoughts on moral weight
lukeprog, 13 Aug 2018 23:45 UTC
84 points, 47 comments, 8 min read, LW link

Arguments for moral indefinability
Richard_Ngo, 12 Feb 2019 10:40 UTC
51 points, 10 comments, 7 min read, LW link

AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah
Palus Astra, 16 Apr 2020 0:50 UTC
46 points, 27 comments, 89 min read, LW link

Ontological Crisis in Humans
Wei_Dai, 18 Dec 2012 17:32 UTC
67 points, 69 comments, 4 min read, LW link

Review and Summary of ‘Moral Uncertainty’
fin, 7 Oct 2020 17:52 UTC
11 points, 8 comments, 1 min read, LW link

AXRP Episode 3 - Negotiable Reinforcement Learning with Andrew Critch
DanielFilan, 29 Dec 2020 20:45 UTC
26 points, 0 comments, 27 min read, LW link

When is unaligned AI morally valuable?
paulfchristiano, 25 May 2018 1:57 UTC
58 points, 52 comments, 10 min read, LW link

Two-Tier Rationalism
Alicorn, 17 Apr 2009 19:44 UTC
48 points, 26 comments, 4 min read, LW link

Realism and Rationality
bmgarfinkel, 16 Sep 2019 3:09 UTC
43 points, 49 comments, 23 min read, LW link

Protected From Myself
Eliezer Yudkowsky, 19 Oct 2008 0:09 UTC
37 points, 30 comments, 6 min read, LW link

For the past, in some ways only, we are moral degenerates
Stuart_Armstrong, 7 Jun 2019 15:57 UTC
32 points, 17 comments, 2 min read, LW link

Nonparametric Ethics
Eliezer Yudkowsky, 20 Jun 2009 11:31 UTC
30 points, 60 comments, 5 min read, LW link

Is the potential astronomical waste in our universe too small to care about?
Wei_Dai, 21 Oct 2014 8:44 UTC
48 points, 14 comments, 2 min read, LW link

Human errors, human values
PhilGoetz, 9 Apr 2011 2:50 UTC
41 points, 138 comments, 1 min read, LW link

Universal Eudaimonia
hg00, 5 Oct 2020 13:45 UTC
17 points, 6 comments, 2 min read, LW link

Multiple Moralities
Liam Goddard, 3 Nov 2019 17:06 UTC
9 points, 4 comments, 4 min read, LW link

Sortition Model of Moral Uncertainty
Bob Jacobs, 8 Oct 2020 17:44 UTC
9 points, 2 comments, 2 min read, LW link

Is requires ought
jessicata, 28 Oct 2019 2:36 UTC
20 points, 58 comments, 7 min read, LW link

Whither Moral Progress?
Eliezer Yudkowsky, 16 Jul 2008 5:04 UTC
18 points, 101 comments, 2 min read, LW link

Invisible Frameworks
Eliezer Yudkowsky, 22 Aug 2008 3:36 UTC
18 points, 47 comments, 6 min read, LW link

(Moral) Truth in Fiction?
Eliezer Yudkowsky, 9 Feb 2009 17:26 UTC
21 points, 82 comments, 7 min read, LW link

Just a reminder: Scientists are, technically, people.
PhilGoetz, 20 Mar 2009 20:33 UTC
8 points, 35 comments, 1 min read, LW link

Average utilitarianism must be correct?
PhilGoetz, 6 Apr 2009 17:10 UTC
6 points, 169 comments, 3 min read, LW link

My main problem with utilitarianism
taw, 17 Apr 2009 20:26 UTC
−1 points, 84 comments, 2 min read, LW link

While we’re on the subject of meta-ethics...
CronoDAS, 17 Apr 2009 8:01 UTC
7 points, 4 comments, 1 min read, LW link

Wednesday depends on us.
byrnema, 29 Apr 2009 3:47 UTC
2 points, 43 comments, 3 min read, LW link

Fiction of interest
dclayh, 29 Apr 2009 18:47 UTC
14 points, 16 comments, 1 min read, LW link

Conventions and Confusing Continuity Conundrums
Psy-Kosh, 1 May 2009 1:41 UTC
5 points, 9 comments, 1 min read, LW link

The Sword of Good
Eliezer Yudkowsky, 3 Sep 2009 0:53 UTC
105 points, 300 comments, 2 min read, LW link

Essay-Question Poll: Dietary Choices
Alicorn, 3 May 2009 15:27 UTC
17 points, 244 comments, 1 min read, LW link

Revisiting torture vs. dust specks
cousin_it, 8 Jul 2009 11:04 UTC
8 points, 66 comments, 2 min read, LW link

The sailor’s wife
krbouchard, 27 Feb 2021 0:23 UTC
0 points, 2 comments, 2 min read, LW link

Moral Golems
Erich_Grunewald, 3 Apr 2021 10:12 UTC
7 points, 2 comments, 6 min read, LW link