Consequentialism

Last edit: 1 Oct 2020 20:45 UTC by Ruby

Consequentialism is the ethical theory that people should choose their actions based on the outcomes they expect will result. How outcomes should be judged is left unspecified, but many varieties of consequentialism supply a standard. For example, utilitarianism holds that the best outcome is the one that maximizes the total welfare of all people, while ethical egoism holds that the best outcome is the one that maximizes the agent's own interests. Consequentialism is one of the three main strands of ethical thought, alongside deontology, which holds that people should choose actions that conform to a prescribed list of moral rules, and virtue ethics, which holds that people should be judged by how virtuous they are rather than by what actions they take.
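The difference between these two families can be made precise with a toy value function. The notation below is illustrative and not from the original page: w_i denotes person i's welfare in outcome o, and e is the acting agent.

```latex
% Hypothetical aggregation rules, for illustration only.
% Utilitarianism sums welfare over everyone affected;
% ethical egoism keeps only the acting agent's own term.
V_{\text{util}}(o) = \sum_{i \in \text{people}} w_i(o)
\qquad
V_{\text{ego}}(o) = w_{e}(o)
```

Both rules are consequentialist in form: they rank outcomes by a value function and recommend whichever action leads to the highest-valued outcome; they differ only in whose welfare counts.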

Related: Ethics & Morality, Deontology, Moral Uncertainty, Utilitarianism

Consequentialism is often associated with maximizing the expected value of a utility function. However, it has been argued that the two are not the same thing: it is possible to evaluate actions based on their consequences without obeying the von Neumann–Morgenstern axioms necessary for having a utility function, and utility functions can also be used to implement moral theories that resemble deontology.
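As a minimal sketch of what "maximizing the expected value of a utility function" means (the notation is illustrative, not from the original page: A is the set of available actions, O the set of outcomes, P(o | a) the agent's credence that action a produces outcome o, and U the agent's utility function):

```latex
% Expected-utility consequentialism: choose the action whose
% probability-weighted utility over outcomes is greatest.
a^{*} = \operatorname*{arg\,max}_{a \in A} \mathbb{E}\!\left[\,U \mid a\,\right]
      = \operatorname*{arg\,max}_{a \in A} \sum_{o \in O} P(o \mid a)\, U(o)
```

The von Neumann–Morgenstern theorem guarantees such a U exists only for agents whose preferences satisfy the axioms (completeness, transitivity, continuity, independence); a consequentialist who violates one of them can still rank actions by outcomes, just not via a single expected-utility calculation.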

Blog posts

Torture vs. Dust Specks

Eliezer Yudkowsky · 30 Oct 2007 2:50 UTC
66 points
616 comments · 1 min read · LW link

Deontology for Consequentialists

Alicorn · 30 Jan 2010 17:58 UTC
52 points
255 comments · 6 min read · LW link

[link] Choose your (preference) utilitarianism carefully – part 1

Kaj_Sotala · 25 Jun 2015 12:06 UTC
21 points
6 comments · 2 min read · LW link

To capture anti-death intuitions, include memory in utilitarianism

Kaj_Sotala · 15 Jan 2014 6:27 UTC
12 points
34 comments · 3 min read · LW link

Answer to Job

Scott Alexander · 15 Mar 2015 18:02 UTC
33 points
0 comments · 4 min read · LW link

Transhumanism as Simplified Humanism

Eliezer Yudkowsky · 5 Dec 2018 20:12 UTC
122 points
34 comments · 5 min read · LW link

Consequentialism Need Not Be Nearsighted

orthonormal · 2 Sep 2011 7:37 UTC
77 points
119 comments · 5 min read · LW link

Are Deontological Moral Judgments Rationalizations?

lukeprog · 16 Aug 2011 16:40 UTC
52 points
171 comments · 11 min read · LW link

Review of Doris, ‘The Moral Psychology Handbook’ (2010)

lukeprog · 26 Jun 2011 19:33 UTC
24 points
10 comments · 5 min read · LW link

[Question] Why do you reject negative utilitarianism?

Teo Ajantaival · 11 Feb 2019 15:38 UTC
19 points
26 comments · 1 min read · LW link

The Very Repugnant Conclusion

Stuart_Armstrong · 18 Jan 2019 14:26 UTC
25 points
18 comments · 1 min read · LW link

Antiantinatalism

Jacob Falkovich · 9 Feb 2018 16:49 UTC
3 points
4 comments · 5 min read · LW link

‘The Battle for Compassion’: ethics in a world of accelerating change

lukeprog · 11 Sep 2011 12:54 UTC
5 points
3 comments · 1 min read · LW link

The Moral Status of Independent Identical Copies

Wei_Dai · 30 Nov 2009 23:41 UTC
42 points
77 comments · 2 min read · LW link

Person-moment affecting views

KatjaGrace · 7 Mar 2018 2:30 UTC
16 points
8 comments · 5 min read · LW link
(meteuphoric.wordpress.com)

Totalitarian ethical systems

Benquo · 3 May 2019 19:35 UTC
33 points
12 comments · 3 min read · LW link
(benjaminrosshoffman.com)

Feeling Moral

Eliezer Yudkowsky · 11 Mar 2015 19:00 UTC
28 points
7 comments · 3 min read · LW link

It’s hard to use utility maximization to justify creating new sentient beings

dynomight · 19 Oct 2020 19:45 UTC
10 points
14 comments · 4 min read · LW link
(dyno-might.github.io)

Meta-Preference Utilitarianism

Bob Jacobs · 4 Feb 2020 20:24 UTC
10 points
30 comments · 1 min read · LW link

Money: The Unit of Caring

Eliezer Yudkowsky · 31 Mar 2009 12:35 UTC
150 points
132 comments · 4 min read · LW link

The Epsilon Fallacy

johnswentworth · 17 Mar 2018 0:08 UTC
49 points
11 comments · 7 min read · LW link
(medium.com)

Ends Don’t Justify Means (Among Humans)

Eliezer Yudkowsky · 14 Oct 2008 21:00 UTC
94 points
94 comments · 4 min read · LW link

The Mere Cable Channel Addition Paradox

Ghatanathoah · 26 Jul 2012 7:20 UTC
106 points
147 comments · 12 min read · LW link

Shut Up and Divide?

Wei_Dai · 9 Feb 2010 20:09 UTC
89 points
274 comments · 1 min read · LW link

One Life Against the World

Eliezer Yudkowsky · 18 May 2007 22:06 UTC
79 points
83 comments · 3 min read · LW link

Circular Altruism

Eliezer Yudkowsky · 22 Jan 2008 18:00 UTC
57 points
310 comments · 4 min read · LW link

The Lifespan Dilemma

Eliezer Yudkowsky · 10 Sep 2009 18:45 UTC
47 points
220 comments · 7 min read · LW link

Non-Consequentialist Cooperation?

abramdemski · 11 Jan 2019 9:15 UTC
47 points
15 comments · 7 min read · LW link

Welcome to Heaven

denisbider · 25 Jan 2010 23:22 UTC
26 points
245 comments · 2 min read · LW link

Pinpointing Utility

[deleted] · 1 Feb 2013 3:58 UTC
93 points
156 comments · 13 min read · LW link

Two-Tier Rationalism

Alicorn · 17 Apr 2009 19:44 UTC
48 points
26 comments · 4 min read · LW link

Dialogue on Appeals to Consequences

jessicata · 18 Jul 2019 2:34 UTC
33 points
82 comments · 7 min read · LW link
(unstableontology.com)

Logarithms and Total Utilitarianism

pvs · 9 Aug 2018 8:49 UTC
37 points
31 comments · 4 min read · LW link

Pain

Alicorn · 2 Aug 2009 19:12 UTC
45 points
200 comments · 2 min read · LW link

SotW: Check Consequentialism

Eliezer Yudkowsky · 29 Mar 2012 1:35 UTC
58 points
313 comments · 7 min read · LW link

A (small) critique of total utilitarianism

Stuart_Armstrong · 26 Jun 2012 12:36 UTC
47 points
237 comments · 11 min read · LW link

Hell Must Be Destroyed

algekalipso · 6 Dec 2018 4:11 UTC
30 points
1 comment · 4 min read · LW link

Underappreciated points about utility functions (of both sorts)

Sniffnoy · 4 Jan 2020 7:27 UTC
34 points
61 comments · 15 min read · LW link

The Preference Utilitarian’s Time Inconsistency Problem

Wei_Dai · 15 Jan 2010 0:26 UTC
34 points
107 comments · 1 min read · LW link

Consequentialism FAQ

Scott Alexander · 26 Apr 2011 1:45 UTC
39 points
123 comments · 1 min read · LW link

Some reservations about Singer’s child-in-the-pond argument

JonahS · 19 Jun 2013 23:54 UTC
39 points
120 comments · 6 min read · LW link

Human errors, human values

PhilGoetz · 9 Apr 2011 2:50 UTC
41 points
138 comments · 1 min read · LW link

What we talk about when we talk about maximising utility

Richard_Ngo · 24 Feb 2018 22:33 UTC
14 points
18 comments · 4 min read · LW link

Sublimity vs. Youtube

Alicorn · 18 Mar 2011 5:33 UTC
31 points
61 comments · 1 min read · LW link

Why Attitudes Matter

ozymandias · 21 Sep 2017 15:07 UTC
18 points
5 comments · 4 min read · LW link

Experientialist Theories of Well-Being

andzuck · 19 Feb 2021 22:04 UTC
16 points
1 comment · 11 min read · LW link