Utilitarianism

Utilitarianism is a moral philosophy holding that what matters is the sum of everyone’s welfare: the “greatest good for the greatest number”.

Not to be confused with maximization of utility or of expected utility. If you’re a utilitarian, you don’t just sum over possible worlds; you sum over people.
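As a rough sketch (the notation here is illustrative, not drawn from the posts below): an expected utility maximizer scores an action $a$ by summing a single utility function over possible worlds,

$$\mathbb{E}[U \mid a] = \sum_{w} P(w \mid a)\, U(w),$$

while a utilitarian scores an outcome by summing welfare over individuals,

$$W = \sum_{i} u_i.$$

The two are compatible: a utilitarian agent under uncertainty simply takes $U(w)$ to be the welfare sum in world $w$.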

Utilitarianism comes in several variants. Whereas standard total utilitarianism values the sum of utility across a group’s members, average utilitarianism values the average. Negative utilitarianism seeks only to minimize suffering, and is often discussed for its extreme implications.
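In the same illustrative notation, for a population of $n$ people with welfare levels $u_1, \dots, u_n$, total and average utilitarianism rank outcomes by

$$W_{\text{total}} = \sum_{i=1}^{n} u_i \qquad \text{and} \qquad W_{\text{avg}} = \frac{1}{n} \sum_{i=1}^{n} u_i,$$

which agree whenever population size is fixed but come apart when comparing populations of different sizes. Negative utilitarianism instead minimizes a sum over suffering alone.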

Related Pages: Negative Utilitarianism, Consequentialism, Ethics & Morality, Fun Theory, Complexity of Value

Comparing Utilities

abramdemski · 14 Sep 2020 20:56 UTC
58 points
31 comments · 17 min read · LW link

It’s hard to use utility maximization to justify creating new sentient beings

dynomight · 19 Oct 2020 19:45 UTC
10 points
14 comments · 4 min read · LW link
(dyno-might.github.io)

Coalition Dynamics as Morality

abramdemski · 23 Jun 2017 18:00 UTC
2 points
5 comments · 5 min read · LW link

Embracing the “sadistic” conclusion

Stuart_Armstrong · 13 Feb 2014 10:30 UTC
27 points
41 comments · 2 min read · LW link

In favour of total utilitarianism over average

casebash · 22 Dec 2015 5:07 UTC
0 points
15 comments · 4 min read · LW link

AXRP Episode 3 - Negotiable Reinforcement Learning with Andrew Critch

DanielFilan · 29 Dec 2020 20:45 UTC
26 points
0 comments · 27 min read · LW link

Sublimity vs. Youtube

Alicorn · 18 Mar 2011 5:33 UTC
31 points
61 comments · 1 min read · LW link

Forcing Freedom

vlad.proex · 6 Oct 2020 18:15 UTC
41 points
14 comments · 7 min read · LW link

Infant Mortality and the Argument from Life History

ozymandias · 4 Oct 2017 23:10 UTC
13 points
4 comments · 3 min read · LW link

Average utilitarianism must be correct?

PhilGoetz · 6 Apr 2009 17:10 UTC
6 points
169 comments · 3 min read · LW link

My main problem with utilitarianism

taw · 17 Apr 2009 20:26 UTC
−1 points
84 comments · 2 min read · LW link

Expected utility without the independence axiom

Stuart_Armstrong · 28 Oct 2009 14:40 UTC
20 points
68 comments · 4 min read · LW link

Moral differences in mediocristan

Benquo · 26 Sep 2018 20:39 UTC
22 points
0 comments · 3 min read · LW link
(benjaminrosshoffman.com)

Conventions and Confusing Continuity Conundrums

Psy-Kosh · 1 May 2009 1:41 UTC
5 points
9 comments · 1 min read · LW link

Model Uncertainty, Pascalian Reasoning and Utilitarianism

multifoliaterose · 14 Jun 2011 3:19 UTC
34 points
155 comments · 5 min read · LW link

[Question] What do we *really* expect from a well-aligned AI?

jan betley · 4 Jan 2021 20:57 UTC
8 points
10 comments · 1 min read · LW link