
Metaethics

Last edit: 16 Nov 2021 10:22 UTC by Yoav Ravid

Metaethics is one of the three branches of ethics usually recognized by philosophers, the others being normative ethics and applied ethics. It is a field of study that tries to understand the metaphysical, epistemological, and semantic characteristics of moral values, as well as their foundations and scope. It addresses questions such as “Are moral judgments objective or subjective, relative or absolute?”, “Do moral facts exist?”, and “How do we learn moral values?” (as distinct from object-level moral questions like “Ought I to steal from banks in order to give the money to the deserving poor?”).

Metaethics on LessWrong

Eliezer Yudkowsky wrote a sequence about metaethics, the Metaethics sequence, which he worried had failed to convey his central point (this post by Luke tried to clarify it); he approached the same problem again from a different angle in Highly Advanced Epistemology 101 for Beginners. From a standard philosophical standpoint, Yudkowsky’s position is closest to Frank Jackson’s moral functionalism / analytic descriptivism. He could loosely be characterized as a moral cognitivist (someone who holds that moral sentences are either true or false) but not a moral realist, since he denies that moral sentences refer to facts about the world. Yudkowsky believes that moral cognition in any single human is at least potentially about a subject matter that is ‘logical’ in the sense that its semantics can be pinned down by axioms, and hence that moral cognition can bear truth-values; he also believes that human beings using similar words like “morality” can be talking about highly overlapping subject matter; but he does not believe that all possible minds would find the truths about this subject matter psychologically compelling.

Luke Muehlhauser has written a sequence, No-Nonsense Metaethics, in which he claims that many of the questions of metaethics can be answered today using modern neuroscience and rationality. He explains how conventional metaethics, or “Austere Metaethics”, can, after assuming a definition of ‘right’, choose the right action in a given situation, but is useless without some assumed criterion for ‘right’. He proposes instead “Empathic Metaethics”, which examines your underlying cognitive algorithms to understand what you think ‘right’ means, helps clarify any emotional and cognitive contradictions in it, and then tells you what the right thing to do is according to your definition of right. This approach is highly relevant to the Friendly AI problem as a way of specifying human-like goals and motivations when designing AIs.


See Also

Moral uncertainty vs related concepts
MichaelA, 11 Jan 2020 10:03 UTC
26 points, 13 comments, 16 min read, LW link

What is Eliezer Yudkowsky’s meta-ethical theory?
lukeprog, 29 Jan 2011 19:58 UTC
49 points, 375 comments, 1 min read, LW link

By Which It May Be Judged
Eliezer Yudkowsky, 10 Dec 2012 4:26 UTC
89 points, 941 comments, 11 min read, LW link

The Value Definition Problem
Sammy Martin, 18 Nov 2019 19:56 UTC
15 points, 6 comments, 11 min read, LW link

Deontology for Consequentialists
Alicorn, 30 Jan 2010 17:58 UTC
61 points, 255 comments, 6 min read, LW link

Realism and Rationality
bmgarfinkel, 16 Sep 2019 3:09 UTC
45 points, 49 comments, 23 min read, LW link

Why didn’t people (apparently?) understand the metaethics sequence?
ChrisHallquist, 29 Oct 2013 23:04 UTC
23 points, 231 comments, 1 min read, LW link

Moral uncertainty: What kind of ‘should’ is involved?
MichaelA, 13 Jan 2020 12:13 UTC
14 points, 11 comments, 13 min read, LW link

Conceptual Analysis and Moral Theory
lukeprog, 16 May 2011 6:28 UTC
92 points, 481 comments, 8 min read, LW link

Existential Angst Factory
Eliezer Yudkowsky, 19 Jul 2008 6:55 UTC
76 points, 100 comments, 4 min read, LW link

The AGI Optimist’s Dilemma
kaputmi, 23 Feb 2023 20:20 UTC
−6 points, 1 comment, 1 min read, LW link

Is Morality Given?
Eliezer Yudkowsky, 6 Jul 2008 8:12 UTC
34 points, 100 comments, 8 min read, LW link

Fundamental Uncertainty: Chapter 3 - Why don’t we agree on what’s right?
Gordon Seidoh Worley, 25 Jun 2022 17:50 UTC
27 points, 21 comments, 14 min read, LW link

Logical Foundations of Government Policy
FCCC, 10 Oct 2020 17:05 UTC
2 points, 0 comments, 17 min read, LW link

Can we make peace with moral indeterminacy?
Charlie Steiner, 3 Oct 2019 12:56 UTC
16 points, 8 comments, 3 min read, LW link

Meta-preferences two ways: generator vs. patch
Charlie Steiner, 1 Apr 2020 0:51 UTC
18 points, 0 comments, 2 min read, LW link

25 Min Talk on MetaEthical.AI with Questions from Stuart Armstrong
June Ku, 29 Apr 2021 15:38 UTC
21 points, 7 comments, 1 min read, LW link

Changing Your Metaethics
Eliezer Yudkowsky, 27 Jul 2008 12:36 UTC
50 points, 20 comments, 5 min read, LW link

The Meaning of Right
Eliezer Yudkowsky, 29 Jul 2008 1:28 UTC
58 points, 156 comments, 23 min read, LW link

[Valence series] 2. Valence & Normativity
Steven Byrnes, 7 Dec 2023 16:43 UTC
70 points, 4 comments, 28 min read, LW link

Three Worlds Collide (0/8)
Eliezer Yudkowsky, 30 Jan 2009 12:07 UTC
96 points, 97 comments, 1 min read, LW link

Quick thoughts on empathic metaethics
lukeprog, 12 Dec 2017 21:46 UTC
27 points, 0 comments, 9 min read, LW link

Arguments for moral indefinability
Richard_Ngo, 30 Sep 2023 22:40 UTC
47 points, 16 comments, 7 min read, LW link
(www.thinkingcomplete.com)

A theory of human values
Stuart_Armstrong, 13 Mar 2019 15:22 UTC
28 points, 13 comments, 7 min read, LW link

Against meta-ethical hedonism
Joe Carlsmith, 2 Dec 2022 0:23 UTC
23 points, 4 comments, 35 min read, LW link

The Categorical Imperative Obscures
Gordon Seidoh Worley, 6 Dec 2022 17:48 UTC
17 points, 17 comments, 2 min read, LW link

Normativity and Meta-Philosophy
Wei Dai, 23 Apr 2013 20:35 UTC
29 points, 56 comments, 1 min read, LW link

Deconfusing Human Values Research Agenda v1
Gordon Seidoh Worley, 23 Mar 2020 16:25 UTC
28 points, 12 comments, 4 min read, LW link

Criticism of Eliezer’s irrational moral beliefs
Jorterder, 16 Jun 2023 20:47 UTC
−17 points, 21 comments, 1 min read, LW link

Elements of Computational Philosophy, Vol. I: Truth
1 Jul 2023 11:44 UTC
11 points, 6 comments, 1 min read, LW link
(compphil.github.io)

Philosophical self-ratification
jessicata, 3 Feb 2020 22:48 UTC
23 points, 13 comments, 5 min read, LW link
(unstableontology.com)

Whither Moral Progress?
Eliezer Yudkowsky, 16 Jul 2008 5:04 UTC
22 points, 101 comments, 2 min read, LW link

Mirrors and Paintings
Eliezer Yudkowsky, 23 Aug 2008 0:29 UTC
29 points, 42 comments, 8 min read, LW link

Moral Error and Moral Disagreement
Eliezer Yudkowsky, 10 Aug 2008 23:32 UTC
26 points, 133 comments, 6 min read, LW link

Inner Goodness
Eliezer Yudkowsky, 23 Oct 2008 22:19 UTC
27 points, 31 comments, 7 min read, LW link

The Bedrock of Fairness
Eliezer Yudkowsky, 3 Jul 2008 6:00 UTC
53 points, 103 comments, 5 min read, LW link

Is Fairness Arbitrary?
Eliezer Yudkowsky, 14 Aug 2008 1:54 UTC
9 points, 37 comments, 6 min read, LW link

Invisible Frameworks
Eliezer Yudkowsky, 22 Aug 2008 3:36 UTC
27 points, 47 comments, 6 min read, LW link

Ethics Notes
Eliezer Yudkowsky, 21 Oct 2008 21:57 UTC
21 points, 46 comments, 11 min read, LW link

While we’re on the subject of meta-ethics...
CronoDAS, 17 Apr 2009 8:01 UTC
7 points, 4 comments, 1 min read, LW link

Resolving moral uncertainty with randomization
29 Sep 2023 11:23 UTC
7 points, 1 comment, 11 min read, LW link

[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality
shminux, 19 Jun 2014 20:17 UTC
31 points, 46 comments, 1 min read, LW link

What are the leftover questions of metaethics?
cousin_it, 28 Apr 2011 8:46 UTC
30 points, 55 comments, 1 min read, LW link

Morality is Awesome
[deleted], 6 Jan 2013 15:21 UTC
145 points, 437 comments, 3 min read, LW link

Moral Golems
Erich_Grunewald, 3 Apr 2021 10:12 UTC
8 points, 2 comments, 6 min read, LW link
(www.erichgrunewald.com)

RFC: Meta-ethical uncertainty in AGI alignment
Gordon Seidoh Worley, 8 Jun 2018 20:56 UTC
16 points, 6 comments, 3 min read, LW link

RFC: Philosophical Conservatism in AI Alignment Research
Gordon Seidoh Worley, 15 May 2018 3:29 UTC
17 points, 13 comments, 1 min read, LW link

Neo-Mohism
Bae’s Theorem, 16 Jun 2021 21:57 UTC
5 points, 11 comments, 7 min read, LW link

[Question] How can there be a godless moral world?
momom2, 21 Jun 2021 12:34 UTC
7 points, 79 comments, 1 min read, LW link

The Metaethics and Normative Ethics of AGI Value Alignment: Many Questions, Some Implications
Eleos Arete Citrini, 16 Sep 2021 16:13 UTC
6 points, 0 comments, 8 min read, LW link

[Book Review] “Suffering-focused Ethics” by Magnus Vinding
KStub, 28 Dec 2021 5:58 UTC
14 points, 3 comments, 24 min read, LW link

Review: G.E.M. Anscombe’s “Modern Moral Philosophy”
David Gross, 20 Feb 2022 18:58 UTC
24 points, 3 comments, 5 min read, LW link

Reflection Mechanisms as an Alignment target: A survey
22 Jun 2022 15:05 UTC
32 points, 1 comment, 14 min read, LW link

What Should AI Owe To Us? Accountable and Aligned AI Systems via Contractualist AI Alignment
xuan, 8 Sep 2022 15:04 UTC
32 points, 15 comments, 25 min read, LW link

Against the normative realist’s wager
Joe Carlsmith, 13 Oct 2022 16:35 UTC
16 points, 9 comments, 23 min read, LW link

Questions about Value Lock-in, Paternalism, and Empowerment
Sam F. Brown, 16 Nov 2022 15:33 UTC
13 points, 2 comments, 12 min read, LW link
(sambrown.eu)

Moral Reality Check (a short story)
jessicata, 26 Nov 2023 5:03 UTC
137 points, 44 comments, 21 min read, LW link
(unstableontology.com)

What can thought-experiments do?
Cleo Nardo, 17 Jan 2023 0:35 UTC
16 points, 3 comments, 5 min read, LW link

Solving For Meta-Ethics By Inducing From The Self
TheAspiringHumanist, 20 Jan 2023 7:21 UTC
4 points, 1 comment, 9 min read, LW link

Book Review: Spiritual Enlightenment: The Damnedest Thing
Cole Killian, 21 Jan 2023 2:38 UTC
7 points, 26 comments, 10 min read, LW link
(colekillian.com)

What kind of place is this?
Jim Pivarski, 25 Feb 2023 2:14 UTC
24 points, 24 comments, 8 min read, LW link

Reflection Mechanisms as an Alignment Target—Attitudes on “near-term” AI
2 Mar 2023 4:29 UTC
20 points, 0 comments, 8 min read, LW link

[Question] Mathematical models of Ethics
Victors, 8 Mar 2023 17:40 UTC
4 points, 2 comments, 1 min read, LW link

your terminal values are complex and not objective
Tamsin Leake, 13 Mar 2023 13:34 UTC
60 points, 6 comments, 2 min read, LW link
(carado.moe)

Value Pluralism and AI
Göran Crafte, 19 Mar 2023 23:38 UTC
9 points, 4 comments, 2 min read, LW link

Two Dogmas of LessWrong
omnizoid, 15 Dec 2022 17:56 UTC
−6 points, 155 comments, 69 min read, LW link

A Benchmark for Decision Theories
StrivingForLegibility, 11 Jan 2024 18:54 UTC
10 points, 0 comments, 2 min read, LW link

Open-ended ethics of phenomena (a desiderata with universal morality)
Ryo, 8 Nov 2023 20:10 UTC
1 point, 0 comments, 8 min read, LW link

My unbundling of morality
Rudi C, 30 Dec 2020 15:19 UTC
7 points, 2 comments, 1 min read, LW link

AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy
xuan, 1 Jan 2021 0:08 UTC
30 points, 21 comments, 20 min read, LW link

Value uncertainty
MichaelA, 29 Jan 2020 20:16 UTC
20 points, 3 comments, 14 min read, LW link

Morality vs related concepts
MichaelA, 7 Jan 2020 10:47 UTC
26 points, 17 comments, 8 min read, LW link

Open-ended/Phenomenal Ethics (TLDR)
Ryo, 9 Nov 2023 16:58 UTC
3 points, 0 comments, 1 min read, LW link

Optionality approach to ethics
Ryo, 13 Nov 2023 15:23 UTC
7 points, 2 comments, 3 min read, LW link

Why small phenomenons are relevant to morality
Ryo, 13 Nov 2023 15:25 UTC
1 point, 0 comments, 3 min read, LW link

Reaction to “Empowerment is (almost) All We Need”: an open-ended alternative
Ryo, 25 Nov 2023 15:35 UTC
9 points, 3 comments, 5 min read, LW link

Six Plausible Meta-Ethical Alternatives
Wei Dai, 6 Aug 2014 0:04 UTC
84 points, 36 comments, 3 min read, LW link

The Urgent Meta-Ethics of Friendly Artificial Intelligence
lukeprog, 1 Feb 2011 14:15 UTC
76 points, 252 comments, 1 min read, LW link

Heading Toward: No-Nonsense Metaethics
lukeprog, 24 Apr 2011 0:42 UTC
55 points, 60 comments, 2 min read, LW link

The Moral Void
Eliezer Yudkowsky, 30 Jun 2008 8:52 UTC
67 points, 111 comments, 4 min read, LW link

Pluralistic Moral Reductionism
lukeprog, 1 Jun 2011 0:59 UTC
64 points, 327 comments, 15 min read, LW link

No Universally Compelling Arguments
Eliezer Yudkowsky, 26 Jun 2008 8:29 UTC
85 points, 58 comments, 5 min read, LW link

Created Already In Motion
Eliezer Yudkowsky, 1 Jul 2008 6:03 UTC
75 points, 23 comments, 3 min read, LW link

The Sheer Folly of Callow Youth
Eliezer Yudkowsky, 19 Sep 2008 1:30 UTC
85 points, 18 comments, 7 min read, LW link

Setting Up Metaethics
Eliezer Yudkowsky, 28 Jul 2008 2:25 UTC
25 points, 34 comments, 4 min read, LW link

Moral Complexities
Eliezer Yudkowsky, 4 Jul 2008 6:43 UTC
31 points, 40 comments, 1 min read, LW link

Could Anything Be Right?
Eliezer Yudkowsky, 18 Jul 2008 7:19 UTC
60 points, 39 comments, 6 min read, LW link

Causality and Moral Responsibility
Eliezer Yudkowsky, 13 Jun 2008 8:34 UTC
52 points, 55 comments, 5 min read, LW link

On Objective Ethics, and a bit about boats
EndlessBlue, 31 May 2023 11:40 UTC
−7 points, 3 comments, 2 min read, LW link

You Provably Can’t Trust Yourself
Eliezer Yudkowsky, 19 Aug 2008 20:35 UTC
48 points, 18 comments, 6 min read, LW link

Inseparably Right; or, Joy in the Merely Good
Eliezer Yudkowsky, 9 Aug 2008 1:00 UTC
56 points, 33 comments, 4 min read, LW link