Metaethics

Last edit: 16 Nov 2021 10:22 UTC by Yoav Ravid

Metaethics is one of the three branches of ethics usually recognized by philosophers, alongside normative ethics and applied ethics. It studies the metaphysical, epistemological, and semantic character of moral values, as well as their foundations and scope. It asks questions such as "Are moral judgments objective or subjective, relative or absolute?", "Do moral facts exist?", and "How do we learn moral values?" (as distinct from object-level moral questions like "Ought I to steal from banks in order to give the money to the deserving poor?").

Metaethics on LessWrong

Eliezer Yudkowsky wrote a Sequence about metaethics, the Metaethics sequence, which he worried had failed to convey his central point (a post by lukeprog attempted to clarify it); he approached the same problem again from a different angle in Highly Advanced Epistemology 101 for Beginners. From a standard philosophical standpoint, Yudkowsky's position is closest to Frank Jackson's moral functionalism / analytic descriptivism. He could be loosely characterized as a moral cognitivist (someone who believes moral sentences are either true or false) but not a moral realist (he denies that moral sentences refer to facts about the external world). Yudkowsky believes that moral cognition in any single human is at least potentially about a subject matter that is 'logical', in the sense that its semantics can be pinned down by axioms, and hence that moral cognition can bear truth-values. He also believes that different human beings using similar words like "morality" can be talking about highly overlapping subject matter, but not that all possible minds would find the truths about this subject matter psychologically compelling.

Luke Muehlhauser has written a sequence, No-Nonsense Metaethics, in which he claims that many of the questions of metaethics can be answered today using modern neuroscience and rationality. He explains that conventional metaethics, or "Austere Metaethics", can pick out the right action in a given situation once a definition of 'right' is assumed, but is useless without some such criterion. He proposes instead "Empathic Metaethics", which examines your underlying cognitive algorithms to understand what you think 'right' means, helps clarify any emotional and cognitive contradictions in it, and then tells you what the right thing to do is according to your own definition of right. This approach is highly relevant to the Friendly AI problem as a way of defining human-like goals and motivations when designing AIs.

Posts tagged Metaethics

- Moral uncertainty vs related concepts (MichaelA, 11 Jan 2020)
- What is Eliezer Yudkowsky's meta-ethical theory? (lukeprog, 29 Jan 2011)
- By Which It May Be Judged (Eliezer Yudkowsky, 10 Dec 2012)
- The Value Definition Problem (Sammy Martin, 18 Nov 2019)
- Deontology for Consequentialists (Alicorn, 30 Jan 2010)
- Moral uncertainty: What kind of 'should' is involved? (MichaelA, 13 Jan 2020)
- Existential Angst Factory (Eliezer Yudkowsky, 19 Jul 2008)
- Conceptual Analysis and Moral Theory (lukeprog, 16 May 2011)
- Realism and Rationality (bmgarfinkel, 16 Sep 2019)
- Why didn't people (apparently?) understand the metaethics sequence? (ChrisHallquist, 29 Oct 2013)
- A theory of human values (Stuart_Armstrong, 13 Mar 2019)
- Deconfusing Human Values Research Agenda v1 (G Gordon Worley III, 23 Mar 2020)
- Is Morality Given? (Eliezer Yudkowsky, 6 Jul 2008)
- Three Worlds Collide (0/8) (Eliezer Yudkowsky, 30 Jan 2009)
- Quick thoughts on empathic metaethics (lukeprog, 12 Dec 2017)
- Logical Foundations of Government Policy (FCCC, 10 Oct 2020)
- Can we make peace with moral indeterminacy? (Charlie Steiner, 3 Oct 2019)
- Meta-preferences two ways: generator vs. patch (Charlie Steiner, 1 Apr 2020)
- Normativity and Meta-Philosophy (Wei_Dai, 23 Apr 2013)
- 25 Min Talk on MetaEthical.AI with Questions from Stuart Armstrong (June Ku, 29 Apr 2021)
- Value uncertainty (MichaelA, 29 Jan 2020)
- Morality vs related concepts (MichaelA, 7 Jan 2020)
- Six Plausible Meta-Ethical Alternatives (Wei_Dai, 6 Aug 2014)
- The Urgent Meta-Ethics of Friendly Artificial Intelligence (lukeprog, 1 Feb 2011)
- Heading Toward: No-Nonsense Metaethics (lukeprog, 24 Apr 2011)
- The Moral Void (Eliezer Yudkowsky, 30 Jun 2008)
- Pluralistic Moral Reductionism (lukeprog, 1 Jun 2011)
- No Universally Compelling Arguments (Eliezer Yudkowsky, 26 Jun 2008)
- Created Already In Motion (Eliezer Yudkowsky, 1 Jul 2008)
- The Sheer Folly of Callow Youth (Eliezer Yudkowsky, 19 Sep 2008)
- The Meaning of Right (Eliezer Yudkowsky, 29 Jul 2008)
- Changing Your Metaethics (Eliezer Yudkowsky, 27 Jul 2008)
- Setting Up Metaethics (Eliezer Yudkowsky, 28 Jul 2008)
- Moral Complexities (Eliezer Yudkowsky, 4 Jul 2008)
- Could Anything Be Right? (Eliezer Yudkowsky, 18 Jul 2008)
- Causality and Moral Responsibility (Eliezer Yudkowsky, 13 Jun 2008)
- You Provably Can't Trust Yourself (Eliezer Yudkowsky, 19 Aug 2008)
- Inseparably Right; or, Joy in the Merely Good (Eliezer Yudkowsky, 9 Aug 2008)
- Philosophical self-ratification (jessicata, 3 Feb 2020, unstableontology.com)
- Whither Moral Progress? (Eliezer Yudkowsky, 16 Jul 2008)
- Mirrors and Paintings (Eliezer Yudkowsky, 23 Aug 2008)
- Moral Error and Moral Disagreement (Eliezer Yudkowsky, 10 Aug 2008)
- Inner Goodness (Eliezer Yudkowsky, 23 Oct 2008)
- The Bedrock of Fairness (Eliezer Yudkowsky, 3 Jul 2008)
- Is Fairness Arbitrary? (Eliezer Yudkowsky, 14 Aug 2008)
- Invisible Frameworks (Eliezer Yudkowsky, 22 Aug 2008)
- Ethics Notes (Eliezer Yudkowsky, 21 Oct 2008)
- While we're on the subject of meta-ethics... (CronoDAS, 17 Apr 2009)
- [LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality (shminux, 19 Jun 2014)
- What are the leftover questions of metaethics? (cousin_it, 28 Apr 2011)
- My unbundling of morality (Rudi C, 30 Dec 2020)
- AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy (xuan, 1 Jan 2021)
- Morality is Awesome ([deleted], 6 Jan 2013)
- Moral Golems (Erich_Grunewald, 3 Apr 2021, erichgrunewald.com)
- RFC: Meta-ethical uncertainty in AGI alignment (G Gordon Worley III, 8 Jun 2018)
- RFC: Philosophical Conservatism in AI Alignment Research (G Gordon Worley III, 15 May 2018)
- Neo-Mohism (Bae's Theorem, 16 Jun 2021)
- [Question] How can there be a godless moral world? (amaury lorin, 21 Jun 2021)
- The Metaethics and Normative Ethics of AGI Value Alignment: Many Questions, Some Implications (Dario Citrini, 16 Sep 2021)
- [Book Review] "Suffering-focused Ethics" by Magnus Vinding (KStub, 28 Dec 2021)
- Review: G.E.M. Anscombe's "Modern Moral Philosophy" (David_Gross, 20 Feb 2022)
- Reflection Mechanisms as an Alignment target: A survey (22 Jun 2022)