Last edit: 1 Oct 2020 23:16 UTC by Ruby

Metaethics is one of the three branches of ethics usually recognized by philosophers, the others being normative ethics and applied ethics. It is the field of study that tries to understand the metaphysical, epistemological, and semantic characteristics of moral values, as well as their foundations and scope. It concerns questions such as "Are moral judgments objective or subjective, relative or absolute?", "Do moral facts exist?", and "How do we learn moral values?" (as distinct from object-level moral questions like "Ought I to steal from banks in order to give the money to the deserving poor?").

Metaethics on LessWrong

Eliezer Yudkowsky wrote a sequence about metaethics, the Metaethics sequence, which he worried failed to convey his central point; he approached the same problem again from a different angle in Highly Advanced Epistemology 101 for Beginners. From a standard philosophical standpoint, Yudkowsky's position is closest to Frank Jackson's moral functionalism /​ analytic descriptivism. Yudkowsky could loosely be characterized as a moral cognitivist (someone who believes moral sentences are either true or false) but not a moral realist: he denies that moral sentences refer to facts about the external world. Yudkowsky believes that moral cognition in any single human is at least potentially about a subject matter that is 'logical' in the sense that its semantics can be pinned down by axioms, and hence that moral cognition can bear truth-values; he also believes that human beings who use similar words like "morality" can be talking about highly overlapping subject matter, but not that all possible minds would find the truths about this subject matter to be psychologically compelling.

Luke Muehlhauser has written a sequence, No-Nonsense Metaethics, in which he claims that many of the questions of metaethics can be answered today using modern neuroscience and rationality. He explains that conventional metaethics, or "Austere Metaethics", can choose the right action in a given situation after assuming a definition of 'right', but is useless without assuming some criterion for 'right'. He proposes instead "Empathic Metaethics", which examines your underlying cognitive algorithms to understand what you think 'right' means, helps clarify any emotional and cognitive contradictions in that concept, and then tells you what the right thing to do is according to your own definition of right. This approach is highly relevant to the Friendly AI problem as a way of defining human-like goals and motivations when designing AIs.

Further Reading & References

See Also

Moral uncertainty vs related concepts
MichaelA · 11 Jan 2020 10:03 UTC · 26 points · 13 comments · 16 min read

The Value Definition Problem
SDM · 18 Nov 2019 19:56 UTC · 14 points · 6 comments · 11 min read

Deontology for Consequentialists
Alicorn · 30 Jan 2010 17:58 UTC · 52 points · 255 comments · 6 min read

Moral uncertainty: What kind of 'should' is involved?
MichaelA · 13 Jan 2020 12:13 UTC · 14 points · 11 comments · 13 min read

Existential Angst Factory
Eliezer Yudkowsky · 19 Jul 2008 6:55 UTC · 63 points · 101 comments · 4 min read

Conceptual Analysis and Moral Theory
lukeprog · 16 May 2011 6:28 UTC · 88 points · 481 comments · 8 min read

A theory of human values
Stuart_Armstrong · 13 Mar 2019 15:22 UTC · 27 points · 13 comments · 7 min read

Deconfusing Human Values Research Agenda v1
G Gordon Worley III · 23 Mar 2020 16:25 UTC · 23 points · 12 comments · 4 min read

Is Morality Given?
Eliezer Yudkowsky · 6 Jul 2008 8:12 UTC · 26 points · 100 comments · 8 min read

Three Worlds Collide (0/​8)
Eliezer Yudkowsky · 30 Jan 2009 12:07 UTC · 62 points · 95 comments · 1 min read

Quick thoughts on empathic metaethics
lukeprog · 12 Dec 2017 21:46 UTC · 18 points · 0 comments · 9 min read

Can we make peace with moral indeterminacy?
Charlie Steiner · 3 Oct 2019 12:56 UTC · 16 points · 8 comments · 3 min read

Meta-preferences two ways: generator vs. patch
Charlie Steiner · 1 Apr 2020 0:51 UTC · 18 points · 0 comments · 2 min read

Normativity and Meta-Philosophy
Wei_Dai · 23 Apr 2013 20:35 UTC · 29 points · 56 comments · 1 min read

Value uncertainty
MichaelA · 29 Jan 2020 20:16 UTC · 16 points · 3 comments · 14 min read

Morality vs related concepts
MichaelA · 7 Jan 2020 10:47 UTC · 26 points · 17 comments · 8 min read

Six Plausible Meta-Ethical Alternatives
Wei_Dai · 6 Aug 2014 0:04 UTC · 67 points · 36 comments · 3 min read

The Urgent Meta-Ethics of Friendly Artificial Intelligence
lukeprog · 1 Feb 2011 14:15 UTC · 63 points · 252 comments · 1 min read

Heading Toward: No-Nonsense Metaethics
lukeprog · 24 Apr 2011 0:42 UTC · 48 points · 59 comments · 2 min read

The Moral Void
Eliezer Yudkowsky · 30 Jun 2008 8:52 UTC · 52 points · 112 comments · 4 min read

Pluralistic Moral Reductionism
lukeprog · 1 Jun 2011 0:59 UTC · 61 points · 326 comments · 15 min read

No Universally Compelling Arguments
Eliezer Yudkowsky · 26 Jun 2008 8:29 UTC · 53 points · 57 comments · 5 min read

Created Already In Motion
Eliezer Yudkowsky · 1 Jul 2008 6:03 UTC · 56 points · 23 comments · 3 min read

Realism and Rationality
bmgarfinkel · 16 Sep 2019 3:09 UTC · 43 points · 49 comments · 23 min read

The Sheer Folly of Callow Youth
Eliezer Yudkowsky · 19 Sep 2008 1:30 UTC · 52 points · 17 comments · 7 min read

The Meaning of Right
Eliezer Yudkowsky · 29 Jul 2008 1:28 UTC · 43 points · 152 comments · 23 min read

Changing Your Metaethics
Eliezer Yudkowsky · 27 Jul 2008 12:36 UTC · 36 points · 21 comments · 5 min read

Setting Up Metaethics
Eliezer Yudkowsky · 28 Jul 2008 2:25 UTC · 20 points · 34 comments · 4 min read

Moral Complexities
Eliezer Yudkowsky · 4 Jul 2008 6:43 UTC · 22 points · 36 comments · 1 min read

Could Anything Be Right?
Eliezer Yudkowsky · 18 Jul 2008 7:19 UTC · 37 points · 39 comments · 6 min read

Causality and Moral Responsibility
Eliezer Yudkowsky · 13 Jun 2008 8:34 UTC · 44 points · 55 comments · 5 min read

What is Eliezer Yudkowsky's meta-ethical theory?
lukeprog · 29 Jan 2011 19:58 UTC · 47 points · 375 comments · 1 min read

You Provably Can't Trust Yourself
Eliezer Yudkowsky · 19 Aug 2008 20:35 UTC · 29 points · 18 comments · 6 min read

Inseparably Right; or, Joy in the Merely Good
Eliezer Yudkowsky · 9 Aug 2008 1:00 UTC · 37 points · 33 comments · 4 min read

Philosophical self-ratification
jessicata · 3 Feb 2020 22:48 UTC · 23 points · 13 comments · 5 min read

A Model of Rational Policy: When Is a Goal "Good"?
FCCC · 10 Oct 2020 17:05 UTC · 2 points · 0 comments · 17 min read

Whither Moral Progress?
Eliezer Yudkowsky · 16 Jul 2008 5:04 UTC · 18 points · 101 comments · 2 min read

Mirrors and Paintings
Eliezer Yudkowsky · 23 Aug 2008 0:29 UTC · 18 points · 42 comments · 8 min read

Moral Error and Moral Disagreement
Eliezer Yudkowsky · 10 Aug 2008 23:32 UTC · 21 points · 133 comments · 6 min read

Inner Goodness
Eliezer Yudkowsky · 23 Oct 2008 22:19 UTC · 16 points · 31 comments · 7 min read

The Bedrock of Fairness
Eliezer Yudkowsky · 3 Jul 2008 6:00 UTC · 40 points · 102 comments · 5 min read

Is Fairness Arbitrary?
Eliezer Yudkowsky · 14 Aug 2008 1:54 UTC · 5 points · 37 comments · 6 min read

Invisible Frameworks
Eliezer Yudkowsky · 22 Aug 2008 3:36 UTC · 18 points · 47 comments · 6 min read

Ethics Notes
Eliezer Yudkowsky · 21 Oct 2008 21:57 UTC · 16 points · 46 comments · 11 min read

While we're on the subject of meta-ethics...
CronoDAS · 17 Apr 2009 8:01 UTC · 7 points · 4 comments · 1 min read

[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality
shminux · 19 Jun 2014 20:17 UTC · 31 points · 46 comments · 1 min read

What are the leftover questions of metaethics?
cousin_it · 28 Apr 2011 8:46 UTC · 30 points · 55 comments · 1 min read

My unbundling of morality
Rudi C · 30 Dec 2020 15:19 UTC · 7 points · 2 comments · 1 min read

AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy
xuan · 1 Jan 2021 0:08 UTC · 28 points · 19 comments · 20 min read

Morality is Awesome
[deleted] · 6 Jan 2013 15:21 UTC · 138 points · 437 comments · 3 min read

Moral Golems
Erich_Grunewald · 3 Apr 2021 10:12 UTC · 7 points · 2 comments · 6 min read