
Humility

Last edit: 16 Nov 2021 15:14 UTC by Rob Bensinger

Outside of LessWrong, “humility” usually refers to “a modest or low view of one’s own importance”. In common parlance, to be humble is to be meek, deferential, submissive, or unpretentious, “not arrogant or prideful”. Thus, in ordinary English “humility” and “modesty” have pretty similar connotations.

On LessWrong, Eliezer Yudkowsky has proposed that we instead draw a sharp distinction between two kinds of “humility” — social modesty, versus “scientific humility”.

In The Proper Use of Humility (2006), Yudkowsky writes:

You suggest studying harder, and the student replies: “No, it wouldn’t work for me; I’m not one of the smart kids like you; nay, one so lowly as myself can hope for no better lot.”

This is social modesty, not humility. It has to do with regulating status in the tribe, rather than with the scientific process.

If you ask someone to “be more humble,” by default they’ll associate the words with social modesty, which is an intuitive, everyday, ancestrally relevant concept. Scientific humility is a more recent and rarefied invention, and it is not inherently social. Scientific humility is something you would practice even if you were alone in a spacesuit, light-years from Earth with no one watching. Or even if you received an absolute guarantee that no one would ever criticize you again, no matter what you said or thought of yourself. You’d still double-check your calculations if you were wise.

On LW, then, we tend to follow the convention of using “humility” as a term of art for an important part of reasoning: combating overconfidence, recognizing and improving on your weaknesses, anticipating and preparing for likely errors you’ll make, etc.

In contrast, “modesty” here refers to the bad habit of letting your behavior and epistemics be ruled by not wanting to look arrogant or conceited. Yudkowsky argues in Inadequate Equilibria (2017) that psychological impulses like “status regulation and anxious underconfidence” have caused many people in the effective altruism and rationality communities to adopt a “modest epistemology” that involves rationalizing various false world-models and invalid reasoning heuristics.

LW tries to create a social environment where social reward and punishment are generally less salient, and where (to the extent they persist) they incentivize honesty and truth-seeking as much as possible. LW doesn’t always succeed at this, but it remains the goal.

The most commonly cited explanation of scientific/epistemic humility on LW is found in Yudkowsky’s “Twelve Virtues of Rationality” (2006):

The eighth virtue is humility.

To be humble is to take specific actions in anticipation of your own errors.

To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.

Who are most humble? Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans.

Because this world contains many whose grasp of rationality is abysmal, beginning students of rationality win arguments and acquire an exaggerated view of their own abilities. But it is useless to be superior: Life is not graded on a curve. The best physicist in ancient Greece could not calculate the path of a falling apple. There is no guarantee that adequacy is possible given your hardest effort; therefore spare no thought for whether others are doing worse. If you compare yourself to others you will not see the biases that all humans share. To be human is to make ten thousand errors. No one in this world achieves perfection.

Humility versus Modest Epistemology

While humility is based on the general idea that you are fallible (and should try to be calibrated and realistic about this), modest epistemology makes stronger claims: e.g., that you should distrust your own reasoning, and defer to the average or expert view, whenever your conclusions imply that you can outperform the consensus.

In contrast, Yudkowsky has argued:

I try to be careful to distinguish the virtue of avoiding overconfidence, which I sometimes call “humility,” from the phenomenon I’m calling “modest epistemology.” But even so, when overconfidence is such a terrible scourge according to the cognitive bias literature, can it ever be wise to caution people against underconfidence?

Yes. First of all, overcompensation after being warned about a cognitive bias is also a recognized problem in the literature; and the literature on that talks about how bad people often are at determining whether they’re undercorrecting or overcorrecting. Second, my own experience has been that while, yes, commenters on the Internet are often overconfident, it’s very different when I’m talking to people in person. My more recent experience seems more like 90% telling people to be less underconfident, to reach higher, to be more ambitious, to test themselves, and maybe 10% cautioning people against overconfidence. And yes, this ratio applies to men as well as women and nonbinary people, and to people considered high-status as well as people considered low-status.

The Sin of Underconfidence (2009) argues that underconfidence is one of the “three great besetting sins of rationalists” (the others being motivated reasoning/motivated skepticism and “cleverness”).

In Taboo “Outside View” (2021), Daniel Kokotajlo notes that the original meaning of “outside view” (reference class forecasting) has become eroded as EAs have begun using “outside view” to refer to everything from reasoning by analogy, to trend extrapolation, to foxy aggregation, to bias correction, to “deference to wisdom of the many”, to “anti-weirdness heuristics”, to priors, etc.

Additionally, proponents of outside-viewing often behave as though there is a single obvious reference class to use—“the outside view”, as opposed to “an outside view”—and tend to neglect the role of detailed model-building in helping us figure out which reference classes are relevant.

The lesson of this isn’t “it’s bad to ever use reference class forecasting, trend extrapolation, etc.”, but rather that these tools are part and parcel of building good world-models and deriving good predictions from them, rather than being a robust replacement for world-modeling.
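To make this concrete, here is a minimal sketch (the numbers, function names, and the simple weighting scheme are all invented for illustration, not drawn from any of the cited posts) of treating a reference-class base rate as one input to a forecast, rather than a replacement for model-building:

```python
# Hypothetical illustration: combine a reference-class base rate with an
# inside-view estimate, instead of treating either one as "the" answer.

def combine_estimates(base_rate: float, inside_view: float, model_weight: float) -> float:
    """Weighted average of two probability estimates.

    model_weight expresses how far our detailed world-model should move
    us away from the reference-class base rate (0 = pure outside view,
    1 = pure inside view).
    """
    return (1 - model_weight) * base_rate + model_weight * inside_view

# Base rate: 30% of "similar" past projects finished on time.
# Inside view: our detailed model of this project says 60%.
# Note that choosing which reference class counts as "similar" is itself
# a modeling judgment -- there is no single obvious "the outside view".
estimate = combine_estimates(base_rate=0.3, inside_view=0.6, model_weight=0.5)
print(round(estimate, 2))  # 0.45
```

The point of the sketch is only that the weighting step forces an explicit judgment about how informative the reference class is, which is exactly the world-modeling work the text says cannot be skipped.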

Likewise, the lesson isn’t “it’s bad to ever worry about overconfidence”, but rather that overconfidence and underconfidence are both problems, neither is a priori worse than the other, and fixing them requires doing a lot of legwork and model-building about your own capabilities—again, there isn’t a royal road to ‘getting the right answer without having to figure things out’.
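As a toy illustration of that legwork (the forecast data below is invented), a calibration check treats overconfidence and underconfidence symmetrically: each is just a gap between stated confidence and observed frequency, in opposite directions.

```python
# Hypothetical sketch: detect mis-calibration by comparing stated
# confidence with the observed frequency of correct predictions.

def observed_frequency(forecasts):
    """forecasts: list of (stated_probability, outcome) pairs,
    where outcome is 1 if the prediction came true, else 0."""
    return sum(outcome for _, outcome in forecasts) / len(forecasts)

# Ten predictions all made at 90% confidence; only 6 came true.
overconfident = [(0.9, o) for o in [1, 1, 1, 0, 1, 0, 1, 0, 1, 0]]
# Ten predictions all made at 50% confidence; 8 came true.
underconfident = [(0.5, o) for o in [1, 1, 1, 1, 0, 1, 1, 0, 1, 1]]

print(observed_frequency(overconfident))   # 0.6 -- below the stated 0.9
print(observed_frequency(underconfident))  # 0.8 -- above the stated 0.5
```

Neither gap is privileged a priori: the same comparison surfaces both failure modes, and deciding which one you suffer from requires actually collecting the track record.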

Related pages

The Proper Use of Humility (Eliezer Yudkowsky, 1 Dec 2006; 176 points, 54 comments, 5 min read)

The Modesty Argument (Eliezer Yudkowsky, 10 Dec 2006; 54 points, 40 comments, 10 min read)

Einstein’s Superpowers (Eliezer Yudkowsky, 30 May 2008; 109 points, 91 comments, 5 min read)

Say It Loud (Eliezer Yudkowsky, 19 Sep 2008; 61 points, 20 comments, 2 min read)

Extreme updating: The devil is in the missing details (PhilGoetz, 25 Mar 2009; 7 points, 17 comments, 2 min read)

The Sin of Underconfidence (Eliezer Yudkowsky, 20 Apr 2009; 99 points, 187 comments, 6 min read)

Rationalist Role in the Information Age (byrnema, 30 Apr 2009; 7 points, 19 comments, 3 min read)

How to use “philosophical majoritarianism” (jimmy, 5 May 2009; 13 points, 9 comments, 4 min read)

Are You Anosognosic? (Eliezer Yudkowsky, 19 Jul 2009; 20 points, 67 comments, 1 min read)

The Prediction Hierarchy (RobinZ, 19 Jan 2010; 28 points, 38 comments, 3 min read)

Litany of a Bright Dilettante (shminux, 18 Apr 2013; 89 points, 71 comments, 1 min read)

Simultaneous Overconfidence and Underconfidence (abramdemski, 3 Jun 2015; 37 points, 6 comments, 5 min read)

Placing Yourself as an Instance of a Class (abramdemski, 3 Oct 2017; 35 points, 5 comments, 3 min read)

Inadequacy and Modesty (Eliezer Yudkowsky, 28 Oct 2017; 130 points, 77 comments, 18 min read)

In defence of epistemic modesty (Thrasymachus, 29 Oct 2017; 31 points, 20 comments, 36 min read)

Against Shooting Yourself in the Foot (Eliezer Yudkowsky, 16 Nov 2017; 47 points, 3 comments, 3 min read)

Timeless Modesty? (abramdemski, 24 Nov 2017; 17 points, 2 comments, 3 min read)

Mistakes with Conservation of Expected Evidence (abramdemski, 8 Jun 2019; 212 points, 25 comments, 12 min read)

On Chesterton’s Fence (trentbrick, 10 Sep 2020; 21 points, 3 comments, 10 min read)

Notes on Wisdom (David Gross, 14 Nov 2020; 6 points, 0 comments, 5 min read)

Notes on Humility (David Gross, 29 Nov 2020; 18 points, 4 comments, 8 min read)

Christenson’s “Epistemology of Disagreement: The Good News” (Aidan_Kierans, 16 May 2021; 1 point, 0 comments, 5 min read)

Why the Problem of the Criterion Matters (Gordon Seidoh Worley, 30 Oct 2021; 24 points, 9 comments, 8 min read)

How I Formed My Own Views About AI Safety (Neel Nanda, 27 Feb 2022; 64 points, 6 comments, 13 min read) (www.neelnanda.io)

When should you defer to expertise? A useful heuristic (Crosspost from EA forum) (Noosphere89, 13 Oct 2022; 9 points, 3 comments, 2 min read) (forum.effectivealtruism.org)

Dangers of deference (TsviBT, 8 Jan 2023; 55 points, 5 comments, 2 min read)

Jordan Peterson: Guru/Villain (Bryan Frances, 3 Feb 2023; −14 points, 6 comments, 9 min read)

Do we have a plan for the “first critical try” problem? (Christopher King, 3 Apr 2023; −3 points, 14 comments, 1 min read)

In defence of epistemic modesty [distillation] (Luise, 10 May 2023; 16 points, 2 comments, 9 min read)

Why You Should Never Update Your Beliefs (Arjun Panickssery, 29 Jul 2023; 69 points, 17 comments, 4 min read) (arjunpanickssery.substack.com)

“Desperate Honesty” by Agnes Callard (David Gross, 1 Aug 2023; 11 points, 0 comments, 2 min read) (dailynous.com)

Listen For What You Don’t Hear: The Case for Contrarianism (Yashvardhan Sharma, 14 Aug 2023; 3 points, 1 comment, 5 min read)

Why Using A Neutral Currency Is Critical To Afford AI Development To Empower Creating Sustainable Excellence (X O, 25 Feb 2024; −27 points, 11 comments, 48 min read)