
Self-Deception

Last edit: 19 Mar 2023 19:51 UTC by Diabloto96

Self-deception is the state of preserving a false belief, often facilitated by denying or rationalizing away the relevance, significance, or importance of opposing evidence and logical arguments. Beliefs maintained through self-deception are often chosen for reasons other than how closely they approximate truth.

Related: Anticipated Experiences, Motivated Reasoning, Rationalization

On LessWrong, a common distinction is drawn between beliefs as expectation-controllers and the other things people commonly label as beliefs. When these conflict, a person is said to have engaged in self-deception.

Deceiving yourself is harder than it seems. What looks like a successfully adopted false belief may actually be just a belief in belief.

An example from No, Really, I’ve Deceived Myself:

When this woman was in high school, she thought she was an atheist. But she decided, at that time, that she should act as if she believed in God. And then—she told me earnestly—over time, she came to really believe in God.

So far as I can tell, she is completely wrong about that. Always throughout our conversation, she said, over and over, “I believe in God”, never once, “There is a God.” When I asked her why she was religious, she never once talked about the consequences of God existing, only about the consequences of believing in God. Never, “God will help me”, always, “my belief in God helps me”. When I put to her, “Someone who just wanted the truth and looked at our universe would not even invent God as a hypothesis,” she agreed outright.

She hasn’t actually deceived herself into believing that God exists or that the Jewish religion is true. Not even close, so far as I can tell.

On the other hand, I think she really does believe she has deceived herself.

Blog posts

A sequence by Eliezer Yudkowsky, part of the How To Actually Change Your Mind sequence.


Self-deception: Hypocrisy or Akrasia?
Eliezer Yudkowsky, 26 Mar 2007 17:03 UTC
65 points, 21 comments, 2 min read

Your Rationality is My Business
Eliezer Yudkowsky, 15 Apr 2007 7:31 UTC
146 points, 28 comments, 3 min read

The Third Alternative
Eliezer Yudkowsky, 6 May 2007 23:47 UTC
144 points, 84 comments, 3 min read

Belief in Belief
Eliezer Yudkowsky, 29 Jul 2007 17:49 UTC
186 points, 176 comments, 5 min read

The Importance of Saying “Oops”
Eliezer Yudkowsky, 5 Aug 2007 3:17 UTC
211 points, 33 comments, 2 min read

Doublethink (Choosing to be Biased)
Eliezer Yudkowsky, 14 Sep 2007 20:05 UTC
86 points, 169 comments, 4 min read

Singlethink
Eliezer Yudkowsky, 6 Oct 2007 19:24 UTC
105 points, 32 comments, 2 min read

Ends Don’t Justify Means (Among Humans)
Eliezer Yudkowsky, 14 Oct 2008 21:00 UTC
190 points, 97 comments, 4 min read

Ethical Injunctions
Eliezer Yudkowsky, 20 Oct 2008 23:00 UTC
74 points, 77 comments, 9 min read

No, Really, I’ve Deceived Myself
Eliezer Yudkowsky, 4 Mar 2009 23:29 UTC
118 points, 90 comments, 2 min read

Belief in Self-Deception
Eliezer Yudkowsky, 5 Mar 2009 15:20 UTC
92 points, 114 comments, 4 min read

Simultaneously Right and Wrong
Scott Alexander, 7 Mar 2009 22:55 UTC
112 points, 63 comments, 3 min read

Don’t Believe You’ll Self-Deceive
Eliezer Yudkowsky, 9 Mar 2009 8:03 UTC
35 points, 72 comments, 2 min read

“Self-pretending” is not as useful as we think
pwno, 25 Apr 2009 23:01 UTC
3 points, 15 comments, 1 min read

What I Tell You Three Times Is True
Scott Alexander, 2 May 2009 23:47 UTC
59 points, 32 comments, 4 min read

Epistemic vs. Instrumental Rationality: Case of the Leaky Agent
Wei Dai, 7 May 2009 23:09 UTC
18 points, 22 comments, 1 min read

Lie to me?
pwno, 24 Jun 2009 21:56 UTC
−1 points, 32 comments, 1 min read

The Strangest Thing An AI Could Tell You
Eliezer Yudkowsky, 15 Jul 2009 2:27 UTC
129 points, 613 comments, 2 min read

Absolute denial for atheists
taw, 16 Jul 2009 15:41 UTC
51 points, 606 comments, 1 min read

Shut Up And Guess
Scott Alexander, 21 Jul 2009 4:04 UTC
124 points, 110 comments, 5 min read

Belief in Belief vs. Internalization
Desrtopa, 29 Nov 2010 3:12 UTC
42 points, 59 comments, 2 min read

The Bias You Didn’t Expect
Psychohistorian, 14 Apr 2011 16:20 UTC
131 points, 91 comments, 2 min read

Trivers on Self-Deception
Scott Alexander, 12 Jul 2011 21:04 UTC
65 points, 25 comments, 4 min read

Strategic ignorance and plausible deniability
Kaj_Sotala, 10 Aug 2011 9:30 UTC
60 points, 59 comments, 4 min read

On self-deception
irrational, 5 Oct 2011 18:46 UTC
42 points, 83 comments, 6 min read

“The Book Of Mormon” or Belief In Belief, The Musical
Raw_Power, 14 Feb 2012 14:48 UTC
14 points, 34 comments, 4 min read

I believe it’s doublethink
kerspoon, 21 Feb 2012 22:30 UTC
32 points, 32 comments, 3 min read

A cynical explanation for why rationalists worry about FAI
aaronsw, 4 Aug 2012 12:27 UTC
28 points, 172 comments, 1 min read

Self-skepticism: the first principle of rationality
aaronsw, 6 Aug 2012 0:51 UTC
50 points, 86 comments, 2 min read

A Dialogue On Doublethink
LoganStrohl, 11 May 2014 19:38 UTC
102 points, 108 comments, 11 min read

If we can’t lie to others, we will lie to ourselves
paulfchristiano, 26 Nov 2016 22:29 UTC
45 points, 24 comments, 1 min read
(sideways-view.com)

The Just World Hypothesis
michael_vassar, 29 Oct 2017 6:03 UTC
17 points, 9 comments, 3 min read

Book Review: The Elephant in the Brain
Zvi, 31 Dec 2017 17:30 UTC
52 points, 9 comments, 31 min read
(thezvi.wordpress.com)

The Loudest Alarm Is Probably False
orthonormal, 2 Jan 2018 16:38 UTC
171 points, 28 comments, 2 min read, 1 review

Of Two Minds
Valentine, 17 May 2018 4:34 UTC
93 points, 12 comments, 2 min read

Rationality Is Not Systematized Winning
namespace, 11 Nov 2018 22:05 UTC
36 points, 20 comments, 1 min read
(www.thelastrationalist.com)

“Rationalizing” and “Sitting Bolt Upright in Alarm.”
Raemon, 8 Jul 2019 20:34 UTC
40 points, 56 comments, 4 min read

Negotiating With Yourself
orthonormal, 26 Jun 2020 23:55 UTC
25 points, 0 comments, 5 min read

Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle
Zack_M_Davis, 14 Jul 2020 6:03 UTC
50 points, 19 comments, 12 min read

A Refutation of (Global) “Happiness Maximization”
fare, 25 Aug 2020 20:33 UTC
−2 points, 4 comments, 15 min read

The Haters Gonna Hate Fallacy
Kaj_Sotala, 22 Sep 2020 12:20 UTC
47 points, 6 comments, 1 min read
(kajsotala.fi)

How to reach 80% of your goals. Exactly 80%.
Bart Bussmann, 10 Oct 2020 17:33 UTC
36 points, 11 comments, 1 min read

Notes on Fairness
David Gross, 7 Dec 2020 18:52 UTC
20 points, 4 comments, 7 min read

Unwitting cult leaders
Kaj_Sotala, 11 Feb 2021 11:10 UTC
119 points, 9 comments, 3 min read, 1 review
(kajsotala.fi)

Magic Shoes
onur, 24 Mar 2021 21:14 UTC
11 points, 0 comments, 3 min read

Forcing yourself to keep your identity small is self-harm
Gordon Seidoh Worley, 3 Apr 2021 14:03 UTC
38 points, 10 comments, 2 min read

[Question] Living with a homeopath—how?
sudoLife, 21 Aug 2021 12:41 UTC
5 points, 36 comments, 1 min read

Ordinary People and Extraordinary Evil: A Report on the Beguilings of Evil
David Gross, 20 Sep 2021 15:19 UTC
56 points, 31 comments, 4 min read

Book Review: Denial of Death
PatrickDFarley, 14 Oct 2021 4:28 UTC
14 points, 5 comments, 37 min read

Excerpts from Veyne’s “Did the Greeks Believe in Their Myths?”
Rob Bensinger, 8 Nov 2021 20:23 UTC
24 points, 1 comment, 16 min read

Do a cost-benefit analysis of your technology usage
TurnTrout, 27 Mar 2022 23:09 UTC
191 points, 53 comments, 13 min read

Understanding and avoiding value drift
TurnTrout, 9 Sep 2022 4:16 UTC
43 points, 9 comments, 6 min read

Be more effective by learning important practical knowledge using flashcards
Stenemo, 12 Oct 2022 18:05 UTC
5 points, 2 comments, 1 min read

College Admissions as a Brutal One-Shot Game
devansh, 5 Dec 2022 20:05 UTC
10 points, 26 comments, 2 min read

“Endgame safety” for AGI
Steven Byrnes, 24 Jan 2023 14:15 UTC
84 points, 10 comments, 6 min read

[Question] Lost in the sauce
JungleTact1cs, 2 Mar 2023 16:58 UTC
−5 points, 12 comments, 1 min read

The Unification of Physics and Metaphysics: 22 Axioms for All Existences
30 Apr 2023 4:16 UTC
−42 points, 0 comments, 2 min read

Expert trap: Why is it happening? (Part 2 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge
Paweł Sysiak, 9 Jun 2023 23:00 UTC
3 points, 0 comments, 7 min read

Going Crazy and Getting Better Again
Evenstar, 2 Jul 2023 18:55 UTC
118 points, 10 comments, 7 min read

[Question] What to do after a mental breakdown? (Dealing with fear of failure)
TeaTieAndHat, 13 Jul 2023 9:09 UTC
2 points, 0 comments, 4 min read

Expert trap – Ways out (Part 3 of 3)
Paweł Sysiak, 22 Jul 2023 13:06 UTC
4 points, 0 comments, 9 min read

Rationalization Maximizes Expected Value
Kevin Dorst, 30 Jul 2023 20:11 UTC
19 points, 10 comments, 7 min read
(kevindorst.substack.com)

An Idea on How LLMs Can Show Self-Serving Bias
Bruce W. Lee, 23 Nov 2023 20:25 UTC
6 points, 6 comments, 3 min read

Selfish AI Inevitable
Davey Morse, 6 Feb 2024 4:29 UTC
1 point, 0 comments, 1 min read