Knowing About Biases Can Hurt People
Once upon a time I tried to tell my mother about the problem of expert calibration, saying: “So when an expert says they’re 99% confident, it only happens about 70% of the time.” Then there was a pause as, suddenly, I realized I was talking to my mother, and I hastily added: “Of course, you’ve got to make sure to apply that skepticism evenhandedly, including to yourself, rather than just using it to argue against anything you disagree with—”
And my mother said: “Are you kidding? This is great! I’m going to use it all the time!”
Taber and Lodge’s “Motivated Skepticism in the Evaluation of Political Beliefs” describes the confirmation of six predictions:
Prior attitude effect. Subjects who feel strongly about an issue—even when encouraged to be objective—will evaluate supportive arguments more favorably than contrary arguments.
Disconfirmation bias. Subjects will spend more time and cognitive resources denigrating contrary arguments than supportive arguments.
Confirmation bias. Subjects free to choose their information sources will seek out supportive rather than contrary sources.
Attitude polarization. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization.
Attitude strength effect. Subjects voicing stronger attitudes will be more prone to the above biases.
Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases.
If you’re irrational to start with, having more knowledge can hurt you. For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves.
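The claim that information can never have negative expected utility for an ideal Bayesian is a standard decision-theory result (the nonnegative value of information). A minimal sketch, in a toy decision problem of my own construction (the states, actions, and payoffs here are illustrative assumptions, not from the post): acting after learning the true state can only match or beat the best blind action.

```python
# Toy illustration of nonnegative value of information: an expected-utility
# maximizer never does worse, in expectation, by learning the true state
# before acting. States, actions, and payoffs are made up for illustration.

def value_of_information(prior, utility):
    """prior: {state: prob}; utility: {(action, state): payoff}."""
    actions = {a for (a, _) in utility}
    # Best expected utility when forced to act without the information.
    eu_blind = max(
        sum(prior[s] * utility[(a, s)] for s in prior) for a in actions
    )
    # Expected utility when the state is revealed before choosing an action.
    eu_informed = sum(
        prior[s] * max(utility[(a, s)] for a in actions) for s in prior
    )
    return eu_informed - eu_blind  # provably >= 0 for any prior and payoffs

prior = {"rain": 0.3, "sun": 0.7}
utility = {("umbrella", "rain"): 1, ("umbrella", "sun"): -1,
           ("no_umbrella", "rain"): -5, ("no_umbrella", "sun"): 2}
print(value_of_information(prior, utility))  # positive here, never negative
```

The catch, as the post says, is that this guarantee holds only for the ideal reasoner; a motivated human can take the same information and use it selectively.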
I’ve seen people severely messed up by their own knowledge of biases. They have more ammunition with which to argue against anything they don’t like. And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich’s “dysrationalia” sense of stupidity.
You can think of people who fit this description, right? People with high g-factor who end up being less effective because they are too sophisticated as arguers? Do you think you’d be helping them—making them more effective rationalists—if you just told them about a list of classic biases?
I recall someone who learned about the calibration/overconfidence problem. Soon after, he said: “Well, you can’t trust experts; they’re wrong so often—as experiments have shown. So therefore, when I predict the future, I prefer to assume that things will continue historically as they have—” and went off into a whole complex, error-prone, highly questionable extrapolation. Somehow, when it came to trusting his own preferred conclusions, all those biases and fallacies seemed much less salient—leapt much less readily to mind—than when he needed to counter-argue someone else.
I told the one about the problem of disconfirmation bias and sophisticated argument, and lo and behold, the next time I said something he didn’t like, he accused me of being a sophisticated arguer. He didn’t try to point out any particular sophisticated argument, any particular flaw—just shook his head and sighed sadly over how I was apparently using my own intelligence to defeat itself. He had acquired yet another Fully General Counterargument.
Even the notion of a “sophisticated arguer” can be deadly, if it leaps all too readily to mind when you encounter a seemingly intelligent person who says something you don’t like.
I endeavor to learn from my mistakes. The last time I gave a talk on heuristics and biases, I started out by introducing the general concept by way of the conjunction fallacy and representativeness heuristic. And then I moved on to confirmation bias, disconfirmation bias, sophisticated argument, motivated skepticism, and other attitude effects. I spent the next thirty minutes hammering on that theme, reintroducing it from as many different perspectives as I could.
I wanted to get my audience interested in the subject. Well, a simple description of conjunction fallacy and representativeness would suffice for that. But suppose they did get interested. Then what? The literature on bias is mostly cognitive psychology for cognitive psychology’s sake. I had to give my audience their dire warnings during that one lecture, or they probably wouldn’t hear them at all.
Whether on paper or in speech, I now try never to mention calibration and overconfidence unless I have first talked about disconfirmation bias, motivated skepticism, sophisticated arguers, and dysrationalia in the mentally agile. First, do no harm!
Humans aren’t just not perfect Bayesians. Very, very few of us are even Bayesian wannabes. In essence, everyone who thinks that it is more moral/ethical to hold some proposition than to hold its converse is taking some criterion other than apparent truth as normative with respect to the evaluation of beliefs.
This is something of a nitpick, but I think that it is more moral/ethical to hold a proposition than to hold its converse if there is good reason to think that that proposition is true. Is this un-Bayesian?
It’s a meta-level/aliasing sort of problem, I think. You don’t believe it’s more ethical/moral to believe any specific proposition, you believe it’s more ethical/moral to believe ‘the proposition most likely to be true’, which is a variable which can be filled with whatever proposition the situation suggests, so it’s a different class of thing. Effectively it’s equivalent to ‘taking apparent truth as normative’, so I’d call it the only position of that format that is Bayesian.
This website seems to have two definitions of rationality: rationality as truth-finding, and rationality as goal-achieving. Since truth deals with “is”, and morality deals with “ought”, morality will be of the latter kind. Because they are two different definitions, at some point they can be at odds—but what if your primary goal is truth-finding (which might be required by your statement if you make no exceptions for beneficial self-deception)? How would you feel about ignoring some truths, because they might lead you to miss other truths?
This article is about how learning some truths can prevent you from learning other truths, with an implication that order of learning will mitigate these effects. In some cases, you might be well served by purging truths from your mind (for example, “there is a miniscule possibility of X” will activate priming and availability heuristic). Some truths are simply much more useful than others, so what do you do if some lesser truths can get in the way of greater truths?
Neither truth-finding nor goal-achieving quite captures the usual sense of the word around here. I’d say the latter is closer to how we usually use it, in that we’re interested in fulfilling human values; but explicit, surface-level goals don’t always further deep values, and in fact can be actively counterproductive thanks to bias or partial or asymmetrical information.
Almost everyone who thinks they terminally value truth-finding is wrong; it makes a good applause light, but our minds just aren’t built that way. But since there are so many cognitive and informational obstacles in our way, finding the real truth is at some point going to be critically important to fulfilling almost any real-world set of human values.
On the other hand, I don’t rule out beneficial self-deception in some situations, either. It shouldn’t be necessary for any kind of hypothetical rationalist super-being, but there aren’t too many of those running around.
This seems like a shorthand for denying the existence of morals and ethics. I don’t think that’s what you mean, but I’ve heard that exact argument used to support nihilism.
If I say “torture is unethical”, I might mean “I believe that torture, for its own sake and without a greater positive offset, is unethical”, which is objectively true (please, I entreat you to examine my source code). But it would be just as objectively true to say the negation if I actually believed the negation. Is it neither moral nor immoral to hold the belief that torture is a bad thing?
Hmm… thanks for writing this. I just realized that I may resemble your argumentative friend in some ways. I should bookmark this.
Stanovich’s “dysrationalia” sense of stupidity is one of my greatest fears.
I didn’t know whether to post this reply to “Black swans from the future” or here, so I’ll just reference it:
Good post, Eliezer.
I’ve pointed before to this very good review of Philip Tetlock’s book, Expert Political Judgment. The review describes the results of Tetlock’s experiments evaluating expert predictions in the field of international politics, where they did very poorly. On average the experts did about as well as random predictions and were badly outperformed by simple statistical extrapolations.
Even after going over the many ways the experts failed in detail, and even though the review is titled “Everybody’s An Expert”, the reviewer concludes, “But the best lesson of Tetlock’s book may be the one that he seems most reluctant to draw: Think for yourself.”
Does that make sense, though? Think for yourself? If you’ve just read an entire book describing how poorly people did who thought for themselves and had a lot more knowledge than you do, is it really likely that you will do better to think for yourself? This advice looks like the same kind of flaw Eliezer describes here, the failure to generalize from knowledge of others’ failures to appreciation of your own.
There’s a better counterargument than that in Tetlock—one of the data points he collected was from a group of university undergraduates, and they did worse than the worst experts, worse than blind chance. Thinking for yourself is the worst option Tetlock considered.
Thinking for yourself is the worst option Tetlock considered.
Worse for making predictions, I suppose. But if people never think for themselves, we are never going to have any new ideas. Statistical extrapolation may be great for prediction, but it is poor for originality. So we value thinking for oneself. But the hit-rate is terrible. We have to put up with huge amounts of crap to get the gems. Most Ideas are Wrong, as I like to say when people tell me I’m being “too critical”.
Oh, it’s less general than that—it’s worse for political forecasting specifically. Other kinds of prediction (e.g. will this box fit under this table?), thinking for yourself is often one of the better options.
But, you know, political forecasting is one of the things we often care about. So knowing rules of thumb like “trust the experts, but not very much” is quite helpful.
Actually, when I was rereading the comments and saw your mention of Tetlock, I thought you would point out the bit where he noted the hedgehog predictors made inferior predictions within their area of expertise than without.
Fantastic article. The problem is that now I have a pet theory with which to dismiss anything said by a TV pundit with whom I disagree: I’d be better off guessing myself or at random than listening to them.
Maybe I can estimate how many variables various conclusions rest on, and how much uncertainty is in each, in order to estimate the total uncertainty in various possible outcomes. I’ll have to pay special attention to any evidence that undercuts my beliefs and assumptions, to try to avoid confirmation bias.
That’s great, stop watching TV. TV pundits are an awful source of information.
One of my past life decisions I consistently feel very happy about.
TV pundits are entertainers. They’re hired less for their insightful commentary and more for their ability to engage an audience.
Hal, to be precise, the bias is generalizing from knowledge of others’ failures to skepticism about disliked conclusions, but failing to generalize to skepticism about preferred conclusions or one’s own conclusions. That is, the error is not absence of generalization, but imbalance of generalization, which is far deadlier. I do agree with you that the reviewer’s conclusion is not supported (to put it mildly) by the evidence under review.
So why, then, is this blog not incorporating more statistical and collective de-biasing mechanisms? There are some out-of-the-box web widgets and mildly manual methods to incorporate that would at the very least provide new grist for the discussion mill.
The error here is similar to one I see all the time in beginning philosophy students: when confronted with reasons to be skeptics, they instead become relativists. That is, where the rational conclusion is to suspend judgment about an issue, all too many people instead conclude that any judgment is as plausible as any other.
I would love to hear more about such methods, Rafe. This blog tends to be a somewhat abstract and “meta” but I would like to do more case studies on specific issues and look at how we could come to a less biased view of the truth. I did a couple of postings on the “Peak Oil” controversy a few months ago along these lines.
Rafe, name three.
Rooney, I don’t disagree that this would be a mistake, but in my experience the balance of evidence is very rarely exactly even—because hypotheses have inherent penalties for complexity. Where there is no evidence in favor of a complicated proposed belief, it is almost always correct to reject it, not suspend judgment. The only cases I can think of where I suspend judgment are binary or small discrete hypothesis spaces, like “Was it murder or suicide?”, or matters like the anthropic principle, where there is no null hypothesis to take refuge in, and any position is attackable.
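The “inherent penalties for complexity” above can be made concrete with a description-length prior: each extra bit needed to specify a hypothesis halves its prior probability. This sketch is my own illustration (the specific penalty scheme is an assumption, not something the comment spells out), but it shows why a complicated claim with no supporting evidence starts far behind, rather than at even odds.

```python
# Illustrative simplicity prior: each additional bit of description length
# halves the prior probability, so complicated hypotheses need evidence
# just to climb back to parity with simple ones.

def simplicity_prior(description_length_bits):
    """Unnormalized prior weight for a hypothesis of the given complexity."""
    return 2.0 ** -description_length_bits

print(simplicity_prior(3))   # 0.125
print(simplicity_prior(30))  # ~1e-9: "reject absent evidence" in practice
```

Under such a prior, “suspend judgment” is only the right response when the competing hypotheses are comparably simple, which matches the small-discrete-hypothesis-space cases (murder vs. suicide) in the comment above.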
I have also had repeated encounters with individuals who take the bias literature to provide ‘equal and opposite biases’ for every situation, and take this as reason to continue to hold their initial beliefs. The situation is reminiscent of many economic discussions, where bright minds question whether the effect of a change on some quantity will be positive, negative or ambiguous. The discussants eagerly search for at least one theoretical effect that could move the quantity in a positive direction, one that could move it in the negative, and then declare the effect ambiguous after demonstrating their cleverness, without evaluating the actual size of the opposed effects.
I would recommend that when we talk about opposed biases, at least those for which there is an experimental literature, we should give rough indications of their magnitudes to discourage our audiences from utilizing the ‘it’s all a wash’ excuse to avoid analysis.
As someone who seems to have “thrown the kitchen sink” of cognitive biases at the free will problem, I wonder if I’ve suffered from this meta-bias myself. I find only modest reassurance in the facts that: (i) others have agreed with me and (ii) my challenge for others to find biases that would favor disbelief in free will has gone almost entirely unanswered.
But this is a good reminder that one can get carried away...
Eliezer, I agree that exactly even balances of evidence are rare. However, I would think suspending judgment to be rational in many situations where the balance of evidence is not exactly even. For example, if I roll a die, it would hardly be rational to believe “it will not come up 5 or 6”, despite the balance of evidence being in favor of such a belief. If you are willing to make >50% the threshold of rational belief, you will hold numerous false and contradictory beliefs.
Also, I have some doubt about your claim that when “there is no evidence in favor of a complicated proposed belief, it is almost always correct to reject it”. If you proposed a complicated belief of 20th century physics (say, Bell’s theorem) to Archimedes, he would be right to say he has no evidence in its favor. Nonetheless, it would not be correct for Archimedes to conclude that Bell’s theorem is therefore false.
Perhaps I am misunderstanding you.
A Bayesian would not say definitively that it would not come up as 5 or 6. However, if you were to wager on whether or not the die will come up as either 5 or 6, the only rational position is to bet against it. Given enough throws of the die, you will be right 2⁄3 of the time.
At the most basic level, the difference between Bayesian reasoning and traditional rationalism is that a Bayesian only thinks in terms of likelihoods. It’s not a matter of “this position is at a >50% probability, therefore it is correct”; it is a matter of “this position is at a >50% probability, so I will hold it to be more likely correct than incorrect until that probability changes”.
It’s a difficult way of thinking, as it doesn’t really allow you to definitively decide anything with perfect certainty. There are very few beliefs in this world for which a 100% probability exists (there must be zero evidence against a belief for this to occur). Math proofs, really, are the only class of beliefs that can hold such certainty. As such the possibility of being wrong pretty much always exists, and must always be considered, though by how much depends on the likelihood of the belief being incorrect.
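The dice claim above is easy to check with a quick simulation (a minimal sketch; the seed, trial count, and variable names are arbitrary choices, and the 2⁄3 figure is just the fraction of rolls landing on 1 through 4):

```python
import random

random.seed(0)
trials = 100_000

# Bet against "the die comes up 5 or 6" on every roll; the bet
# wins whenever the roll lands in {1, 2, 3, 4}.
wins = sum(1 for _ in range(trials) if random.randint(1, 6) <= 4)

frequency = wins / trials
print(f"Betting against 5-or-6 won {frequency:.3f} of the time")  # ~0.667
```

The point is not that the belief “it will not come up 5 or 6” is certain, only that it wins about two throws in three, which is exactly the likelihood a Bayesian assigns it.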
If no evidence is given for the belief, of course he is right to reject it. It is the only rational position Archimedes can take. Without evidence, Archimedes must assign a 0%, or near 0%, probability to the likelihood that the 20th century position is correct. However, if he is presented with the evidence for which we now believe such things, his probability assignment must change, and given the amount of evidence available it would be irrational to reject it.
Just because you were wrong does not mean you were thinking irrationally. The converse of that is also true: just because you were right does not mean you were thinking rationally.
Also note that it is a fairly well known fact that 20th century physics is broken—i.e. incorrect, or at least not completely correct. We simply have nothing particularly viable to supersede it with yet, so we are stuck until we find the more correct theories of physics. It would be pretty funny to convince Archimedes of their correctness, only to follow it up with all the areas where modern physics breaks down.
You need to specify even odds. Bayesians will bet on just about anything if the price is right.
Odds on dice are usually assumed even unless specified otherwise, but it’s never wrong to specify it, so thanks.
On the other hand when considering rational agency some come very close to defining ‘probability’ based on what odds would be accepted for bets on specified events.
There are none.
Thanks, I was a little unsure of stating that there is no such thing as 100% probability. That post is very helpful.
Ah, the Godelian “This sentence is false.”
If you gave him almost anything else that complex, it actually would be false. Once something gets even moderately complex, there is a huge number of other things that complex.
Technically, he should figure that there’s just a one in 10^somethingorother chance that it’s true, but you can’t remember all 10^somethingorother things that are that unlikely, so you’re best off to reject it.
It would be irrational to believe “it will not come up 5 or 6” because P(P(5 or 6) = 0) = 0, so you know for certain that it’s false. As you said, “Claims about the probability of a given claim being true, helpful as they may be in many cases, are distinct from the claim itself.” Before taking up any belief (if the situation demands taking up a belief, like in a bet, or living life), a Bayesian would calculate the likelihood of it being true vs the likelihood of it being false, and will favour the higher likelihood. In this case, the likelihood that “it will not come up 5 or 6” is certainly true is 0, so a Bayesian would not take up that position. Now, you might observe that the belief “1, 2, 3 or 4 will come up” also holds a likelihood of zero. In the case of a die roll, any statement of this form will be false, so a Bayesian will take up beliefs that talk probabilities and not certainties. (As Bigjeff explains, “At the most basic level, the difference between Bayesian reasoning and traditional rationalism is a Bayesian only thinks in terms of likelihoods.”)
Of course, one can always say “I don’t know”, but saying “I don’t know” would have an inferior utility in life to being a Bayesian. So, for example, assume that your life depends on a series of die rolls. You can take two positions: 1) You say “I believe I don’t know what the outcome would be” on every roll. 2) You bet on every roll according to the information you have (in other words, you say “I believe that outcome X has Y chance of turning up”). Both positions would of course be agreeable, but the second position would give you a higher payoff in life. Or so Bayesians believe.
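A toy version of those two positions (all the payoffs here are hypothetical, chosen only so the offered bet has positive expected value; the names `bayesian` and `abstainer` are just labels for the two strategies):

```python
import random

random.seed(1)

# Hypothetical bet, offered on each roll: pay 1 to win 2.4 if the
# roll lands in {1, 2, 3, 4}.  P(win) = 2/3, so the expected value
# per roll is (2/3) * 2.4 - 1 = 0.6 > 0.
COST, PAYOUT = 1.0, 2.4

def bayesian(roll):
    # Position 2: takes the bet, because its expected value is positive.
    return PAYOUT - COST if roll <= 4 else -COST

def abstainer(roll):
    # Position 1: "I don't know" -- never bets, payoff is always zero.
    return 0.0

rolls = [random.randint(1, 6) for _ in range(100_000)]
bayesian_total = sum(bayesian(r) for r in rolls)
abstainer_total = sum(abstainer(r) for r in rolls)
print(bayesian_total > abstainer_total)
```

The bettor loses a third of the time, but over many rolls the positive expected value dominates, while the abstainer's payoff stays at zero.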
“Nonetheless, it would not be correct for Archimedes to conclude that Bell’s theorem is therefore false.”
I think this is a terrible hypothetical to use to illuminate your point, since most of Archimedes’ decision would be based on how much weight is proper to give to the source of information he gets the theorem from. I would say that, for any historically plausible mechanism, he’d certainly be correct in rejecting it.
Rooney, where there isn’t any evidence, then indeed it may be appropriate to suspend judgment over a large hypothesis space, which indeed is not the same as being able to justifiably adopt a random such judgment—anyone who wants to assign more than default probability mass is being irrational.
I concur that Bell’s theorem is a terrible hypothetical, because the whole point is that, in real life, without evidence, there’s absolutely no way for Archimedes to just accidentally hit on Bell’s theorem—in his lifetime he will not reach that part of the search space; anything he tries without evidence will be wrong. It’s exactly like saying, “But what if you did buy the winning lottery ticket? Then it would have high expected utility.”
I don’t think that 50% is a distinguished threshold for probability. Heck, I don’t think 1 in 20 is a distinguished threshold for probability. The point of a binary decision space is that it is small and discrete, not that it is binary.
Eliezer, I think we are misunderstanding each other, possibly merely about terminology.
When you (and pdf) say “reject”, I am taking you to mean “regard as false”. I may be mistaken about that.
I would hope that you don’t mean that, for if so, your claim that “no evidence in favor → almost always false” seems bound to lead to massive errors. For example, you have no evidence in favor of the claim “Rooney has string in his pockets”. But you wouldn’t on such grounds aver that such a claim is almost certainly false. The appropriate response would be to suspend judgment, i.e., to neither reject nor accept. Perhaps I am not understanding what counts as a suitably “complicated” belief.
As for Archimedes meeting Bell’s theorem, perhaps it was too counter-factual an example. However, I wouldn’t say it’s comparable to the “high utility” of the winning lottery ticket: in the case of the lottery, the relevant probabilities are known. By contrast, Archimedes (supposing he were able to understand the theorem) would be ignorant of any evidence to confirm or disconfirm it. Thus I would hope that he would refrain from rejecting it, merely regarding it as a puzzling vision from Zeus, perhaps.
The probability that an arbitrary person has string in their pockets (given that they’re wearing pockets at the time) is knowable, and given no other information we could say that it’s X%. The proper attitude towards the claim “Rooney has string in his pockets” is that it has about an X% chance of being true. (Unless we get other evidence to the contrary—and the fact that someone made the claim might be evidence here.)
Say X is 3%. Then I should say that Rooney very likely has no string in his pockets. Say X were 50%. Then I should say that there’s an even chance Rooney has string in his pockets. In neither case am I withholding judgment. Given what you’ve said, Rooney, I think you might say that the latter would be withholding judgment? Or would you say that neither assertion is justified, and in that case, what does it mean to withhold judgment?
I think there’s a post somewhere last year where Eliezer went over these points.
Pdf, maybe you’re referring to “I Don’t Know”?
Rooney, I think you’re interpreting “reject” as “state with certainty that it is not true” or “behave as if there is definite evidence against it”. Whereas what I mean is that one should bet at odds that are tiny or even infinitesimal when dealing with an evidentially unsupported belief in a very large search space. You have no choice but to deal this way with the vast majority of such beliefs if you want your total probabilities to sum to 1.
By “suspending judgment” I mean neither accepting a claim as true, nor rejecting it as false. Claims about the probability of a given claim being true, helpful as they may be in many cases, are distinct from the claim itself. So, pdf, when you say “The proper attitude towards the claim “Rooney has string in his pockets” is that it has about an X% chance of being true”, where X is unknown, I don’t see how this is materially different from saying “I don’t know if Rooney has string in his pockets”, which is to say that you are (for the moment at least) suspending judgment about whether the claim (call it ‘string’) is true or false. And where X is estimated (on the basis of some hypothetical evidence) to be (say) .4, what is the proper attitude toward ‘string’? Saying “‘string’ has a 40% chance of being true” doesn’t answer the question, it makes a different claim, assigning probability. In such situations, the rational course of action is to suspend judgment about ‘string’. You may of course hold beliefs about the probability of ‘string’ being true and act on those beliefs accordingly (by placing real or hypothetical bets, etc.), but in such cases you’re neither accepting nor rejecting ‘string’.
You have no choice but to bet at some odds. Life is about action, action is about expected utility, and expected utility demands that you assign some subjective weighting to outcomes based on how likely they are. Walking down the street, I offer to bet you a million dollars against one dollar that a stranger has string in their pockets. Do you take the bet? Whether you say yes or no, you’ve just made a statement of probability. The null action is also an action. Refusing to bet is like refusing to allow time to pass.
Nor do I permit probabilities of zero and one. All belief is belief of probability.
I have to bet on every possible claim I (or any sentient entity capable of propositional attitudes in the universe) might entertain as a belief? That is highly implausible as a descriptive claim. Consider the claim “Xinwei has string in his pockets” (where Xinwei is a Chinese male I’ve never met). I have no choice but to assign probability to that claim? And all other claims, from “language is the house of being” to “a proof for Goldbach’s conjecture will be found by an unaided human mind”? If Eliezer offers me a million dollars to bet on someone’s pocket-contents, then, yes, if the utility is right, I will calculate probabilities, meager though my access to evidence may be. But that is not life. The null action may be an action, but lack of belief is not a belief. “I’ve never thought about it” is not equivalent to “it’s false” or “it’s very improbable”.
(Did Neanderthals assign probabilities, or was it a module that emerged at about the same time as the FOXP2 gene? Or did it have to wait until the invention of games of chance in western Europe? Is someone who refuses to bet on anything for religious reasons ipso facto irrational?)
And you don’t take the belief “2 + 2 = 4” as having probability of 1? Nor “2 + 2 = 5” as 0?
I’m off, out of ISP range for a day, so I won’t reply for a bit. Cheers.
Michael Rooney: I don’t think Eliezer is saying that it’s invalid to say “I don’t know.” He’s saying it’s invalid to have as your position “I should not have a position.”
The analogy of betting only means that every action you take will have consequences. For example, the decision not to try to assign a probability to the statement that Xinwei has a string in his pocket will have some butterfly effect. You have recognized this, and have also recognized that you don’t care, and have taken the position that it doesn’t matter. The key here is that, as you admit, you have taken a position.
And now that we know that, we’re going to be more biased. Why’d you have to say that?
Because knowing about biases can also help people. A cornerstone premise of Eliezer’s entire life strategy.
“Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases.”
Well, what about that always taking on the strongest opponent and the strongest arguments business? ;)
Actually, when I see a fellow with a third degree in Philosophy, I leave him for someone who’ll have a similar degree. It isn’t that Sorbonne initiates are hopeless; it’s arguments with ’em that really are (hopeless).
“Things will continue historically as they have” is in some contexts hardly the worst thing you could assume, particularly when the alternative is relying on expert advice that a) is from people who historically have not had skill at predicting things and b) are making predictions reliant on complex ideas that you’re in no position to personally evaluate.
I think I’ve got a pretty good feeling on those 6 predictions and have seen them in action numerous times. Most especially in discussions on religion. Does the following seem about right LWers?
The prior attitude effect: both atheists and theists have strong prior feelings about their respective positions, and many of them tend to evaluate their supportive arguments more favourably, whilst also aggressively attacking counters to their arguments, as predicted by the disconfirmation bias.
The internet, being what it is, provides a ready source of material to confirm one’s bias.
Polarization of attitude will occur as a direct result of the disconfirmation bias. One classic example of this is the tendency in internet forums for one person to state their position and expect another to refute it, thereby polarizing the argument—that the people then “naturally” fall into a disconfirmation bias situation is quite ironic in my opinion. Is the classic debating style of “you’re for and I’m against”, or vice versa, an example of structured disconfirmation bias?
The sophistication effect as described precludes, or perhaps ignores, that one measure of sophistication is to know the topic being discussed from multiple angles. I would hold that a person who uses their knowledge only to counter someone else’s argument is engaging in sophistry, whilst a person who is intellectually honest will argue both cases.
The link to the paper is dead. I found a copy here: Taber & Lodge (2006).
Here’s yet another link, this one not seemingly associated with an individual course:
As far as I can tell, there have been few other studies which demonstrate the sophistication effect. One new study on this is West et al. (forthcoming), “Cognitive Sophistication Does Not Attenuate the Bias Blind Spot.”
Here is the abstract:
Have there been any attempts to measure biases in researchers who study biases?
Unfortunately, the results of all such studies were rejected, due to… well, you know.
No formal ones I know of, although I’m sure Will Newsome would like that. But Kahneman and Tversky did say that every bias they studied, they first detected in themselves.
Not that I know of.
“For a true Bayesian, information would never have negative expected utility”. I’m probably being a technicality bitch, attacking an unintended interpretation, but I can see bland examples of this being false if taken literally: A robot scans people to see how much knowledge they have and harms them more if they have more knowledge, leading to a potential for negative utility given more knowledge.
“For a true Bayesian, information would never have negative expected utility.”
Is this true in general? It seems to me that if a Bayesian has limited information handling ability, then they need to give some thought (not too much!) to the risks of being swamped with information and of spending too many resources on gathering information.
I believe that in this situation “true Bayesian” implies unbounded processing power/ logical omniscience.
I suggest that “true Bayesian” is ambiguous enough (this seems to use it in the sense of a human using the principles of Bayes) that some other phrase—perhaps “unlimited Bayesian”—would be clearer.
The cost of gathering or processing the information may exceed the value of information, but the information itself always has non-negative value: at worst, you do nothing different, and the rest of the time you make a more informed choice.
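That claim can be made concrete with a small worked example (all the states, actions, and utilities below are hypothetical): given a free, perfectly processed observation, the expected utility of the best decision can only go up, because in the worst case you choose the same action you would have chosen anyway.

```python
# Two states of the world and two actions, with hypothetical utilities.
P = {"rain": 0.3, "sun": 0.7}
U = {("umbrella", "rain"): 1.0, ("umbrella", "sun"): 0.2,
     ("none", "rain"): -1.0, ("none", "sun"): 1.0}

ACTIONS = ("umbrella", "none")

def expected_utility(action):
    return sum(P[s] * U[(action, s)] for s in P)

# Without information: commit to the single action that maximizes
# expected utility under the prior.
eu_no_info = max(expected_utility(a) for a in ACTIONS)

# With a free, perfect observation of the state: pick the best action
# separately in each state, then average over the prior.
eu_with_info = sum(P[s] * max(U[(a, s)] for a in ACTIONS) for s in P)

print(eu_no_info, eu_with_info)
assert eu_with_info >= eu_no_info  # information never hurts, ignoring its cost
```

The `max` moves inside the sum when the observation arrives first, and a per-state maximum can never be smaller than the maximum of the averages; that inequality is the whole content of “information has non-negative value” for an ideal agent.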
Yes, in this technical sense.
A true Bayesian has unlimited information handling ability.
I think I see that—because if it didn’t, then not all of its probabilities would be properly updated, so its degrees of belief wouldn’t have the relations implied by probability theory, so it wouldn’t be a true Bayesian. Right?
Yes, one generally ignores the cost of making these computations. One might try to take it into account, but then one is ignoring the cost of doing that computation, etc. Historically, the “Bayesian revolution” needed computers before it could happen.
And, I notice, it has only gone as far as the computers allow. “True Bayesians” also have universal priors, that assign non-zero probability density to every logically possible hypothesis. Real Bayesian statisticians never do this; all those I have read deny that it is possible.
It is impossible, even in principle. The only way to have universal priors over all computable universes is if you have access to a source of hypercomputation, but that would mean the universe isn’t computable, so the truth still isn’t in your prior set.
Is that written up as a theorem anywhere?
That depends on how one wants to formalize it.
Yeah, certainly. The search might be expensive. Or, some of its resources might be devoted to distinguishing the most relevant among the information it receives—diluting its input with irrelevant truths makes it work harder to find what’s really important.
An interpretation of the original statement that I think is true, though, is that in all these cases, receiving the information and getting a little more knowledgeable offsets the negative utility of whatever price was paid for it. The negative utility of the combination of search+learning is always negative because of the searching part of it—if you kept the searching but removed the learning at the end, it’d be even worse.
I’m not exactly sure what “a true Bayesian” refers to, if anything, but it’s possible that being whatever that is precludes having limited information handling ability.
“True Bayesian” is in this case a “True Scotsman”: if some information has negative utility for you, you are not a true Bayesian.
Given the unbelievable difficulty in overcoming cognitive bias (mentioned in this article and many others), is it even realistic to expect that it’s possible? Maybe there are a lucky few who may have that capacity, but what about a majority of even those with above-average intelligence, even after years of work at it? Would most of them not just sort of drill themselves into a deeper hole of irrationality? Even discussing their thoughts with others would be of no help, given the fact that most others will be afflicted with cognitive biases as well. Since this blog is devoted to precisely that effort (i.e. helping people become more rational), I would think that those who write posts here must have reason to believe that it is indeed quite possible, but do you have any examples of such improvement? Have any scientists done any studies on overcoming cognitive bias? The ones I’ve seen only show that being aware of cognitive bias barely removes its effects.
It almost seems like the only way to truly overcome cognitive biases is to do something like design a computer program based on something you know for sure you’re not biased about (e.g. statistics that people formed correct opinions about in various experiments) and then run it for something you are likely to be biased about.
I apologize if there are already a bunch of posts (or even comments!) answering this question; I’ve been on the site like all day and haven’t come across any, so I figured it couldn’t hurt to ask.
My main takeaway from this is that “I know about this bias, therefore I’m more immune to it” is wrong. To be less susceptible to a bias, you need to practice habits that help (like the premortem as a counter to the planning fallacy), not just know a lot of cognitive science.
Critical Review recently devoted an issue to discussions of this 2006 study. Taber & Lodge’s reply to the symposium on their paper is available here.
I think it is a good thing to be humble with yourself, not to argue with yourself. If you are always in self-doubt, you never speak out and learn. If you don’t hear yourself, only how ‘smart’ you sound, you never learn from your mistakes. I try to learn from my—and others’—mistakes, but I think observation of yourself is truly the key to being a rationalist, to remove self-imposed blocks on the path of understanding.
I think it is great that you have such real-life experience, and have the courage to try. Keep living, learning and trying!
(I know this might be off-topic, but this is my first post and I don’t know where to start, so I posted somewhere that inspired me to write.)
On a related note about such despicable people: I just had a few minutes’ talk with a very old friend of mine who matched this description. I just wanted an update on his situation and to see if the boundless rage and annoyance I experienced then still fit. It’s not super relevant, but the exact moment I started writing to him, my hands started shaking, I could feel a pressure on my chest, and my mind started clouding over. It’s probably something that shot into my system, but the exact reason why and what, I don’t know. Do any of you happen to know about this?
Also, there’s the added danger that someone otherwise smart may lure people in to the dark side of things, and make them believe things like 9/11 conspiracies. It also taught me to trust my gut feeling sometimes instead of what seems to be factual evidence, and not to have belief in belief. This is one of the most embarrassing things I’ve ever experienced.
You don’t believe in free will, correct?
I fear that the most common context in which people learn about cognitive biases is also the most detrimental. That is, they’re arguing about something on the internet and someone, within the discussion, links them an article or tries to lecture them about how they really need to learn more about cognitive biases/heuristics/logical fallacies etc.. What I believe commonly happens then is that people realise that these things can be weapons; tools to get the satisfaction of “winning”. I really wish everyone would just learn this in some neutral context (school maybe?) but most people learn this with an intent, and I think it colours their use of rationality in general, perhaps indefinitely. :/ But maybe I’m just being too pessimistic.
Your last sentence is funny, considering I immediately thought: ‘If we taught them in school and plenty of bad effects remained, which seems well within the realm of possibility, you might be wishing people learned about fallacies in a context that made them seem more important.’
THIS is the proper use of humility. I hope I’m less of a fanatic and more tempered in my beliefs in the future.
It seems to me like this is as intended. Most people who talk about biases and fallacies do so in the veil of them being wrong and bad, instead of mere tools, more or less sophisticated and consciously knowable. I am skeptical about what good argument and reasoning entails and whether any such single instance exists.
For a salient example, look no further than the politics board of 4chan. Stickied for the last five years is a list of 24 logical fallacies. Unfortunately, this doesn’t seem to dissuade the conspiratorial ramblings, but rather, lends an appearance of sophistication to their arguments for anyone unfamiliar with the subject. It’s how you get otherwise curious and bright 15 year olds parroting anti-semitic rhetoric.
I find on the internet that people treat logical fallacies like moves on a Chessboard. Meanwhile, IRL, they’re sort of guidelines you might use to treat something more carefully. An example I often give is that in court we try to establish the type of person the witness is—because we believe so strongly that Ad Hominem is a totally legitimate matter.
But Reddit or 4chan politics and religion is like, “I can reframe your argument into a form of [Fallacy number 13], check and mate!”
It’s obviously a total misunderstanding of what a logical fallacy even is. They treat fallacies like rules of logical inference, which they definitely are not (a genuine inference error would disprove what someone said, but outside of exotic circumstances such a mistake would be trivial to spot).