What’s a Bias?
The availability heuristic is a cognitive shortcut humans use to reach conclusions; where this shortcut reliably causes inaccurate conclusions, we can say that an availability bias is at work. Scope insensitivity is another example of a cognitive bias.
“Cognitive biases” are those obstacles to truth which are produced, not by the cost of information, nor by limited computing power, but by the shape of our own mental machinery. For example, our mental processes might be evolutionarily adapted to specifically believe some things that aren’t true, so that we could win political arguments in a tribal context. Or the mental machinery might be adapted not to particularly care whether something is true, such as when we feel the urge to believe what others believe to get along socially. Or the bias may be a side-effect of a useful reasoning heuristic. The availability heuristic is not itself a bias, but it gives rise to them; the machinery uses an algorithm (give things more evidential weight if they come to mind more readily) that does some good cognitive work but also produces systematic errors.
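To make the “algorithm” framing concrete, here is a toy simulation (mine, not the essay’s; every name and number is invented for illustration) of how weighting evidence by ease of recall can do real cognitive work yet err systematically once some events are more memorable than others:

```python
# Toy sketch: the availability algorithm "weight things by how readily
# they come to mind" tracks reality when memorability is uniform, but
# errs systematically when some events are more vivid than others.
import random

# (true_rate, vividness) -- vividness stands in for how memorable or
# widely reported each event is. All numbers are invented.
events = {
    "heart disease": (1000, 1),    # common, rarely vivid
    "shark attack":  (1,    500),  # rare, highly vivid
}

def availability_estimate(events, n_recalled=100_000):
    """Estimate each event's share by sampling 'memories', where
    recall probability is proportional to true_rate * vividness."""
    names = list(events)
    weights = [rate * vivid for rate, vivid in events.values()]
    recalled = random.choices(names, weights=weights, k=n_recalled)
    return {name: recalled.count(name) / n_recalled for name in names}

estimates = availability_estimate(events)
total = sum(rate for rate, _ in events.values())
for name, (rate, _) in events.items():
    print(f"{name}: true share {rate / total:.3f}, "
          f"availability estimate {estimates[name]:.3f}")
```

When the vividness weights are uniform, the estimates converge on the true shares; the error appears only when recall is skewed. That is exactly the sense in which the heuristic “does some good cognitive work but also produces systematic errors.”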
Our brains are doing something wrong, and after a lot of experimentation and/or heavy thinking, someone identifies the problem verbally and concretely; then we call it a “(cognitive) bias.” Not to be confused with the colloquial “that person is biased,” which just means “that person has a skewed or prejudiced attitude toward something.”
In cognitive science, “biases” are distinguished from errors that arise from cognitive content, such as learned false beliefs. These we call “mistakes” rather than “biases,” and they are much easier to correct, once we’ve noticed them for ourselves. (Though the source of the mistake, or the source of the source of the mistake, may ultimately be some bias.)
“Biases” are also distinguished from errors stemming from damage to an individual human brain, or from absorbed cultural mores; biases arise from machinery that is humanly universal.
Plato wasn’t “biased” because he was ignorant of General Relativity—he had no way to gather that information; his ignorance did not arise from the shape of his mental machinery. But if Plato believed that philosophers would make better kings because he himself was a philosopher—and this belief, in turn, arose because of a universal adaptive political instinct for self-promotion, and not because Plato’s daddy told him that everyone has a moral duty to promote their own profession to governorship, or because Plato sniffed too much glue as a kid—then that was a bias, whether Plato was ever warned of it or not.
While I am not averse (as you can see) to discussing definitions, I don’t want to suggest that the project of better wielding our own minds rests on a particular choice of terminology. If the term “cognitive bias” turns out to be unhelpful, we should just drop it.
We don’t start out with a moral duty to “reduce bias,” simply because biases are bad and evil and Just Not Done. This is the sort of thinking someone might end up with if they acquired a deontological duty of “rationality” by social osmosis, which leads to people trying to execute techniques without appreciating the reason for them. (Which is bad and evil and Just Not Done, according to Surely You’re Joking, Mr. Feynman, which I read as a kid.) A bias is an obstacle to our goal of obtaining truth, and thus in our way.
We are here to pursue the great human quest for truth: for we have desperate need of the knowledge, and besides, we’re curious. To this end let us strive to overcome whatever obstacles lie in our way, whether we call them “biases” or not.
We seem to mostly agree about what we are about here, but it seems damn hard to very precisely define exactly what. I guess I’ll focus on coming up with concrete examples of bias and concrete mechanisms for avoiding it, and set aside for now the difficult task of defining it.
“it seems damn hard to very precisely define exactly what”
Robin, I don’t see why a definition offered in terms of the origin of a phenomenon (“the shape of our mental machinery”) should be any less a definition (or any less precise) than one that directly describes the characteristics of the phenomenon. Why isn’t the former sufficient?
Pdf, I didn’t mean to imply that Eliezer’s approach was inferior to the approach I was taking, just that all the approaches run into problems when you try to become more precise.
Is there a well-defined difference between the shape of one’s mental machinery and its limited computing power?
Oh, how curious. I’ve been reading on here a while, and I think I had previously misunderstood the adopted meaning of the word “bias”… using the term as it’s socially used, that is to say, a prior reason for holding a certain belief over another due to convenience. A judge might be biased because one side is paying him; a jury member might be biased because their sister is the one on trial. Are these “mistakes”? Or do they fall under a certain type of cognitive bias that is similar among all humans? *ponder*
I would call a judge who is favoring a side because they’re paying him “biased”, and not “mistaken” or any such thing. But it’s not a cognitive bias. The word “bias” has legitimate meanings other than what EY is saying, so it would have been clearer if the article used the term “cognitive bias” at least at the outset.
I would argue a corrupt judge only seems biased, since biased people, in my understanding, are not aware of their underlying preferences. That might also be the common ground with a cognitive bias: you are never directly aware of its presence and can only infer it by analysis.
You are confusing two definitions for the same word. The judge is biased by one definition of “bias”, but not by the other definition as used in cognitive or statistical bias.
Biases seem like they could be understood in terms of logical validity. Even if you reason solely from sound premises, you could still adopt an invalid argument (i.e., a fallacy: a conclusion that does not actually follow from the premises, no matter how true those premises are). I suggest the definition that biases are whatever cause people to adopt invalid arguments.
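To make “does not follow no matter how true the premises are” concrete, a minimal brute-force validity check (my sketch, not the commenter’s) looks like this:

```python
# An argument is invalid iff some truth assignment makes every premise
# true while the conclusion is false. Brute-force check over all
# assignments to the propositional variables.
from itertools import product

def is_valid(premises, conclusion, variables):
    """Return True iff every truth assignment that makes all premises
    true also makes the conclusion true."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample: true premises, false conclusion
    return True

# Affirming the consequent: from "P implies Q" and "Q", conclude "P".
premises = [lambda e: (not e["P"]) or e["Q"],  # P -> Q
            lambda e: e["Q"]]
conclusion = lambda e: e["P"]
print(is_valid(premises, conclusion, ["P", "Q"]))  # False -- a fallacy

# Modus ponens, by contrast, is valid:
print(is_valid([lambda e: (not e["P"]) or e["Q"], lambda e: e["P"]],
               lambda e: e["Q"], ["P", "Q"]))  # True
```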
False or incomplete/insufficient data can cause the adoption of invalid arguments.
Contrast this with:
I.e., on average, it doesn’t matter whether people try to avoid hindsight bias: “prior outcome knowledge” corresponds almost literally to the conclusion “the prior outcome should’ve been deemed very likely.”
To avoid it, you literally have to INSIST on NOT knowing what actually happened, if you aim to accurately represent the decision-making process as it actually unfolded.
Or, if you do have the knowledge, you may have to force yourself to assign an extra 1:10 odds factor against the actual outcome (or worse) in order to compensate.
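As a sketch of that compensation (the 1:10 factor is the commenter’s illustrative number, not an established calibration):

```python
def discount_hindsight(p_hindsight, odds_factor=10.0):
    """Shift a probability by a fixed odds factor against the known
    outcome: probability -> odds, divide by the factor, -> probability."""
    odds = p_hindsight / (1.0 - p_hindsight)
    corrected = odds / odds_factor
    return corrected / (1.0 + corrected)

# In hindsight the outcome feels like it "should've been deemed" 90%
# likely; a 1:10 odds penalty pulls that back to roughly 47%.
print(discount_hindsight(0.9))  # ~0.4737
```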
This definition of bias seems problematic. If a putative bias is caused by absorbed cultural mores, then supposedly it is not a bias. But that causal chain can be tricky to track down; we go on thinking something is a ‘bias’ until we find the black swan culture where the bias doesn’t exist, and then realize that the problem was not inherent in our mental machinery. But is that distinction even worth making, if we don’t know what caused the bias?
I suspect the distinction is worth making because even if we don’t know what caused the bias, we can use the label of a bias “not inherent in our mental machinery” as a marker for future study of its cause.
For example, I read in a contemporary undergraduate social psychology textbook that experimental results found that a common bias affected subjects from Western cultures more strongly than it affected subjects from more interdependent cultures such as China and Japan.
[Obviously, my example is useless. I just don’t have access to that book at the current moment. I will update this comment with more detail when I’m able.]
The Simple Truth link should be http://yudkowsky.net/rational/the-simple-truth/
Thanks, fixed!
Typo: “and besides, were curious.” ~ s/were/we’re/.
I wonder when a venerable old article reaches the “any remaining bugs become features” stage.
There’s still “things that arent true”, instead of “things that aren’t true”, in the second paragraph.