Right, the existence of the anecdote is the evidence, not the occurrence of the events that it alleges.
You can find people offering anecdotes on any side of a debate, and I see no reason why the people who are right would cite anecdotes more.
It is true that, if a hypothesis has reached the point of being seriously debated, then there are probably anecdotes being offered in support of it (assuming we’re talking about the kind of hypothesis that would ever have an anecdote offered in support of it). Therefore, learning of the existence of anecdotes probably won’t move much probability around among the hypotheses being seriously debated.
However, hypothesis space is vast. Many hypotheses have never even been brought up for debate, and the overwhelming majority should never come to our attention at all.
In particular, hypothesis space contains hypotheses for which no anecdote has ever been offered. If you learned that a particular hypothesis H were true, you would increase your probability that H was among those hypotheses that are supported by anecdotes. (Right? The alternative is that which hypotheses get anecdotes is determined by mechanisms that have absolutely no correlation, or even negative correlation, with the truth.) Therefore, the existence of an anecdote is evidence for the hypothesis that the anecdote supports.
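The structure of this argument can be sketched with Bayes’ rule. All numbers below are invented purely for illustration; the point is only that if true hypotheses are even slightly more likely than false ones to attract anecdotes, then “an anecdote exists” must shift some probability toward H:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule for a binary hypothesis H and evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# E = "an anecdote supporting H exists". If anecdotes track truth even
# slightly (0.95 vs 0.90 here, both numbers made up), E is weak evidence:
p = posterior(prior=0.5, p_e_given_h=0.95, p_e_given_not_h=0.90)
print(round(p, 4))  # 0.5135, nudged up from 0.5
```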
A typical situation is that there’s a contentious issue, and some anecdotes reach your attention that support one of the competing hypotheses.
You have three ways to respond:
1. You can under-update your belief in the hypothesis, ignoring the anecdotes completely.
2. You can update by precisely the measure warranted by the existence of these anecdotes and the fact that they reached you.
3. You can over-update by adding too much credence to the hypothesis.
In almost every situation you’re likely to encounter, the real danger is 3. Well-known biases are at work pulling you towards 3. These biases are often known to work even when you’re aware of them and trying to counteract them. Moreover, the harm from reaching 3 is typically far greater than the harm from reaching 1. This is because the correct amount of added credence in 2 is very small, particularly since you already know that the competing hypotheses on this issue are all likely to have anecdotes going for them. In real-life situations, you don’t usually hear anecdotes supporting an incredibly unlikely-seeming hypothesis which you’d otherwise have expected to generate no anecdotes at all. So forgoing that tiny amount of credence is not nearly as bad as choosing 3 and updating, typically, by a large amount.
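To make the gap between 2 and 3 concrete, here is a sketch in odds form, with all likelihood ratios invented: when both competing hypotheses almost certainly have anecdotes going for them, hearing one carries a likelihood ratio barely above 1.

```python
def posterior_from_lr(prior, lr):
    """Update a prior through a likelihood ratio P(E|H) / P(E|not-H)."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

prior = 0.5
# Both sides of a contentious issue almost certainly have anecdotes,
# so one anecdote carries a likelihood ratio of roughly 0.99/0.98:
correct = posterior_from_lr(prior, lr=0.99 / 0.98)
# Over-updating (option 3) treats the anecdote like strong evidence:
overshoot = posterior_from_lr(prior, lr=10.0)
print(round(correct, 4), round(overshoot, 4))  # 0.5025 0.9091
```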
The saying “The plural of anecdotes is not data” exists to steer you away from 3. It works to counteract the very strong biases pulling you towards 3. Its danger, you are saying, is that it pulls you towards 1 rather than the correct 2. That may be pedantically correct, but it is a very poor reason to criticize the saying. Even with its help, you’re almost always very likely to over-update; all the saying is doing is lessening the blow.
Perhaps this is an example of “things Bayesianism has taught you” that are harming your epistemic rationality?
A similar thing I have noticed is the disdain some enlightened Bayesians show towards “correlation does not imply causation”. It is counter-productive.
These biases are often known to work even when you’re aware of them and trying to counteract them.
This is the problem. I know, as an epistemic matter of fact, that anecdotes are evidence. I could try to ignore this knowledge, with the goal of counteracting the biases to which you refer. That is, I could try to suppress the Bayesian update or to undo it after it has happened. I could try to push my credence back to where it was “manually”. However, as you point out, counteracting biases in this way doesn’t work.
Far better, it seems to me, to habituate myself to the fact that updates can be minuscule. Credence is quantitative, not qualitative, and so can change by arbitrarily small amounts. “Update Yourself Incrementally”. Granting that someone has evidence for their claims can be an arbitrarily small concession. Updating on the evidence doesn’t need to move my credences by even a subjectively discernible amount. Nonetheless, I am obliged to acknowledge that the anecdote would move the credences of an ideal Bayesian agent by some nonzero amount.
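As arithmetic, a minuscule update looks like this (the likelihood ratio is invented; the point is only that the posterior differs from the prior by a real but subjectively indiscernible amount):

```python
def posterior_from_lr(prior, lr):
    """Update a prior through a likelihood ratio P(E|H) / P(E|not-H)."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

before = 0.300
after = posterior_from_lr(before, lr=1.01)  # a very weak piece of evidence
print(round(after, 4))  # 0.3021: a nonzero update, but one you'd never notice
```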
...updates can be minuscule … Updating on the evidence doesn’t need to move my credences by even a subjectively discernible amount. Nonetheless, I am obliged to acknowledge that the anecdote would move the credences of an ideal Bayesian agent by some nonzero amount.
So, let’s talk about measurement and detection.
Presumably you don’t calculate your believed probabilities to the n-th significant digit, so I don’t understand the idea of a “minuscule” update. If it has no discernible consequences, then as far as I am concerned it did not happen.
Let’s take an example. I believe that my probability of being struck by lightning is low enough that I don’t worry about it and don’t take any special precautions during thunderstorms. Now here is an anecdote relating how a guy was struck by lightning while sitting in his office inside a building. You’re saying I should update my beliefs, but what does that mean?
I have no numeric estimate of P(me being struck by lightning), so there’s no number I can adjust by 0.0000001. I am not going to do anything differently. My estimate of my chances of being electrocuted by Zeus’ bolt is still “very, very low”. So where is that “minuscule update” you think I should make, and how do I detect it?
P.S. If you want to update on each piece of evidence, surely by now you must fully believe that product X is certain to enlarge your penis?
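Both of these points can be made concrete by working in log-odds, where updates add. The ads in the P.S. carry a likelihood ratio of essentially 1, because spammers send them whether or not the product works, so no number of them accumulates into belief (all numbers below are invented):

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def prob(lo):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-lo))

# Spam is sent whether or not the product works, so each ad's
# likelihood ratio P(ad | works) / P(ad | doesn't work) is about 1:
lo = log_odds(1e-6)           # very low prior that the product works
for _ in range(1_000_000):    # a million ads later...
    lo += math.log(1.0)       # ...each adds log(1) = 0 to the log-odds
print(prob(lo))               # still about one in a million
```

So there is no contradiction between “every piece of evidence demands an update” and “the ads never convince you”: the warranted update per ad is exactly the zero (or near-zero) that its likelihood ratio dictates.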
A typical situation is that there’s a contentious issue, and some anecdotes reach your attention that support one of the competing hypotheses.
It is interesting that you think of this as typical, or at least typical enough to exclude non-contentious issues. I avoid discussions of politics and other contentious issues, and when I think of people offering anecdotes, I usually think of them in support of neutral questions, like the efficacy of understudied nutritional supplements. If someone tells you, “I ate dinner at Joe’s Crab Shack and I had intense gastrointestinal distress,” I don’t think it’s necessarily justified to ignore it on the basis that it’s anecdotal. If three more friends all report the same thing, you should rightly become very suspicious of the sanitation at Joe’s Crab Shack. I think the fact that you are talking about contentious issues specifically is an important and interesting clarification.
Thanks for that comment! Eliezer often says people should be more sensitive to evidence, but an awful lot of real-life evidence is in fact much weaker, noisier, and easier to misinterpret than it seems. And it’s not enough to just keep in mind a bunch of Bayesian mantras—you need to be aware of survivor bias, publication bias, Simpson’s paradox and many other non-obvious traps, otherwise you silently go wrong and don’t even know it. In a world where most published medical results fail to replicate, how much should we trust our own conclusions?
Would it be more honest to recommend that people never update at all? But then everyone would stick to their favorite theories forever… Maybe an even better recommendation would be to watch out for motivated cognition, and to try to be more skeptical of all theories, including your favorites.
The alternative is that which hypotheses get anecdotes is determined by mechanisms that have absolutely no correlation, or even negative correlation, with the truth.
Doesn’t look implausible to me. Here’s an alternative hypothesis: the existence of anecdotes is a function of which beliefs are least supported by strong data because such beliefs need anecdotes for justification.
In general, I think anecdotes are far too filtered and too biased as an information source to be considered serious evidence. In particular, there’s a real danger of treating a lot of biased anecdotes as conclusive data, and that danger, it seems to me, outweighs the minuscule usefulness of anecdotes.
We may agree. It depends on what work the word “serious” is doing in the quoted sentence.
In this context, “serious” = “I’m willing to pay attention to it”.
I would raise a hypothesis to consideration because someone was arguing for it, but I don’t think anecdotes are good evidence, in the sense that I would have similar confidence in a hypothesis supported by an anecdote and in one that is flatly stated with no justification. The evidence to raise it to consideration comes from the fact that someone took the time to advocate it.
This is more of a heuristic than a rule, because there are anecdotes that are strong evidence (“I ran experiments on this last year and they didn’t fit”), but when dealing with murkier issues, they don’t count for much.
The evidence to raise it to consideration comes from the fact that someone took the time to advocate it, not the anecdote.
Yes, it may be that the mere fact that a hypothesis is advocated screens off whether that hypothesis is also supported by an anecdote. But I suspect that the existence of anecdotes still moves a little probability mass around, even among just those hypotheses that are being advocated.
I mean, if someone advocated for a hypothesis and couldn’t even offer an anecdote in support of it, that would be pretty deadly to their credibility. So, unless I am certain that every advocated hypothesis has supporting anecdotes (which I am not), I must concede that anecdotes are evidence, however weak, over and above mere advocacy.
Here’s a situation where an anecdote should reduce our confidence in a belief:
1. A person’s beliefs are usually well-supported.
2. When he offers supporting evidence, he usually offers the strongest evidence he knows about.
If this person were to offer an anecdote, it should reduce our confidence in his proposition, because it makes it unlikely he knows of stronger supporting evidence.
I don’t know how applicable this is to actual people.
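The scenario above can be made concrete with a toy model (every number invented): when the proposition is true, the advocate usually has stronger evidence to offer and so rarely falls back on an anecdote, while when it is false, an anecdote is often all they have.

```python
def posterior(prior, p_anecdote_given_true, p_anecdote_given_false):
    """Bayes' rule, where the evidence is 'the advocate offered an anecdote'."""
    num = p_anecdote_given_true * prior
    return num / (num + p_anecdote_given_false * (1 - prior))

# The advocate offers the strongest evidence they have, so offering a
# mere anecdote suggests they lack anything stronger:
p = posterior(prior=0.6, p_anecdote_given_true=0.2, p_anecdote_given_false=0.9)
print(round(p, 2))  # 0.25: hearing the anecdote lowered our credence from 0.60
```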
I don’t think this is necessarily valid, because people also know that anecdotes can be highly persuasive. So for many people, if you have an anecdote it will make sense to say so, since most people argue not to reach the truth but to persuade.
I agree that it is at least hypothetically possible that the offering of an anecdote should reduce our credence in what the anecdote claims. For example, if you told me that you once met a powerful demon who works to stop anyone from ever telling anecdotes about him (regardless of whether the anecdotes are true or false), then I would decrease my credence in the existence of such a demon.