Fallacies as weak Bayesian evidence

Abstract: Exactly what is fallacious about a claim like “ghosts exist because no one has proved that they do not”? And why does a claim with the same logical structure, such as “this drug is safe because we have no evidence that it is not”, seem more plausible? Looking at various fallacies – the argument from ignorance, circular arguments, and the slippery slope argument – we find that they can be analyzed in Bayesian terms, and that people are generally more convinced by arguments which provide stronger Bayesian evidence. Arguments which provide only weak evidence, though often evidence nonetheless, are considered fallacies.

As a Nefarious Scientist, Dr. Zany is often teleconferencing with other Nefarious Scientists. Negotiations about things such as “when we have taken over the world, who’s the lucky bastard who gets to rule over Antarctica” will often turn tense and stressful. Dr. Zany knows that stress makes it harder to evaluate arguments logically. To make things easier, he would like to build a software tool that would monitor the conversations and automatically flag any fallacious claims as such. That way, if he’s too stressed out to realize that an argument offered by one of his colleagues is actually wrong, the software will work as a backup and warn him.

Unfortunately, it’s not easy to define what counts as a fallacy. At first, Dr. Zany tried looking at the logical form of various claims. An early example that he considered was “ghosts exist because no one has proved that they do not”, which felt clearly wrong, an instance of the argument from ignorance. But when he programmed his software to warn him about sentences like that, it ended up flagging the claim “this drug is safe, because we have no evidence that it is not”. Hmm. That claim felt somewhat weak, but it didn’t feel obviously wrong the way that the ghost argument did. Yet they shared the same structure. What was the difference?

The argument from ignorance

Related posts: Absence of Evidence is Evidence of Absence, But Somebody Would Have Noticed!

One kind of argument from ignorance is based on negative evidence. It assumes that if the hypothesis of interest were true, then experiments made to test it would show positive results. If a drug were toxic, tests of toxicity would reveal this. Whether or not this argument is valid depends on whether the tests would indeed show positive results, and with what probability.

With some thought and help from AS-01, Dr. Zany identified three intuitions about this kind of reasoning.

1. Prior beliefs influence whether or not the argument is accepted.

A) I’ve often drunk alcohol, and never gotten drunk. Therefore alcohol doesn’t cause intoxication.

B) I’ve often taken Acme Flu Medicine, and never gotten any side effects. Therefore Acme Flu Medicine doesn’t cause any side effects.

Both of these are examples of the argument from ignorance, and both seem fallacious. But B seems much more compelling than A, since we know that alcohol causes intoxication, while we also know that not all kinds of medicine have side effects.

2. The more evidence found that is compatible with the conclusions of these arguments, the more acceptable they seem to be.

C) Acme Flu Medicine is not toxic because no toxic effects were observed in 50 tests.

D) Acme Flu Medicine is not toxic because no toxic effects were observed in 1 test.

C seems more compelling than D.

3. Negative arguments are acceptable, but they are generally less acceptable than positive arguments.

E) Acme Flu Medicine is toxic because a toxic effect was observed (positive argument)

F) Acme Flu Medicine is not toxic because no toxic effect was observed (negative argument, the argument from ignorance)

Argument E seems more convincing than argument F, but F is somewhat convincing as well.

“Aha!” Dr. Zany exclaims. “These three intuitions share a common origin! They bear the signatures of Bayonet reasoning!”

“Bayesian reasoning”, AS-01 politely corrects.

“Yes, Bayesian! But, hmm. Exactly how are they Bayesian?”


Note: To keep this post as accessible as possible, I attempt to explain the underlying math without actually using any math. If you would rather see the math, please see the paper referenced at the end of the post.

As a brief reminder, the essence of Bayes’ theorem is that we have different theories about the world, and the extent to which we believe in these theories varies. Each theory also has implications about what we expect to observe in the world (or at least it should have such implications). The extent to which an observation makes us update our beliefs depends on how likely our theories say the observation should be. Dr. Zany has a strong belief that his plans will basically always succeed, and this theory says that his plans are very unlikely to fail. Therefore, when they do fail, he should revise his belief in the “I will always succeed” theory down. (So far he hasn’t made that update, though.) If this isn’t completely intuitive to you, I recommend komponisto’s awesome visualization.
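For readers who do want a small taste of the numbers, here is a minimal sketch in Python. All of the probabilities are invented purely for illustration; the point is only the shape of the update.

```python
# A single Bayesian update, with made-up numbers purely for illustration.
# Hypothesis H: "Dr. Zany's plans always succeed."
prior_H = 0.9                 # Dr. Zany's (over)confident prior belief in H
p_fail_given_H = 0.05         # H says a failed plan is very unlikely
p_fail_given_not_H = 0.5      # the alternative makes failure unsurprising

# Observation: the latest plan failed.
joint_H = prior_H * p_fail_given_H
joint_not_H = (1 - prior_H) * p_fail_given_not_H
posterior_H = joint_H / (joint_H + joint_not_H)
print(posterior_H)            # ~0.47: the failure should roughly halve belief in H
```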

Now let’s look at each of the above intuitions in terms of Bayes’ theorem.

1. Prior beliefs influence whether or not the argument is accepted. This is pretty straightforward – the expression “prior beliefs” is even there in the description of the intuition. Suppose that we hear the argument, “I’ve often drunk alcohol, and never gotten drunk. Therefore alcohol doesn’t cause intoxication”. The fact that this person has never gotten drunk from alcohol (or at least claims that he hasn’t) is evidence for alcohol not causing any intoxication, but we still have a very strong prior belief that alcohol causes intoxication. Updating on this evidence, we find that our beliefs in both the theory “this person is mistaken or lying” and the theory “alcohol doesn’t cause intoxication” have become stronger. Due to its higher prior probability, “this person is mistaken or lying” seems the more plausible of the two, so we do not consider this a persuasive argument for alcohol not being intoxicating.
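To see the role of the prior in miniature, here is a toy calculation. The two hypotheses and all the numbers are invented, and for simplicity they are treated as the only alternatives:

```python
# Toy numbers: why the alcohol argument fails to persuade.
# H1: "this person is mistaken or lying"; H2: "alcohol doesn't intoxicate".
prior = {"H1": 0.999, "H2": 0.001}
# How likely is the report "I've never gotten drunk" under each hypothesis?
likelihood = {"H1": 0.5, "H2": 1.0}   # the report fits H2 somewhat better

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in unnormalized}
print(posterior)   # H2 roughly doubles, to ~0.002, yet remains very implausible
```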

2. The more evidence found that is compatible with the conclusions of these arguments, the more acceptable they seem to be. This too is a relatively straightforward consequence of Bayes’ theorem. In terms of belief updating, we might encounter 50 pieces of evidence one at a time, and make 50 small updates; or we might encounter all 50 pieces of evidence at once, and perform one large update. The end result should be the same: more evidence pointing in the same direction adds up to a larger total update.
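As a sketch, with invented numbers: fifty independent negative toxicity tests, whether processed one by one or all at once, move the belief by the same total amount, and far more than a single test does.

```python
# Invented numbers: H = "the drug is toxic".
p_neg_given_toxic = 0.7    # a single test misses real toxicity 70% of the time
p_neg_given_safe = 0.99    # a safe drug almost always tests negative

def update(prior_toxic, n_negative_tests):
    # n sequential updates collapse into one: the likelihoods just multiply.
    num = prior_toxic * p_neg_given_toxic ** n_negative_tests
    den = num + (1 - prior_toxic) * p_neg_given_safe ** n_negative_tests
    return num / den

print(update(0.5, 1))    # ~0.41 after one negative test
print(update(0.5, 50))   # ~3e-8 after fifty: a much larger total update
```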

3. Negative arguments are acceptable, but they are generally less acceptable than positive arguments. This one needs a little explaining, and here we need the concepts of sensitivity and specificity. A test for something (say, a disease) is sensitive if it always gives a positive result when the disease is present, and specific if it only gives a positive result when the disease is present. There is a trade-off between these two. For instance, an airport metal detector is designed to alert its operators if a person carries dangerous metal items. It is sensitive, because nearly any metal item will trigger an alarm, but it is not very specific, because even non-dangerous items will trigger an alarm.

A test which is both extremely sensitive and extremely non-specific is not very useful, since it will give more false alarms than true ones. An easy way of creating an extremely sensitive “test for disease” is to simply always say that the patient has the disease. This test has 100% sensitivity (it always gives a positive result, so it always gives a positive result when the disease is present, as well), but its specificity is zero: it never gives a negative result, even when the disease is absent. A positive result from it tells us nothing; the probability that the patient actually has the disease is just the prevalence rate of the disease. The test provides no information, and therefore isn’t a test at all.

How is this related to our intuition about negative and positive arguments? In short, our environment is such that like the airport metal detector, negative evidence often has high sensitivity but low specificity. We intuitively expect that a test for toxicity might not always reveal a drug to be toxic, but if it does, then the drug really is toxic. A lack of a “toxic” result is what we would expect if the drug weren’t toxic, but it’s also what we would expect in a lot of cases where the drug was toxic. Thus, negative evidence is evidence, but it’s usually much weaker than positive evidence.
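In likelihood-ratio terms (the numbers below are again purely illustrative): a positive result from a low-sensitivity, high-specificity toxicity test shifts the odds a great deal, while a negative result shifts them only slightly.

```python
# Illustrative toxicity test: it catches real toxicity only 30% of the time,
# but it rarely raises a false alarm.
sensitivity = 0.3        # P(positive result | drug is toxic)
false_positive = 0.02    # P(positive result | drug is not toxic)

# Likelihood ratios: how much each result multiplies the odds of "toxic".
lr_positive = sensitivity / false_positive              # 15.0
lr_negative = (1 - sensitivity) / (1 - false_positive)  # ~0.71

print(lr_positive, lr_negative)
# A positive result multiplies the odds of toxicity by 15; a negative result
# multiplies them by only ~0.7. Both are evidence, but the negative result
# is much weaker.
```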

“So, umm, okay”, Dr. Zany says, after AS-01 has reminded him of the way Bayes’ theorem works, and helped him figure out how his intuitions about the fallacies have Bayes-structure. “But let’s not lose track of what we were doing, which is to say, building a fallacy-detector. How can we use this to say whether a given claim is fallacious?”

“What this suggests is that we judge a claim to be a fallacy if it’s only weak Bayesian evidence”, AS-01 replies. “A claim like ‘an unreliable test of toxicity didn’t reveal this drug to be toxic, so it must be safe’ is such weak evidence that we consider it fallacious. Also, if we have a very strong prior belief against something, and a claim doesn’t shift this prior enough, then we might call it a ‘fallacy’ to believe in the thing on the basis of that claim. That was the case with the ‘I’ve had alcohol many times and never gotten drunk, so alcohol must not be intoxicating’ claim.”

“But that’s not what I was after at all! In that case I can’t program a simple fallacy-detector: I’d have to implement a full-blown artificial intelligence that could understand the conversation, analyze the prior probabilities of various claims, and judge the weight of evidence. And even if I did that, it wouldn’t help me figure out what claims were fallacies, because all of my AIs only want to eradicate the color blue from the universe! Hmm. But maybe the argument from ignorance was a special case, and other fallacies are more accommodating. How about circular claims? Those must surely be fallacious?”

Circularity

A. God exists because the Bible says so, and the Bible is the word of God.

B. Electrons exist because we can see 3-cm tracks in a cloud chamber, and 3-cm tracks in cloud chambers are signatures of electrons.

“Okay, we have two circular claims here”, AS-01 notes. “Their logical structure seems to be the same, but we judge one of them to be a fallacy, while the other seems to be okay.”

“I have a bad feeling about this”, Dr. Zany says.

The argument for the fallaciousness of the above two claims is that they presume the conclusion in the premises. That is, it is presumed that the Bible is the word of God, but that is only possible if God actually exists. Likewise, if electrons don’t exist, then whatever we see in the cloud chamber isn’t the signature of electrons. Thus, in order to believe the conclusion, we need to already believe it as an implicit premise.

But from a Bayesian perspective, beliefs aren’t binary propositions: we can tentatively believe in a hypothesis, such as the existence of God or electrons. In addition to this tentative hypothesis, we have sense data about the existence of the Bible and the 3-cm tracks. This data we take as certain. We also have a second tentative belief, the ambiguous interpretation of this sense data as the word of God or the signature of electrons. The interpretation is ambiguous because the Bible might or might not be the word of God, and the tracks might or might not have been left by electrons. So we have three components in our inference: the evidence (the Bible, the 3-cm tracks), the ambiguous interpretation (the Bible is the word of God, the 3-cm tracks are signatures of electrons), and the hypothesis (God exists, electrons exist).

We can conjecture a causal connection between these three components. Let’s suppose that God exists (the hypothesis). This then causes the Bible to be his word (the ambiguous interpretation), which in turn gives rise to the actual document in front of us (the sense data). Likewise, if electrons exist (hypothesis), then this can give rise to the predicted signature effects (ambiguous interpretation), which become manifest as what we actually see in the cloud chamber (sense data).

The “circular” claim reverses the direction of the inference. We have sense data, which we would expect to see if the ambiguous interpretation were correct, and we would expect the interpretation to be correct if the hypothesis were true. Therefore it’s more likely that the hypothesis is true. Is this allowed? Yes! Take for example the inference “if there are dark clouds in the sky, then it will rain, in which case the grass will be wet”. The reverse inference, “the grass is wet, therefore it has rained, therefore there have been dark clouds in the sky”, is valid. However, the inference “the grass is wet, therefore the sprinkler has been on, therefore there is a sprinkler near this grass” may also be a valid inference. The grass being wet is evidence both for the presence of dark clouds and for a sprinkler having been on. Which hypothesis do we judge to be more likely? That depends on our prior beliefs about the hypotheses, as well as the strengths of the causal links (e.g. “if there are dark clouds, how likely is it that it rains?”, and vice versa).

Thus, the “circular” arguments given above are actually valid Bayesian inferences. But there is a reason that we consider A to be a fallacy, while B sounds valid. Since the interpretation (the Bible is the word of God, 3-cm tracks are signatures of electrons) logically requires the hypothesis, the probability of the interpretation cannot be higher than the probability of the hypothesis. If we assign the existence of God a very low prior belief, then we must also assign a very low prior belief to the interpretation of the Bible as the word of God. In that case, seeing the Bible will not do much to elevate our belief in the claim that God exists, if there are more likely hypotheses to be found.
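A toy version of the three-link chain (hypothesis, interpretation, sense data) makes the point numerically. Everything below is invented for illustration; the only structural assumption is that the interpretation can only be true if the hypothesis is.

```python
# Toy chain: hypothesis H -> interpretation I -> sense data D.
def posterior_hypothesis(prior_h, p_i_given_h, p_d_given_i, p_d_given_not_i):
    # The interpretation logically requires the hypothesis, so P(I | not-H) = 0.
    p_i = prior_h * p_i_given_h
    # Probability of the sense data, through both branches of I.
    p_d = p_i * p_d_given_i + (1 - p_i) * p_d_given_not_i
    # Given the chain structure, D depends on H only through I.
    p_d_given_h = p_i_given_h * p_d_given_i + (1 - p_i_given_h) * p_d_given_not_i
    return prior_h * p_d_given_h / p_d

# Electrons: a reasonable prior, so the 3-cm tracks lift belief to ~0.89.
print(posterior_hypothesis(0.5, 0.9, 0.9, 0.1))
# God: a very low prior, so seeing the Bible only lifts belief to ~0.008.
print(posterior_hypothesis(0.001, 0.9, 0.9, 0.1))
```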

“So you’re saying that circular reasoning, too, is something that we consider fallacious if our prior belief in the hypothesis is low enough? And recognizing these kinds of fallacies is AI-complete, too?” Dr. Zany asks.

“Yup!”, AS-01 replies cheerfully, glad that for once, Dr. Zany gets it without a need to explain things fifteen times.

“Damn it. But… what about slippery slope arguments? Dr. Cagliostro claims that if we let minor supervillains stake claims for territory, then we would end up letting henchmen stake claims for territory as well, and eventually we’d extend that right even to people who didn’t participate in our plans! Surely that must be a fallacy?”

Slippery slope

Slippery slope arguments are often treated as fallacies, but they might not be. There are cases where the stipulated “slope” is what would actually (or likely) happen. For instance, take a claim saying “if we allow microbes to be patented, then that will lead to higher life-forms being patented as well”:

There are cases in law, for example, in which a legal precedent has historically facilitated subsequent legal change. Lode (1999, pp. 511–512) cites the example originally identified by Kimbrell (1993) whereby there is good reason to believe that the issuing of a patent on a transgenic mouse by the U.S. Patent and Trademark Office in the year 1988 is the result of a slippery slope set in motion with the U.S. Supreme court’s decision Diamond v. Chakrabarty. This latter decision allowed a patent for an oil-eating microbe, and the subsequent granting of a patent for the mouse would have been unthinkable without the chain started by it. (Hahn & Oaksford, 2007)

So again, our prior beliefs, here ones about the plausibility of the slope, influence whether or not the argument is accepted. But there is also another component that was missing from the previous fallacies. Because slippery slope arguments are about actions, not just beliefs, the principle of expected utility becomes relevant. A slippery slope argument will be stronger (relative to its alternative) if it invokes a more undesirable potential consequence, if that consequence is more probable, and if the expected utility of the alternatives is smaller.

For instance, suppose for the sake of argument that both increased heroin consumption and increased reggae music consumption are equally likely consequences of cannabis legalization:

A. Legalizing cannabis will lead to an increase in heroin consumption.

B. Legalizing cannabis will lead to an increase in listening to reggae music.

Yet A would feel like a stronger argument against the legalization of cannabis than argument B, since increased heroin consumption feels like it would have lower utility. On the other hand, if the outcome is shared, then the stronger argument seems to be the one where the causal link seems more probable:

C. Legalizing Internet access would lead to an increase in the number of World of Warcraft addicts.

D. Legalizing video rental stores would lead to an increase in the number of World of Warcraft addicts.
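As a rough sketch, one can score a slippery slope argument by the expected disutility it points to: the probability of the feared consequence times how bad that consequence would be. All of the numbers below are invented:

```python
# Invented numbers: the strength of a slippery slope argument modeled as the
# expected disutility of the feared consequence.
def argument_strength(p_consequence, utility_of_consequence):
    return p_consequence * -utility_of_consequence

# A vs. B: same (stipulated) probability, very different utilities.
print(argument_strength(0.1, -1000))  # increased heroin consumption: 100.0
print(argument_strength(0.1, -1))     # increased reggae listening:   0.1

# C vs. D: same outcome, different causal plausibility.
print(argument_strength(0.3, -50))    # Internet access -> WoW addicts: 15.0
print(argument_strength(0.01, -50))   # video rentals -> WoW addicts:   0.5
```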

“Gah. So a strong slippery slope argument is one where the outcome is both very undesirable and highly probable? So the AI would not only need to evaluate probabilities, but expected utilities as well?”

“That’s right!”

“Screw it, this isn’t going anywhere. And here I thought that this would be a productive day.”

“They can’t all be, but we tried our best. Would you like a tuna sandwich as consolation?”

“Yes, please.”


Because this post is already unreasonably long, the above discussion only covers the theoretical reasons for thinking about fallacies as weak or strong Bayesian arguments. For math, experimental studies, and two other subtypes of the argument from ignorance (besides negative evidence), see:

Hahn, U., & Oaksford, M. (2007). The Rationality of Informal Argumentation: A Bayesian Approach to Reasoning Fallacies. Psychological Review, 114(3), 704–732.