Fallacies as weak Bayesian evidence

Abstract: Exactly what is fallacious about a claim like “ghosts exist because no one has proved that they do not”? And why does a claim with the same logical structure, such as “this drug is safe because we have no evidence that it is not”, seem more plausible? Looking at various fallacies – the argument from ignorance, circular arguments, and the slippery slope argument – we find that they can be analyzed in Bayesian terms, and that people are generally more convinced by arguments which provide greater Bayesian evidence. Arguments which provide only weak evidence, though often evidence nonetheless, are considered fallacies.

As a Nefarious Scientist, Dr. Zany is often teleconferencing with other Nefarious Scientists. Negotiations about things such as “when we have taken over the world, who’s the lucky bastard who gets to rule over Antarctica” will often turn tense and stressful. Dr. Zany knows that stress makes it harder to evaluate arguments logically. To make things easier, he would like to build a software tool that would monitor the conversations and automatically flag any fallacious claims as such. That way, if he’s too stressed out to realize that an argument offered by one of his colleagues is actually wrong, the software will work as backup to warn him.

Unfortunately, it’s not easy to define what counts as a fallacy. At first, Dr. Zany tried looking at the logical form of various claims. An early example that he considered was “ghosts exist because no one has proved that they do not”, which felt clearly wrong, an instance of the argument from ignorance. But when he programmed his software to warn him about sentences like that, it ended up flagging the claim “this drug is safe, because we have no evidence that it is not”. Hmm. That claim felt somewhat weak, but it didn’t feel obviously wrong the way that the ghost argument did. Yet they shared the same structure. What was the difference?

The argument from ignorance

Related posts: Absence of Evidence is Evidence of Absence, But Somebody Would Have Noticed!

One kind of argument from ignorance is based on negative evidence. It assumes that if the hypothesis of interest were true, then experiments made to test it would show positive results. If a drug were toxic, tests of toxicity would reveal this. Whether or not this argument is valid depends on whether the tests would indeed show positive results, and with what probability.

With some thought and help from AS-01, Dr. Zany identified three intuitions about this kind of reasoning.

1. Prior beliefs influence whether or not the argument is accepted.

A) I’ve often drunk alcohol, and never gotten drunk. Therefore alcohol doesn’t cause intoxication.

B) I’ve often taken Acme Flu Medicine, and never gotten any side effects. Therefore Acme Flu Medicine doesn’t cause any side effects.

Both of these are examples of the argument from ignorance, and both seem fallacious. But B seems much more compelling than A, since we know that alcohol causes intoxication, while we also know that not all kinds of medicine have side effects.

2. The more evidence found that is compatible with the conclusions of these arguments, the more acceptable they seem to be.

C) Acme Flu Medicine is not toxic because no toxic effects were observed in 50 tests.

D) Acme Flu Medicine is not toxic because no toxic effects were observed in 1 test.

C seems more compelling than D.

3. Negative arguments are acceptable, but they are generally less acceptable than positive arguments.

E) Acme Flu Medicine is toxic because a toxic effect was observed (positive argument).

F) Acme Flu Medicine is not toxic because no toxic effect was observed (negative argument, the argument from ignorance).

Argument E seems more convincing than argument F, but F is somewhat convincing as well.

“Aha!” Dr. Zany exclaims. “These three intuitions share a common origin! They bear the signatures of Bayonet reasoning!”

“Bayesian reasoning”, AS-01 politely corrects.

“Yes, Bayesian! But, hmm. Exactly how are they Bayesian?”


Note: To keep this post as accessible as possible, I attempt to explain the underlying math without actually using any math. If you would rather see the math, please see the paper referenced at the end of the post.

As a brief reminder, the essence of Bayes’ theorem is that we have different theories about the world, and the extent to which we believe in these theories varies. Each theory also has implications about what we should expect to observe in the world (or at least it should have such implications). The extent to which an observation makes us update our beliefs depends on how likely our theories say the observation should be. Dr. Zany has a strong belief that his plans will basically always succeed, and this theory says that his plans are very unlikely to fail. Therefore, when they do fail, he should revise his belief in the “I will always succeed” theory down. (So far he hasn’t made that update, though.) If this isn’t completely intuitive to you, I recommend komponisto’s awesome visualization.
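
For readers who do want to see the single rule behind all of this, the odds form of Bayes’ theorem says that the posterior odds of a hypothesis H, after seeing evidence E, equal the prior odds multiplied by the likelihood ratio of the evidence:

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \times \frac{P(H)}{P(\neg H)}
\]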

Now let’s look at each of the above intuitions in terms of Bayes’ theorem.

1. Prior beliefs influence whether or not the argument is accepted. This is pretty straightforward – the expression “prior beliefs” is even there in the description of the intuition. Suppose that we hear the argument, “I’ve often drunk alcohol, and never gotten drunk. Therefore alcohol doesn’t cause intoxication”. The fact that this person has never gotten drunk from alcohol (or at least claims that he hasn’t) is evidence for alcohol not causing any intoxication, but we still have a very strong prior belief that alcohol causes intoxication. Updating on this evidence, we find that our beliefs in both the theory “this person is mistaken or lying” and the theory “alcohol doesn’t cause intoxication” have become stronger. Due to its higher prior probability, “this person is mistaken or lying” seems the more plausible of the two, so we do not consider this a persuasive argument for alcohol not being intoxicating.
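
As a minimal sketch of this intuition, with all the numbers made up purely for illustration, we can compare how far the same “I never noticed the effect” report moves a hypothesis that starts with a tiny prior versus one with a moderate prior:

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """P(hypothesis | report) for a binary hypothesis, via Bayes' theorem."""
    numerator = p_report_if_true * prior
    evidence = numerator + p_report_if_false * (1.0 - prior)
    return numerator / evidence

# Hypothesis A: "alcohol does not cause intoxication" -- tiny prior.
# Hypothesis B: "Acme Flu Medicine has no side effects" -- moderate prior.
# In both cases the report "I never noticed the effect" is likely if the
# hypothesis is true (0.9), and unlikely but possible if it is false (0.1:
# the speaker might be mistaken, lying, or unusually resistant).
print(posterior(prior=0.001, p_report_if_true=0.9, p_report_if_false=0.1))  # ~0.009
print(posterior(prior=0.30,  p_report_if_true=0.9, p_report_if_false=0.1))  # ~0.79
```

The same piece of evidence raises both hypotheses, but only the one that did not start out deeply implausible ends up anywhere near credible.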

2. The more evidence found that is compatible with the conclusions of these arguments, the more acceptable they seem to be. This too is a relatively straightforward consequence of Bayes’ theorem. In terms of belief updating, we might encounter 50 pieces of evidence, one at a time, and make 50 small updates. Or we might encounter all of the 50 pieces of evidence at once, and perform one large update. The end result should be the same. More evidence leads to larger updates.
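
A small sketch of the same point, again with invented numbers: fifty sequential updates on clean test results and one batched update land on the same posterior, and both move the belief far more than a single test does.

```python
def update(prior, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Likelihood ratio of a single clean test result (made-up values):
# P(no toxic effect seen | not toxic) / P(no toxic effect seen | toxic)
lr_single = 0.99 / 0.80

belief = 0.5                          # prior belief that the drug is not toxic
for _ in range(50):                   # 50 small updates...
    belief = update(belief, lr_single)
print(belief)                         # ~0.99998

print(update(0.5, lr_single ** 50))   # ...match one big update: ~0.99998
print(update(0.5, lr_single))         # a single test barely moves it: ~0.55
```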

3. Negative arguments are acceptable, but they are generally less acceptable than positive arguments. This one needs a little explaining, and here we need the concepts of sensitivity and specificity. A test for something (say, a disease) is sensitive if it always gives a positive result when the disease is present, and specific if it only gives a positive result when the disease is present. There’s a trade-off between these two. For instance, an airport metal detector is designed to alert its operators if a person carries dangerous metal items. It is sensitive, because nearly any metal item will trigger an alarm, but it is not very specific, because even non-dangerous items will trigger an alarm.

A test which is both extremely sensitive and extremely non-specific is not very useful, since it will give more false alarms than true ones. An easy way of creating an extremely sensitive “test for disease” is to simply always say that the patient has the disease. This test has 100% sensitivity (it always gives a positive result, so it always gives a positive result when the disease is present, as well), but its specificity is zero, since it never gives a negative result when the disease is absent; the chance that any given positive result actually indicates the disease is just the prevalence rate of the disease. It provides no information, and isn’t therefore a test at all.

How is this related to our intuition about negative and positive arguments? In short, our environment is such that, like the airport metal detector, negative evidence often has high sensitivity but low specificity. We intuitively expect that a test for toxicity might not always reveal a drug to be toxic, but if it does, then the drug really is toxic. A lack of a “toxic” result is what we would expect if the drug weren’t toxic, but it’s also what we would expect in a lot of cases where the drug was toxic. Thus, negative evidence is evidence, but it’s usually much weaker than positive evidence.
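
To make the asymmetry concrete, here is a sketch (with made-up numbers) that scores the positive and negative arguments from intuition 3 by their likelihood ratios: a ratio far above 1 is strong evidence, a ratio only slightly above 1 is weak but real evidence.

```python
sensitivity = 0.60   # P(test flags toxicity | drug is toxic) -- tests can miss toxicity
specificity = 0.98   # P(test stays silent | drug is not toxic) -- few false alarms

# E) Positive argument: "a toxic effect was observed, so the drug is toxic."
lr_positive = sensitivity / (1.0 - specificity)
print(lr_positive)   # 30.0 -- strong evidence of toxicity

# F) Negative argument (argument from ignorance):
#    "no toxic effect was observed, so the drug is not toxic."
lr_negative = specificity / (1.0 - sensitivity)
print(lr_negative)   # 2.45 -- weak evidence of non-toxicity, but evidence nonetheless
```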

“So, umm, okay”, Dr. Zany says, after AS-01 has reminded him of the way Bayes’ theorem works, and helped him figure out how his intuitions about the fallacies have Bayes-structure. “But let’s not lose track of what we were doing, which is to say, building a fallacy-detector. How can we use this to say whether a given claim is fallacious?”

“What this suggests is that we judge a claim to be a fallacy if it’s only weak Bayesian evidence”, AS-01 replies. “A claim like ‘an unreliable test of toxicity didn’t reveal this drug to be toxic, so it must be safe’ is such weak evidence that we consider it fallacious. Also, if we have a very strong prior belief against something, and a claim doesn’t shift this prior enough, then we might call it a ‘fallacy’ to believe in the thing on the basis of that claim. That was the case with the ‘I’ve had alcohol many times and never gotten drunk, so alcohol must not be intoxicating’ claim.”

“But that’s not what I was after at all! In that case I can’t program a simple fallacy-detector: I’d have to implement a full-blown artificial intelligence that could understand the conversation, analyze the prior probabilities of various claims, and judge the weight of evidence. And even if I did that, it wouldn’t help me figure out what claims were fallacies, because all of my AIs only want to eradicate the color blue from the universe! Hmm. But maybe the argument from ignorance was a special case, and other fallacies are more accommodating. How about circular claims? Those must surely be fallacious?”

Circularity

A. God exists because the Bible says so, and the Bible is the word of God.

B. Electrons exist because we can see 3-cm tracks in a cloud chamber, and 3-cm tracks in cloud chambers are signatures of electrons.

“Okay, we have two circular claims here”, AS-01 notes. “Their logical structure seems to be the same, but we judge one of them to be a fallacy, while the other seems to be okay.”

“I have a bad feeling about this”, Dr. Zany says.

The argument for the fallaciousness of the above two claims is that they presume the conclusion in the premises. That is, it is presumed that the Bible is the word of God, but that is only possible if God actually exists. Likewise, if electrons don’t exist, then whatever we see in the cloud chamber isn’t the signature of electrons. Thus, in order to believe the conclusion, we need to already believe it as an implicit premise.

But from a Bayesian perspective, beliefs aren’t binary propositions: we can tentatively believe in a hypothesis, such as the existence of God or electrons. In addition to this tentative hypothesis, we have sense data about the existence of the Bible and the 3-cm tracks. This data we take as certain. We also have a second tentative belief, the ambiguous interpretation of this sense data as the word of God or the signature of electrons. The sense data is ambiguous in the sense that it might or might not be the word of God. So we have three components in our inference: the evidence (Bible, 3-cm tracks), the ambiguous interpretation (the Bible is the word of God, the 3-cm tracks are signatures of electrons), and the hypothesis (God exists, electrons exist).

We can conjecture a causal connection between these three components. Let’s suppose that God exists (the hypothesis). His existence then gives rise to the Bible as his word (ambiguous interpretation), which in turn gives rise to the actual document in front of us (sense data). Likewise, if electrons exist (hypothesis), then this can give rise to the predicted signature effects (ambiguous interpretation), which become manifest as what we actually see in the cloud chamber (sense data).

The “circular” claim reverses the direction of the inference. We have sense data, which we would expect to see if the ambiguous interpretation was correct, and we would expect the interpretation to be correct if the hypothesis were true. Therefore it’s more likely that the hypothesis is true. Is this allowed? Yes! Take for example the inference “if there are dark clouds in the sky, then it will rain, in which case the grass will be wet”. The reverse inference, “the grass is wet, therefore it has rained, therefore there have been dark clouds in the sky”, is valid. However, the inference “the grass is wet, therefore the sprinkler has been on, therefore there is a sprinkler near this grass” may also be a valid inference. The grass being wet is evidence for both the presence of dark clouds and for a sprinkler having been on. Which hypothesis do we judge to be more likely? That depends on our prior beliefs about the hypotheses, as well as the strengths of the causal links (e.g. “if there are dark clouds, how likely is it that it rains?”, and vice versa).

Thus, the “circular” arguments given above are actually valid Bayesian inferences. But there is a reason that we consider A to be a fallacy, while B sounds valid. Since the interpretation (the Bible is the word of God, 3-cm tracks are signatures of electrons) logically requires the hypothesis, the probability of the interpretation cannot be higher than the probability of the hypothesis. If we assign the existence of God a very low prior belief, then we must also assign a very low prior belief to the interpretation of the Bible as the word of God. In that case, seeing the Bible will not do much to elevate our belief in the claim that God exists, if there are more likely hypotheses to be found.
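
As a rough sketch of this, again with invented numbers: the two “circular” arguments share the same structure, but they differ in the prior we give the hypothesis and in how easily the sense data could arise without it.

```python
def posterior(prior_h, p_data_if_h, p_data_if_not_h):
    """P(hypothesis | sense data) for a binary hypothesis."""
    numerator = p_data_if_h * prior_h
    return numerator / (numerator + p_data_if_not_h * (1.0 - prior_h))

# A) "God exists because the Bible says so."
#    Low prior, and the Bible's existence is also compatible with many
#    alternative hypotheses that don't involve God.
print(posterior(prior_h=0.01, p_data_if_h=0.9, p_data_if_not_h=0.5))   # ~0.02

# B) "Electrons exist because we see 3-cm tracks in a cloud chamber."
#    Higher prior (the rest of physics), and the tracks are hard to explain
#    without electrons or something very much like them.
print(posterior(prior_h=0.5, p_data_if_h=0.9, p_data_if_not_h=0.05))   # ~0.95
```

In both cases the data raises the probability of the hypothesis; the difference is only in how far it can raise it.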

“So you’re saying that circular reasoning, too, is something that we consider fallacious if our prior belief in the hypothesis is low enough? And recognizing these kinds of fallacies is AI-complete, too?” Dr. Zany asks.

“Yup!”, AS-01 replies cheerfully, glad that for once, Dr. Zany gets it without the need to explain things fifteen times.

“Damn it. But… what about slippery slope arguments? Dr. Cagliostro claims that if we let minor supervillains stake claims for territory, then we would end up letting henchmen stake claims for territory as well, and eventually we’d give the right to people who didn’t even participate in our plans! Surely that must be a fallacy?”

Slippery slope

Slippery slope arguments are often treated as fallacies, but they might not be. There are cases where the stipulated “slope” is what would actually (or likely) happen. For instance, take a claim saying “if we allow microbes to be patented, then that will lead to higher life-forms being patented as well”:

There are cases in law, for example, in which a legal precedent has historically facilitated subsequent legal change. Lode (1999, pp. 511–512) cites the example originally identified by Kimbrell (1993) whereby there is good reason to believe that the issuing of a patent on a transgenic mouse by the U.S. Patent and Trademark Office in the year 1988 is the result of a slippery slope set in motion with the U.S. Supreme Court’s decision Diamond v. Chakrabarty. This latter decision allowed a patent for an oil-eating microbe, and the subsequent granting of a patent for the mouse would have been unthinkable without the chain started by it. (Hahn & Oaksford, 2007)

So again, our prior beliefs, here ones about the plausibility of the slope, influence whether or not the argument is accepted. But there is also another component that was missing from the previous fallacies. Because slippery slope arguments are about actions, not just beliefs, the principle of expected utility becomes relevant. A slippery slope argument will be stronger (relative to its alternative) if it invokes a more undesirable potential consequence, if that consequence is more probable, and if the expected utility of the alternatives is smaller.

For instance, suppose for the sake of argument that both increased heroin consumption and increased reggae music consumption are equally likely consequences of cannabis legalization:

A. Legalizing cannabis will lead to an increase in heroin consumption.

B. Legalizing cannabis will lead to an increase in listening to reggae music.

Yet A would feel like a stronger argument against the legalization of cannabis than argument B, since increased heroin consumption feels like it would have lower utility. On the other hand, if the outcome is shared, then the stronger argument seems to be the one where the causal link seems more probable:

C. Legalizing Internet access would lead to an increase in the number of World of Warcraft addicts.

D. Legalizing video rental stores would lead to an increase in the number of World of Warcraft addicts.
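
A crude sketch of how these comparisons could be scored, with entirely made-up probabilities and harms: weigh each feared outcome by how likely it is and how bad it would be, and compare arguments by their expected disutility.

```python
def expected_disutility(p_outcome, harm):
    """Score a slippery slope argument by P(outcome | action) * harm(outcome)."""
    return p_outcome * harm

# A vs. B: same stipulated probability, very different harm.
print(expected_disutility(p_outcome=0.10, harm=100))   # A: heroin consumption   -> 10.0
print(expected_disutility(p_outcome=0.10, harm=1))     # B: reggae listening     -> 0.1

# C vs. D: same harm, very different probability of the causal link.
print(expected_disutility(p_outcome=0.30, harm=20))    # C: Internet access      -> 6.0
print(expected_disutility(p_outcome=0.001, harm=20))   # D: video rental stores  -> 0.02
```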

“Gah. So a strong slippery slope argument is one where both the disutility of the outcome and the outcome’s probability are high? So the AI would not only need to evaluate probabilities, but expected utilities as well?”

“That’s right!”

“Screw it, this isn’t going anywhere. And here I thought that this would be a productive day.”

“They can’t all be, but we tried our best. Would you like a tuna sandwich as consolation?”

“Yes, please.”


Because this post is already unreasonably long, the above discussion only covers the theoretical reasons for thinking about fallacies as weak or strong Bayesian arguments. For math, experimental studies, and two other subtypes of the argument from ignorance (besides negative evidence), see:

Hahn, U. & Oaksford, M. (2007). The Rationality of Informal Argumentation: A Bayesian Approach to Reasoning Fallacies. Psychological Review, 114(3), 704–732.