The Argument from Witness Testimony

(Note: This is essentially a rehash/summarization of Jordan Sobel’s “Lotteries and Miracles”; you may prefer the original.)

George Mavrodes offered an interesting analogy. Scenario 1: Suppose you read a newspaper report claiming that a particular individual (say, Henry Plushbottom of Topeka, Kansas) has won a very large lottery. Before reading the newspaper, you would have given quite low odds that Henry in particular had won. The report flips your beliefs quite drastically: afterward, you would give quite high odds that he had. Scenario 2: You have read various claims that a particular individual (Jesus of Nazareth) arose from the dead. Before hearing those claims, you would have given quite low odds of anything so unlikely happening. However (since you are reading LessWrong), you presumably do not now give quite high odds that Jesus arose from the dead.

What is it about the second scenario which makes it different from the first?

Let’s model Scenario 1 as a simple Bayes net. There are two nodes, one representing whether Henry wins and one representing whether he is reported to win, and one arrow, from the first to the second.

[Figure: a two-node Bayes net.]

What are the parameters of the conditional probability tables? Before any information came in, it seemed very unlikely that Henry was the winner—perhaps he had a one in a million chance. Given that Henry did win, what is the chance that he would be reported to have won? Pretty likely—newspapers do err, but it’s reasonable to believe that 9 times out of 10, they get the name of the lottery winner correct. Now suppose that Henry didn’t win. What is the chance that he would be reported to have won by mistake? There’s nothing in particular to single him out from the other non-winners—being misreported is just as unlikely as winning, maybe even more unlikely.

So we have (using w to abbreviate “Henry Wins” and r to abbreviate “Henry is reported to have won”):

• P(w) = 10^-6 - Henry has a one-in-a-million chance of winning.

• P(!w) = 1 − 10^-6

• P(r|w) = 0.9 - Reporters are pretty careful about names in this kind of story.

• P(!r|w) = 0.1

• P(r|!w) = 10^-7 - Not everyone plays, so there are even more people “competing” to be misreported, and Henry is supposed to be undistinguished.

• P(!r|!w) = 1 − 10^-7

With a simple computation, we can verify that this model replicates the phenomenon in question. After reading the report, one’s estimated probability should be:

• P(w|r) = (by Bayes’ Theorem)

• P(w) * P(r|w) / P(r) = (expand P(r) by cases)

• P(w) * P(r|w) / ( P(r|w) * P(w) + P(r|!w) * P(!w) ) = (substitute the numerical values)

• 10^-6 * 0.9 / ( 0.9 * 10^-6 + 10^-7 * (1 − 10^-6) ) = (approximately)

• 0.9
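
To make the arithmetic easy to check, here is a minimal sketch of the same calculation in Python (the function name is mine, for illustration only):

```python
# Posterior probability that Henry won, given the report, by Bayes' theorem.
def posterior_win_given_report(p_w, p_r_given_w, p_r_given_not_w):
    # Expand P(r) by cases, exactly as in the derivation above.
    p_r = p_r_given_w * p_w + p_r_given_not_w * (1 - p_w)
    return p_r_given_w * p_w / p_r

print(posterior_win_given_report(p_w=1e-6, p_r_given_w=0.9, p_r_given_not_w=1e-7))
# prints ~0.900000009
```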

Of course, Scenario 2 could be modeled with two nodes and one arrow in exactly the same way. If it is rational to come to a different conclusion, then the parameters must be different. How would you justify setting the parameters differently in the second case?

Somewhat relatedly, Douglas Walton has an “argumentation scheme” for the Argument from Witness Testimony. An argumentation scheme is (roughly) a useful pattern of “presumptive” (that is, uncertain) reasoning. In general, the argumentation/defeasible reasoning/non-monotonic logic community seems strangely isolated from the Bayesian inference community, though nominally both are associated with artificial intelligence. Despite how odd each approach seems from the other side, there is a possibility of cross-fertilization here. Here are the so-called “premises” of the scheme (from Argumentation Schemes, p. 310):

• Position to Know Premise: Witness W is in a position to know whether A is true or not.

• Truth Telling Premise: Witness W is telling the truth.

• Statement Premise: Witness W states that A is true.

• Conclusion: A may be plausibly taken to be true.

Here are the so-called “critical questions” associated with the argument from witness testimony:

1. Is what the witness said internally consistent?

2. Is what the witness said consistent with the known facts of the case (based on evidence apart from what the witness testified to)?

3. Is what the witness said consistent with what other witnesses have (independently) testified to?

4. Is there some kind of bias that can be attributed to the account given by the witness?

5. How plausible is the statement A asserted by the witness?

As I understand it, argumentation schemes are something like inference rules for plausible reasoning, but the actual premises (including both the scheme’s “premises” and its “critical questions”) are treated differently. I have not yet been able to unpack Walton’s description of how they ought to be treated differently into the language of single-agent reasoning. Argumentation theory is usually phrased in terms of, and targeted at, dialog between differing agents (for example, legal advocates), but it can certainly be applied to single-agent reasoning; for example, Pollock’s OSCAR is based on defeasible reasoning.
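
As a toy illustration of that asymmetry, here is one way the scheme might be encoded for a single agent. Treating each critical question as a potential defeater that retracts the conclusion is my own guess at a reading, not Walton’s account:

```python
from dataclasses import dataclass

@dataclass
class ArgumentationScheme:
    premises: list[str]
    critical_questions: list[str]
    conclusion: str

    def presume(self, premises_granted: bool, open_defeaters: list[str]) -> str:
        # Naive policy (an assumption of this sketch): presume the conclusion
        # only while all premises are granted and no critical question has
        # been answered unfavorably.
        if premises_granted and not open_defeaters:
            return f"Presume: {self.conclusion}"
        return f"Withhold: {self.conclusion}"

witness_testimony = ArgumentationScheme(
    premises=[
        "Witness W is in a position to know whether A is true or not.",
        "Witness W is telling the truth.",
        "Witness W states that A is true.",
    ],
    critical_questions=[
        "Internally consistent?",
        "Consistent with known facts?",
        "Consistent with other witnesses?",
        "Attributable bias?",
        "How plausible is A itself?",
    ],
    conclusion="A may be plausibly taken to be true.",
)

print(witness_testimony.presume(premises_granted=True, open_defeaters=[]))
print(witness_testimony.presume(premises_granted=True,
                                open_defeaters=["How plausible is A itself?"]))
```

On this encoding, premises carry the presumption forward while critical questions can only retract it, which is one candidate reading of why the two are treated differently.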

(Spoiler)

Jordan Sobel’s answer is that the key to the sudden flip is P(r|!w), the probability of observing a false report. In Scenario 1, a false report of Henry’s having won is even less likely than Henry’s winning. Given that humans are known to self-deceive regarding things that are miraculous and wonderful, you should not carry that parameter through the analogy unchanged. Small increases in P(r|!w) lead to large reductions in P(w|r). For example, if P(r|!w) were equal to P(w), the posterior probability that Henry won would drop below 0.5; if P(r|!w) were one in a hundred thousand, it would drop below 0.1.
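
To put numbers on that sensitivity, here is a small sweep over P(r|!w), using the same model as above:

```python
# How the posterior P(w|r) falls as P(r|!w), the false-report probability,
# rises; P(w) = 10^-6 and P(r|w) = 0.9 as before.
def posterior(p_w, p_r_w, p_r_not_w):
    return p_r_w * p_w / (p_r_w * p_w + p_r_not_w * (1 - p_w))

for p_r_not_w in (1e-7, 1e-6, 1e-5):
    print(f"P(r|!w) = {p_r_not_w:.0e}  ->  P(w|r) = {posterior(1e-6, 0.9, p_r_not_w):.3f}")
# P(r|!w) = 1e-07  ->  P(w|r) = 0.900
# P(r|!w) = 1e-06  ->  P(w|r) = 0.474
# P(r|!w) = 1e-05  ->  P(w|r) = 0.083
```

The middle line is Sobel’s first example (P(r|!w) equal to P(w)), and the last is his second (one in a hundred thousand).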