Yes, Virginia, You Can Be 99.99% (Or More!) Certain That 53 Is Prime

TL;DR: though you can’t be 100% certain of anything, a lot of the people who go around talking about how you can’t be 100% certain of anything would be surprised at how often you can be 99.99% certain. Indeed, we’re often justified in assigning odds ratios well in excess of a million to one to certain claims. Realizing this is important for avoiding certain rookie Bayesian mistakes, as well as for thinking about existential risk.


53 is prime. I’m very confident of this. 99.99% confident, at the very least. How can I be so confident? Because of the following argument:

If a number is composite, it must have a prime factor no greater than its square root. Because 53 is less than 64, sqrt(53) is less than 8. So, to find out if 53 is prime or not, we only need to check if it can be divided by primes less than 8 (i.e. 2, 3, 5, and 7). 53’s last digit is odd, so it’s not divisible by 2. 53’s last digit is neither 0 nor 5, so it’s not divisible by 5. The nearest multiples of 3 are 51 (= 17 × 3) and 54, so 53 is not divisible by 3. The nearest multiples of 7 are 49 (= 7^2) and 56, so 53 is not divisible by 7. Therefore, 53 is prime.

(My confidence in this argument is helped by the fact that I was good at math in high school. Your confidence in your math abilities may vary.)
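
If you’d rather not trust my mental arithmetic, the same trial-division check is easy to hand to a computer. Here’s a minimal sketch in Python (the is_prime helper is my own illustrative name, not anything from the post):

    def is_prime(n):
        """Trial division: test divisors up to sqrt(n)."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    print(is_prime(53))  # True: 53 has no prime factor among 2, 3, 5, 7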

I mention this because in his post Infinite Certainty, Eliezer writes:

Suppose you say that you’re 99.99% confident that 2 + 2 = 4. Then you have just asserted that you could make 10,000 independent statements, in which you repose equal confidence, and be wrong, on average, around once. Maybe for 2 + 2 = 4 this extraordinary degree of confidence would be possible: “2 + 2 = 4” is extremely simple, and mathematical as well as empirical, and widely believed socially (not with passionate affirmation but just quietly taken for granted). So maybe you really could get up to 99.99% confidence on this one.

I don’t think you could get up to 99.99% confidence for assertions like “53 is a prime number”. Yes, it seems likely, but by the time you tried to set up protocols that would let you assert 10,000 independent statements of this sort—that is, not just a set of statements about prime numbers, but a new protocol each time—you would fail more than once. Peter de Blanc has an amusing anecdote on this point, which he is welcome to retell in the comments.

I think this argument that you can’t be 99.99% certain that 53 is prime is fallacious. Stuart Armstrong explains why in the comments (first quoting the post, then replying):

If you say 99.9999% confidence, you’re implying that you could make one million equally fraught statements, one after the other, and be wrong, on average, about once.

Excellent post overall, but that part seems weakest—we suffer from an unavailability problem, in that we can’t just think up random statements with those properties. When I said I agreed 99.9999% with “P(P is never equal to 1)” it doesn’t mean that I feel I could produce such a list—just that I have a very high belief that such a list could exist.

In other words, it’s true that:

  • If a well-calibrated person claims to be 99.99% certain of 10,000 independent statements, on average one of those statements should be false.

But it doesn’t follow that:

  • If a well-calibrated person claims to be 99.99% certain of one statement, they should be able to produce 9,999 other independent statements of equal certainty and be wrong on average once.

If it’s not clear why this doesn’t follow, consider the anecdote Eliezer references in the quote above, which runs as follows: A gets B to agree that if 7 is not prime, B will give A $100. B then makes the same agreement for 11, 13, 17, 19, and 23. Then A asks about 27. B refuses. What about 29? Sure. 31? Yes. 33? No. 37? Yes. 39? No. 41? Yes. 43? Yes. 47? Yes. 49? No. 51? Yes. And suddenly B is $100 poorer.
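
As a quick sanity check on the anecdote (this snippet is mine, not part of the original story), trial division over the numbers in the exchange shows exactly where B slipped:

    # Numbers from the exchange; B declined every composite except the last one.
    for n in [7, 11, 13, 17, 19, 23, 27, 29, 31, 33, 37, 39, 41, 43, 47, 49, 51]:
        prime = all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))
        print(n, "prime" if prime else "composite")  # 51 is composite: 51 = 3 * 17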

Now, B claimed to be 100% sure about 7 being prime, which I don’t agree with. But that’s not what lost him his $100. What lost him his $100 is that, as the game went on, he got careless. If he’d taken the time to ask himself, “am I really as sure about 51 as I am about 7?” he’d probably have realized the answer was “no.” He probably didn’t check the primality of 51 as carefully as I checked the primality of 53 at the beginning of this post. (From the provided chat transcript, sleep deprivation may have also had something to do with it.)

If you tried to make 10,000 statements with 99.99% certainty, sooner or later you would get careless. Heck, before I started writing this post, I tried typing up a list of statements I was sure of, and it wasn’t long before I’d typed 1 + 0 = 10 (I’d meant to type 1 + 9 = 10. Oops.) But the fact that, as the exercise went on, you’d start including statements that weren’t really as certain as the first statement doesn’t mean you couldn’t be justified in being 99.99% certain of that first statement.

I almost feel like I should apologize for nitpicking this, because I agree with the main point of the “Infinite Certainty” post, that you should never assign a proposition probability 1. Assigning a proposition a probability of 1 implies that no evidence could ever convince you otherwise, and I agree that that’s bad. But I think it’s important to say that you’re often justified in putting a lot of 9s after the decimal point in your probability assignments, for a few reasons.

One reason is that arguments in the style of Eliezer’s “10,000 independent statements” argument lead to inconsistencies. From another post of Eliezer’s:

I would be substantially more alarmed about a lottery device with a well-defined chance of 1 in 1,000,000 of destroying the world, than I am about the Large Hadron Collider being switched on.

On the other hand, if you asked me whether I could make one million statements of authority equal to “The Large Hadron Collider will not destroy the world”, and be wrong, on average, around once, then I would have to say no.

What should I do about this inconsistency? I’m not sure, but I’m certainly not going to wave a magic wand to make it go away. That’s like finding an inconsistency in a pair of maps you own, and quickly scribbling some alterations to make sure they’re consistent.

I would also, by the way, be substantially more worried about a lottery device with a 1 in 1,000,000,000 chance of destroying the world, than a device which destroyed the world if the Judeo-Christian God existed. But I would not suppose that I could make one billion statements, one after the other, fully independent and equally fraught as “There is no God”, and be wrong on average around once.

Okay, so that’s just Eliezer. But in a way, it’s just a sophisticated version of a mistake a lot of novice students of probability make. Many people, when you tell them they can never be 100% certain of anything, respond by switching to saying 99% or 99.9% whenever they previously would have said 100%.

In a sense they have the right idea—there are lots of situations where, while the appropriate probability is not 0, it’s still negligible. But 1% or even 0.1% isn’t negligible enough in many contexts. Generally, you should not be in the habit of doing things that have a 0.1% chance of killing you. Do so on a daily basis, and on average you will be dead in less than three years. Conversely, if you mistakenly assign a 0.1% chance that you will die each time you leave the house, you may never leave the house.
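
To spell out the arithmetic behind “dead in less than three years” (a back-of-the-envelope sketch, assuming the 0.1% daily risks are independent):

    # A 0.1% independent chance of death per day gives a geometric distribution.
    p = 0.001
    print(1 / p / 365.25)      # mean time to death: ~1000 days, i.e. ~2.7 years
    print(0.999 ** (3 * 365))  # only ~0.33 chance of surviving three full years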

Furthermore, the ways this can trip people up aren’t just hypothetical. Christian apologist William Lane Craig claims the skeptical slogan “extraordinary claims require extraordinary evidence” is contradicted by probability theory, because it actually wouldn’t take all that much evidence to convince us that, for example, “the numbers chosen in last night’s lottery were 4, 2, 9, 7, 8 and 3.” The correct response to this argument is to say that the prior probability of a miracle occurring is orders of magnitude smaller than mere one-in-a-million odds.
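
For concreteness, here is where the “one in a million” figure comes from, assuming the lottery in the example draws six independent digits from 0 to 9 (the post doesn’t specify the format, so this is an illustrative assumption):

    # Assumed pick-six-digits format: any specific sequence such as 4-2-9-7-8-3
    # has probability (1/10) ** 6.
    print((1 / 10) ** 6)  # 1e-06, i.e. one in a million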

I suspect many novice students of probability will be uncomfortable with that response. They shouldn’t be, though. After all, if you tried to convince the average Christian of Joseph Smith’s story with the golden plates, they’d require much more evidence than they’d need to be convinced that last night’s lottery numbers were 4, 2, 9, 7, 8 and 3. That suggests their prior for Mormonism is much less than one in a million.

This also matters a lot for thinking about futurism and existential risk. If someone is in the habit of using “99%” as shorthand for “basically 100%,” they will have trouble grasping the thought “I am 99% certain this futuristic scenario will not happen, but the stakes are high enough that I need to take the 1% chance into account in my decision making.” Actually, I suspect that problems in this vicinity explain many of the problems ordinary people (read: including average scientists) have in thinking about existential risk.

I agree with what Eliezer has said about being wary of picking numbers out of thin air and trying to do math with them. (Or if you are going to pick numbers out of thin air, at least be ready to abandon your numbers at the drop of a hat.) Such advice goes double for dealing with very small probabilities, which humans seem to be especially bad at thinking about.

But it’s worth trying to internalize a sense that there are several very different categories of improbable claims, along the lines of:

  • Things that have a probability of something like 1%. These are things you really don’t want to bet your life on if you can help it.

  • Things that have a probability of something like one in a million. Includes many common ways to die that don’t involve doing anything most people would regard as especially risky. For example, these stats suggest the odds of a 100 mile car trip killing you are somewhere on the order of one in a million (a rough back-of-the-envelope check follows this list).

  • Things whose probability is truly negligible outside alternate universes where your evidence is radically different than what it actually is. For example, the risk of the Earth being overrun by demons.
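
Here’s the rough back-of-the-envelope check promised in the second bullet. The fatality rate is an assumed round number, in the vicinity of commonly cited US figures of roughly one traffic death per hundred million vehicle-miles:

    # Assumed rate: ~1 death per 100 million vehicle-miles (rough round number).
    deaths_per_mile = 1 / 100_000_000
    print(100 * deaths_per_mile)  # ~1e-06: a 100 mile trip is on the order of one in a million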

Furthermore, it’s worth trying to learn to think coherently about which claims belong in which category. That includes not being afraid to assign claims to the third category when necessary.

Added: I also recommend the links in this comment by komponisto.