Infinite Certainty

In “Absolute Authority,” I argued that you don’t need infinite certainty:

If you have to choose between two alternatives A and B, and you somehow succeed in establishing knowably certain well-calibrated 100% confidence that A is absolutely and entirely desirable and that B is the sum of everything evil and disgusting, then this is a sufficient condition for choosing A over B. It is not a necessary condition . . . You can have uncertain knowledge of relatively better and relatively worse options, and still choose. It should be routine, in fact.

Concerning the proposition that 2 + 2 = 4, we must distinguish between the map and the territory. Given the seeming absolute stability and universality of physical laws, it’s possible that never, in the whole history of the universe, has any particle exceeded the local lightspeed limit. That is, the lightspeed limit may be not just true 99% of the time, or 99.9999% of the time, or (1 − 1/googolplex) of the time, but simply always and absolutely true.

But whether we can ever have absolute confidence in the lightspeed limit is a whole ’nother question. The map is not the territory.

It may be entirely and wholly true that a student plagiarized their assignment, but whether you have any knowledge of this fact at all—let alone absolute confidence in the belief—is a separate issue. If you flip a coin and then don’t look at it, it may be completely true that the coin is showing heads, and you may be completely unsure of whether the coin is showing heads or tails. A degree of uncertainty is not the same as a degree of truth or a frequency of occurrence.

The same holds for mathematical truths. It’s questionable whether the statement “2 + 2 = 4” or “In Peano arithmetic, SS0 + SS0 = SSSS0” can be said to be true in any purely abstract sense, apart from physical systems that seem to behave in ways similar to the Peano axioms. Having said this, I will charge right ahead and guess that, in whatever sense “2 + 2 = 4” is true at all, it is always and precisely true, not just roughly true (“2 + 2 actually equals 4.0000004”) or true 999,999,999,999 times out of 1,000,000,000,000.

I’m not totally sure what “true” should mean in this case, but I stand by my guess. The credibility of “2 + 2 = 4 is always true” far exceeds the credibility of any particular philosophical position on what “true,” “always,” or “is” means in the statement above.

This doesn’t mean, though, that I have absolute confidence that 2 + 2 = 4. See the previous discussion on how to convince me that 2 + 2 = 3, which could be done using much the same sort of evidence that convinced me that 2 + 2 = 4 in the first place. I could have hallucinated all that previous evidence, or I could be misremembering it. In the annals of neurology there are stranger brain dysfunctions than this.

So if we attach some probability to the statement “2 + 2 = 4,” then what should the probability be? What you seek to attain in a case like this is good calibration—statements to which you assign “99% probability” come true 99 times out of 100. This is actually a hell of a lot more difficult than you might think. Take a hundred people, and ask each of them to make ten statements of which they are “99% confident.” Of the 1,000 statements, do you think that around 10 will be wrong?
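
A minimal sketch of what perfect calibration would predict here, using only the figures above (the helper function is purely illustrative, not part of the original argument):

    # Illustrative sketch: how many of those 1,000 statements a perfectly
    # calibrated speaker would expect to get wrong at 99% confidence.
    def expected_misses(n_statements, confidence):
        # A perfectly calibrated speaker is wrong with frequency (1 - confidence).
        return n_statements * (1 - confidence)

    print(round(expected_misses(1_000, 0.99)))  # 10: about ten wrong out of a thousand

That is what perfect calibration would predict; the next paragraphs are about how far actual humans fall short of it.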

I am not going to discuss the actual experiments that have been done on calibration—you can find them in my book chapter on cognitive biases and global catastrophic risk1—because I’ve seen that when I blurt this out to people without proper preparation, they thereafter use it as a Fully General Counterargument, which somehow leaps to mind whenever they have to discount the confidence of someone whose opinion they dislike, and fails to be available when they consider their own opinions. So I try not to talk about the experiments on calibration except as part of a structured presentation of rationality that includes warnings against motivated skepticism.

But the observed calibration of human beings who say they are “99% confident” is not 99% accuracy.

Suppose you say that you’re 99.99% confident that 2 + 2 = 4. Then you have just asserted that you could make 10,000 independent statements, in which you repose equal confidence, and be wrong, on average, around once. Maybe for 2 + 2 = 4 this extraordinary degree of confidence would be possible: “2 + 2 = 4” is extremely simple, and mathematical as well as empirical, and widely believed socially (not with passionate affirmation but just quietly taken for granted). So maybe you really could get up to 99.99% confidence on this one.

I don’t think you could get up to 99.99% confidence for assertions like “53 is a prime number.” Yes, it seems likely, but by the time you tried to set up protocols that would let you assert 10,000 independent statements of this sort—that is, not just a set of statements about prime numbers, but a new protocol each time—you would fail more than once.2

Yet the map is not the territory: If I say that I am 99% confident that 2 + 2 = 4, it doesn’t mean that I think “2 + 2 = 4” is true to within 99% precision, or that “2 + 2 = 4” is true 99 times out of 100. The proposition in which I repose my confidence is the proposition that “2 + 2 = 4 is always and exactly true,” not the proposition “2 + 2 = 4 is mostly and usually true.”

As for the notion that you could get up to 100% confidence in a mathematical proposition—well, really now! If you say 99.9999% confidence, you’re implying that you could make one million equally fraught statements, one after the other, and be wrong, on average, about once. That’s around a solid year’s worth of talking, if you can make one assertion every 20 seconds and you talk for 16 hours a day.

Assert 99.9999999999% confidence, and you’re taking it up to a trillion. Now you’re going to keep talking at that pace for the better part of a million years, and not be wrong even once?
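
As a rough check on that arithmetic, a quick sketch assuming nothing beyond the pace stated above (one assertion every 20 seconds, 16 hours of talking a day):

    # Rough check of the talking-time arithmetic, using the pace stated
    # in the text: one assertion every 20 seconds, 16 hours a day.
    assertions_per_day = 16 * 3600 // 20                   # 2,880 assertions per day

    days_for_a_million = 1_000_000 / assertions_per_day
    print(round(days_for_a_million))                        # ~347 days: about a solid year

    years_for_a_trillion = 1_000_000_000_000 / assertions_per_day / 365
    print(round(years_for_a_trillion))                       # roughly 950,000 years of talking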

Assert a confidence of (1 − 1/googolplex) and your ego far exceeds that of mental patients who think they’re God.

And a googolplex is a lot smaller than even relatively small inconceivably huge numbers like 3 ↑↑↑ 3. But even a confidence of (1 − 1/(3 ↑↑↑ 3)) isn’t all that much closer to PROBABILITY 1 than being 90% sure of something.
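
One way to see what that last sentence is claiming (an illustrative log-odds framing, not spelled out in the original text): map each probability p to its log-odds, and any finite confidence, however extreme, remains infinitely far from probability 1.

    % Illustrative log-odds comparison; a probability p corresponds to
    % log-odds  \ell(p) = \ln\bigl(p/(1-p)\bigr).
    \[
      \ell(0.9) \approx 2.2,
      \qquad
      \ell\!\Bigl(1 - \tfrac{1}{3\uparrow\uparrow\uparrow 3}\Bigr)
        \approx \ln\bigl(3\uparrow\uparrow\uparrow 3\bigr)
        \quad\text{(enormous, but finite)},
      \qquad
      \ell(1) = +\infty .
    \]

Both 90% and (1 − 1/(3 ↑↑↑ 3)) sit at a finite distance from even odds; probability 1 does not, which is the sense in which neither is really “close” to it.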

If all else fails, the hypothetical Dark Lords of the Matrix, who are right now tampering with your brain’s credibility assessment of this very sentence, will bar the path and defend us from the scourge of infinite certainty.

Am I absolutely sure of that?

Why, of course not.

As Rafal Smigrodzki once said:

I would say you should be able to assign a less than 1 certainty level to the mathematical concepts which are necessary to derive Bayes’s rule itself, and still practically use it. I am not totally sure I have to be always unsure. Maybe I could be legitimately sure about something. But once I assign a probability of 1 to a proposition, I can never undo it. No matter what I see or learn, I have to reject everything that disagrees with the axiom. I don’t like the idea of not being able to change my mind, ever.
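
A quick way to see why a probability of 1 can never be undone (an illustrative derivation, not part of the quotation): plug a prior of exactly 1 into Bayes’s theorem and the evidence drops out entirely.

    % Illustrative: with a prior P(H) = 1, no evidence E can move the posterior.
    \[
      P(H \mid E)
        = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
        = \frac{P(E \mid H)\cdot 1}{P(E \mid H)\cdot 1 + P(E \mid \neg H)\cdot 0}
        = 1 .
    \]

Whatever E you observe (so long as P(E | H) > 0), the posterior stays pinned at 1, which is exactly the “can never undo it” in the quotation.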

1Eliezer Yudkowsky, “Cognitive Biases Potentially Affecting Judgment of Global Risks,” in Global Catastrophic Risks, ed. Nick Bostrom and Milan M. Ćirković (New York: Oxford University Press, 2008), 91–119.

2Peter de Blanc has an amusing anecdote on this point: http://www.spaceandgames.com/?p=27. (I told him not to do it again.)