# Infinite Certainty

In “Absolute Authority,” I argued that you don’t need infinite certainty:

If you have to choose between two alternatives A and B, and you somehow succeed in establishing knowably certain well-calibrated 100% confidence that A is absolutely and entirely desirable and that B is the sum of everything evil and disgusting, then this is a sufficient condition for choosing A over B. It is not a necessary condition . . . You can have uncertain knowledge of relatively better and relatively worse options, and still choose. It should be routine, in fact.

Concerning the proposition that 2 + 2 = 4, we must distinguish between the map and the territory. Given the seeming absolute stability and universality of physical laws, it’s possible that never, in the whole history of the universe, has any particle exceeded the local lightspeed limit. That is, the lightspeed limit may be not just true 99% of the time, or 99.9999% of the time, or (1 − 1/googolplex) of the time, but simply always and absolutely true.

But whether we can ever have absolute confidence in the lightspeed limit is a whole ’nother question. The map is not the territory.

It may be entirely and wholly true that a student plagiarized their assignment, but whether you have any knowledge of this fact at all—let alone absolute confidence in the belief—is a separate issue. If you flip a coin and then don’t look at it, it may be completely true that the coin is showing heads, and you may be completely unsure of whether the coin is showing heads or tails. A degree of uncertainty is not the same as a degree of truth or a frequency of occurrence.

The same holds for mathematical truths. It’s questionable whether the statement “2 + 2 = 4” or “In Peano arithmetic, SS0 + SS0 = SSSS0” can be said to be true in any purely abstract sense, apart from physical systems that seem to behave in ways similar to the Peano axioms. Having said this, I will charge right ahead and guess that, in whatever sense “2 + 2 = 4” is true at all, it is always and precisely true, not just roughly true (“2 + 2 actually equals 4.0000004”) or true 999,999,999,999 times out of 1,000,000,000,000.

I’m not totally sure what “true” should mean in this case, but I stand by my guess. The credibility of “2 + 2 = 4 is always true” far exceeds the credibility of any particular philosophical position on what “true,” “always,” or “is” means in the statement above.

This doesn’t mean, though, that I have absolute confidence that 2 + 2 = 4. See the previous discussion on how to convince me that 2 + 2 = 3, which could be done using much the same sort of evidence that convinced me that 2 + 2 = 4 in the first place. I could have hallucinated all that previous evidence, or I could be misremembering it. In the annals of neurology there are stranger brain dysfunctions than this.

So if we attach some probability to the statement “2 + 2 = 4,” then what should the probability be? What you seek to attain in a case like this is good calibration—statements to which you assign “99% probability” come true 99 times out of 100. This is actually a hell of a lot more difficult than you might think. Take a hundred people, and ask each of them to make ten statements of which they are “99% confident.” Of the 1,000 statements, do you think that around 10 will be wrong?
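If the statements really were independent and each had a 99% chance of being true, the number of errors would be a binomial random variable. A quick sketch of the arithmetic, using the numbers from the paragraph above:

```python
# 1,000 statements, each with a 1% chance of being wrong,
# if the speakers really are calibrated at "99% confident."
n, p_wrong = 1000, 0.01

expected_errors = n * p_wrong                      # binomial mean
std_errors = (n * p_wrong * (1 - p_wrong)) ** 0.5  # binomial std dev

print(round(expected_errors))  # 10
print(round(std_errors, 2))    # 3.15
```

So around 10 wrong statements, give or take 3, is what perfect calibration would predict; the point of the surrounding discussion is that real speakers do far worse.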

I am not going to discuss the actual experiments that have been done on calibration—you can find them in my book chapter on cognitive biases and global catastrophic risk1—because I’ve seen that when I blurt this out to people without proper preparation, they thereafter use it as a Fully General Counterargument, which somehow leaps to mind whenever they have to discount the confidence of someone whose opinion they dislike, and fails to be available when they consider their own opinions. So I try not to talk about the experiments on calibration except as part of a structured presentation of rationality that includes warnings against motivated skepticism.

But the observed calibration of human beings who say they are “99% confident” is not 99% accuracy.

Suppose you say that you’re 99.99% confident that 2 + 2 = 4. Then you have just asserted that you could make 10,000 independent statements, in which you repose equal confidence, and be wrong, on average, around once. Maybe for 2 + 2 = 4 this extraordinary degree of confidence would be possible: “2 + 2 = 4” is extremely simple, and mathematical as well as empirical, and widely believed socially (not with passionate affirmation but just quietly taken for granted). So maybe you really could get up to 99.99% confidence on this one.

I don’t think you could get up to 99.99% confidence for assertions like “53 is a prime number.” Yes, it seems likely, but by the time you tried to set up protocols that would let you assert 10,000 independent statements of this sort—that is, not just a set of statements about prime numbers, but a new protocol each time—you would fail more than once.2

Yet the map is not the territory: if I say that I am 99% confident that 2 + 2 = 4, it doesn’t mean that I think “2 + 2 = 4” is true to within 99% precision, or that “2 + 2 = 4” is true 99 times out of 100. The proposition in which I repose my confidence is the proposition that “2 + 2 = 4 is always and exactly true,” not the proposition “2 + 2 = 4 is mostly and usually true.”

As for the notion that you could get up to 100% confidence in a mathematical proposition—well, really now! If you say 99.9999% confidence, you’re implying that you could make one million equally fraught statements, one after the other, and be wrong, on average, about once. That’s around a solid year’s worth of talking, if you can make one assertion every 20 seconds and you talk for 16 hours a day.
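The year-of-talking figure checks out. A sketch of the arithmetic:

```python
# One million assertions, one every 20 seconds, 16 hours of talking a day.
statements = 1_000_000
seconds_each = 20
seconds_per_day = 16 * 3600  # 57,600 seconds of talking per day

days = statements * seconds_each / seconds_per_day
print(round(days))  # 347 -- about a solid year
```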

Assert 99.9999999999% confidence, and you’re taking it up to a trillion. Now you’re going to talk for a hundred human lifetimes, and not be wrong even once?

Assert a confidence of (1 − 1/googolplex) and your ego far exceeds that of mental patients who think they’re God.

And a googolplex is a lot smaller than even relatively small inconceivably huge numbers like 3 ↑↑↑ 3. But even a confidence of (1 − 1/(3 ↑↑↑ 3)) isn’t all that much closer to PROBABILITY 1 than being 90% sure of something.
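For readers unfamiliar with the arrows: this is Knuth’s up-arrow notation, in which each extra arrow iterates the operation below it. A sketch of the definition, computable only for the tiniest cases:

```python
def up(a: int, arrows: int, b: int) -> int:
    """Knuth's up-arrow: one arrow is exponentiation; each extra
    arrow iterates the operation with one fewer arrow."""
    if arrows == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, arrows - 1, up(a, arrows, b - 1))

print(up(3, 1, 3))  # 3 ^ 3  = 27
print(up(3, 2, 3))  # 3 ^^ 3 = 3**27 = 7625597484987
# 3 ^^^ 3 = 3 ^^ (3 ^^ 3): a power tower of 3s about 7.6 trillion
# levels high -- hopelessly beyond computation, which is the point.
```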

If all else fails, the hypothetical Dark Lords of the Matrix, who are right now tampering with your brain’s credibility assessment of this very sentence, will bar the path and defend us from the scourge of infinite certainty.

Am I absolutely sure of that?

Why, of course not.

As Rafal Smigrodzki once said:

I would say you should be able to assign a less than 1 certainty level to the mathematical concepts which are necessary to derive Bayes’s rule itself, and still practically use it. I am not totally sure I have to be always unsure. Maybe I could be legitimately sure about something. But once I assign a probability of 1 to a proposition, I can never undo it. No matter what I see or learn, I have to reject everything that disagrees with the axiom. I don’t like the idea of not being able to change my mind, ever.

1Eliezer Yudkowsky, “Cognitive Biases Potentially Affecting Judgment of Global Risks,” in Global Catastrophic Risks, ed. Nick Bostrom and Milan M. Ćirković (New York: Oxford University Press, 2008), 91–119.

2Peter de Blanc has an amusing anecdote on this point: http://www.spaceandgames.com/?p=27. (I told him not to do it again.)

• Let me ask you in reply, Paul, if you think you would refuse to change your mind about the “law of non-contradiction” no matter what any mathematician could conceivably say to you—if you would refuse to change your mind even if every mathematician on Earth first laughed scornfully at your statement, then offered to explain the truth to you over a couple of hours… Would you just reply calmly, “But I know I’m right,” and walk away? Or would you, on this evidence, update your “zero probability” to something somewhat higher?

Why can’t I repose a very tiny credence in the negation of the law of non-contradiction? Conditioning on this tiny credence would produce various null implications in my reasoning process, which end up being discarded as incoherent—I don’t see that as a killer objection.

In fact, the above just translates the intuitive reply, “What if a mathematician convinces me that ‘snow is white’ is both true and false? I don’t consider myself entitled to rule it out absolutely, but I can’t imagine what else would follow from that, so I’ll wait until it happens to worry about it.”

As for Descartes’s little chain of reasoning, it involves far too many deep, confusing, and ill-defined concepts to be assigned a probability anywhere near 1. I am not sure anything exists, let alone that I do; I am far more confident that angular momentum is conserved in this universe than I am that the statement “the universe exists” represents anything but confusion.

The one that I confess is giving me the most trouble is P(A|A). But I would prefer to call that a syntactic elimination rule for probabilistic reasoning, or perhaps a set equality between events, rather than claiming that there’s some specific proposition that has “Probability 1.”

• I am not sure anything exists, let alone that I do; I am far more confident that angular momentum is conserved in this universe than I am that the statement “the universe exists” represents anything but confusion.

I don’t know what the above sentence means. You must be using the word “exist” differently than I do.

• Let me ask you in reply, Paul, if you think you would refuse to change your mind about the “law of non-contradiction” no matter what any mathematician could conceivably say to you—if you would refuse to change your mind even if every mathematician on Earth first laughed scornfully at your statement, then offered to explain the truth to you over a couple of hours… Would you just reply calmly, “But I know I’m right,” and walk away? Or would you, on this evidence, update your “zero probability” to something somewhat higher?

This seems to me to be a very different question. “Do I doubt A?” and “Could any experience lead me to doubt A?” are different questions. They are equivalent for ideal reasoners, and we approximate ideal reasoners closely enough that treating the questions as interchangeable is typically a useful heuristic. Nonetheless, if absolute certainty is an intelligible concept at all, then I can imagine

1. being absolutely certain now that A is true, while

2. thinking it likely that some stream of words or experiences in the future could so confuse or corrupt me that I would doubt A.

But, if I allow that I could be corrupted into doubting what I am now certain is true, how can I be certain that my present certainty isn’t a result of such a corruption? At this point, my recursive justification would hit bottom: I am certain that my evaluation of P(A) as equal to 1 is not the result of a corruption because I am certain that A is true. Sure, the corrupted future version of myself would look back on my present certainty as mistaken. But that version of me is corrupted, so why would I listen to him?

ETA:

In your actual scenario, where all other mathematicians scorn my belief that ~(P & ~P), I would probably conclude that everyone is doing something very different with logical symbols than what I thought they were doing. If they persisted in not understanding why I thought that ~(P & ~P) followed from the nature of conjunction, I would conclude that my brain works in such a different way that I cannot even map my concepts of basic logical operations into the concepts that other people use. I would start to doubt that my concept of conjunction is as useful as I thought (since everyone else apparently prefers some alternative), so I would spend a lot of effort trying to understand the concepts that they use in place of mine. I would consider it pretty likely that I would choose to use their concepts as soon as I understood them well enough to do so.

• Eli said:

Peter de Blanc has an amusing anecdote on this point, which he is welcome to retell in the comments.

Here’s the anecdote.

• We can go even stronger than mathematical truths. How about the following statement?

~(P & ~P)

I think it’s safe to say that if anything is true, that statement (the flipping law of non-contradiction) is true. And it’s the precondition for any other knowledge (for no other reason than that if you deny it, you can prove anything). I mean, there are logics that permit contradictions, but then you’re in a space that’s completely alien to normal reasoning.

So that’s lots stronger than 2+2=4. You can reason without 2+2=4. Maybe not very well, but you can do it.

So Eliezer, do you have a probability of 1 in the law of non-contradiction?

• The truth of probability theory itself depends on non-contradiction, so I don’t really think that probability is a valid framework for reasoning about the truth of fundamental logic, because if logic is suspect, probability itself becomes suspect.

• Gray Area said: “Amusingly, this is one of the more controversial tautologies to bring up. This is because constructivist mathematicians reject this statement.”

Actually, constructivist mathematicians reject the law of the excluded middle, (P v ~P), not the law of non-contradiction. (They are not equivalent in intuitionistic logic; the law of non-contradiction is actually equivalent to the double negation of the excluded middle.)
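Both halves of that parenthetical can be checked constructively; here is a sketch in Lean 4, using no classical axioms:

```lean
-- Non-contradiction is intuitionistically provable:
theorem nonContradiction (P : Prop) : ¬(P ∧ ¬P) :=
  fun ⟨hp, hnp⟩ => hnp hp

-- So is the double negation of excluded middle, even though
-- excluded middle (P ∨ ¬P) itself is not:
theorem dnExcludedMiddle (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr fun hp => h (Or.inl hp))
```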

• Ben, you’re making an obvious error: you are taking the statement that “P never equals 1” has a probability of less than 1 to mean that in some proportion of cases, we expect the probability to equal 1. This would be the same as supposing that assigning the light-speed limit a probability of less than 1 implies that we think that the speed of light is sometimes exceeded.

But it doesn’t mean this; it means that if we were to enunciate enough supposed physical laws, we would sometimes be mistaken. In the same way, a probability of less than 1 for the proposition that we should never assign a probability of 1 simply means that if we take enough supposed claims regarding mathematics, logic, and probability theory, each of which we take to be as certain as the claim rejecting a probability of unity, we would sometimes be mistaken. This doesn’t mean that any proposition has a probability of unity.

I have personally witnessed a room of people nod their heads in agreement with a definition of a particular term in software testing. Then, when we discussed examples of that term in action, we discovered that many of us, having agreed with the words in the definition, had very different interpretations of those words. To my great discouragement, I learned that agreeing on a sign is not the same as agreeing on the interpretant or the object. (Sign, object, and interpretant are the three parts of Peirce’s semiotic triangle.)

In the case of 2+2=4, I think I know what that means, but when Euclid, Euler, or Laplace thought of 2+2=4, were they thinking the same thing I am? Maybe they were, but I’m not confident of that. And when someday an artificial intelligence ponders 2+2=4, will it be thinking what I’m thinking?

I feel 100% positive that 2+2=4 is true, and 100% positive that I don’t entirely know what I mean by “2+2=4”. I am also not entirely sure what other people mean by it. Maybe they mean “any two objects, combined with two objects, always results in four objects”, which is obviously not true.

In thinking about certainty, it helps me to consider the history of the number zero. That something so obvious could be unknown (or unrecognized as important) for so long is sobering. The Greeks would also have sworn that the square root of negative one has no meaning and certainly no use in mathematics. 100% certain! The Pythagoreans would have sworn it just before stoning you to death for math heresy.

• Mr. Bach,

I think you’re right to point out that “number” meant a different thing to the Greeks; but I think that should make us more, not less, confident that “2+2=4.” If the Greeks had meant the same thing by number as modern mathematicians do, then they were wrong to be very confident that the square root of negative one was not a number. However, the square root of negative one does in fact fall short of being a simple, definite multitude—what Euclid, at least, meant by number. So if they were in error, it was the practical error of drawing an unnecessary distinction, not a contradictory one.

Perhaps “100% certain” or “P=1” could mean that I believe something to be true with the same level of certainty as that by which I believe certainty and probability to be coherent terms. We can only evaluate judgments if we accept “judgment” as a valid kind of thought anyway.

• Yeah, imagine what a mess it would be to try to rewrite the axioms of probability as themselves probabilistic!

• If you say 99.9999% confidence, you’re implying that you could make one million equally fraught statements, one after the other, and be wrong, on average, about once.

Excellent post overall, but that part seems weakest—we suffer from an unavailability problem, in that we can’t just think up random statements with those properties. When I said I agreed 99.9999% with “P(P is never equal to 1)”, it doesn’t mean that I feel I could produce such a list—just that I have a very high belief that such a list could exist.

An intermediate position would be to come up with a hundred equally fraught statements in a randomly chosen narrow area, and extrapolate from that result.

• Also (and sorry for the rapid-fire commenting), do you accept that we can have conditional probabilities of one? For example, P(A|A) = 1? And, for that matter, P(B|(A-->B, A)) = 1? If so, I believe I can force you to accept at least probabilities of 1 in sound deductive arguments. And perhaps (I’ll have to think about it some more) in the logical laws that get you to the sound deductive arguments. I’m just trying to get the camel’s nose in the tent here...

• I don’t think you could get up to 99.99% confidence for assertions like “53 is a prime number”. Yes, it seems likely, but by the time you tried to set up protocols that would let you assert 10,000 independent statements of this sort—that is, not just a set of statements about prime numbers, but a new protocol each time—you would fail more than once.

If you forced me to come up with 10,000 statements I knew to >= 99.99% confidence, I would find it easy, given sufficient time. Most of them would have probability much, much more than 99.99%, however.

Here is a sample of the list: I am not the Duke of Edinburgh. Ronald McDonald is not on my roof. I am not currently in a bath. I am currently making a list of things I believe are highly likely. Eliezer Yudkowsky is not a paperclip-maximising AI. I am not the 10,000th sentient being ever to have existed. The Queen is not a cocker spaniel in disguise. I am not a P-zombie.

53 has no prime factors other than itself. (This one has much greater certainty, as I can hold in my mind the following facts simultaneously: “the root of 53 is less than 8; 53 is not in the 7 times table; 53 is not in the 5 times table; 53 is not in the 3 times table; and 53 is odd.” For 53 not to be prime would require, as for 2 + 2 not to equal 4, that I be very insane. My probability of being that insane is less than 1 in 10,000, and of having that specific insanity is lower still.)
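The checklist in that parenthesis (root of 53 below 8, so only divisors up to 7 matter) is exactly trial division; a minimal sketch:

```python
def is_prime(n: int) -> bool:
    """Trial division: n > 1 is prime iff no d with d*d <= n divides it."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:  # for 53 this only tries d = 2..7
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(53))  # True
print(is_prime(51))  # False: 51 = 3 * 17, an easy number to mistake for 53
```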

The difficult part is in finding 10,000 statements with precisely 1 in 10,000 odds, not finding 10,000 statements with less than 1 in 10,000 odds.

• I perceive the intention of the original assertion is that even in this case you would still fail in making 10,000 independent statements of such a sort—i.e., in trying to do it, you are quite likely to somehow make a mistake at least once, say, by a typo, a slip of the tongue, accidentally omitting a ‘not’, or whatever. All it takes to fail on a statement like “53 is prime” is for you to not notice that it actually says ‘51 is prime’, or to make some mistake when dividing.

Any random statement of yours has a ‘ceiling’ of x-nines accuracy.

Even any random statement of yours where it is known that you aren’t rushed, tired, sleepy, or on medication, are sober, and had a chance and intent to review it several times still has some accuracy ceiling, a couple of orders of magnitude higher, but still definitely not 1.

• If you can make a statement every two seconds, you could actually stand up and do this. If I could get sponsorship to offset existential risk, I’d take this challenge on: actually stand up for the best part of a day and make 10,000 true statements with nary a false one.

I would, however, go for less variety than you if I wanted to be confident of winning this challenge. “My teeth are smaller than Jupiter. The Queen is smaller than Jupiter. A Ford Mondeo is smaller than Jupiter...”

• Those statements aren’t even approximately independent, though: if Jupiter turns out to be really small, they’re all true. That’s why mine were so weird—the independence clause.*

*(They still aren’t actually independent, but I’m >99.99% sure you couldn’t make a set of statements that were.)

However, it’s possible to make a set of statements that are mutually exclusive, which might actually be a superior task: “I am not the 11,043rd sentient entity ever to exist. I am not the 21,043rd sentient entity ever to exist,” etc.

• Why is the uncertainty fetish so appealing that people will entertain such weird ideas to retain it?

Why is the certainty fetish so appealing that people will ignore the obvious fact that all conclusions are contingent?

• Huh, I must be slowed down because it’s late at night… P(A|A) is the simplest case of all. P(x|y) is defined as P(x,y)/P(y). P(A|A) is defined as P(A,A)/P(A) = P(A)/P(A) = 1. The ratio of these two probabilities may be 1, but I deny that there’s any actual probability that’s equal to 1. P(|) is a mere notational convenience, nothing more. Just because we conventionally write this ratio using a “P” symbol doesn’t make it a probability.
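The ratio definition in that comment can be spelled out concretely; a toy sketch with exact fractions (illustrative only):

```python
from fractions import Fraction

# P(x|y) is defined as P(x, y) / P(y).  With x = y = A, the joint event
# "A and A" is just A, so the ratio collapses to P(A)/P(A) = 1 whenever
# P(A) > 0 -- regardless of what P(A) actually is.
def conditional(p_joint: Fraction, p_given: Fraction) -> Fraction:
    return p_joint / p_given

for p_a in (Fraction(1, 3), Fraction(1, 100), Fraction(99, 100)):
    print(conditional(p_a, p_a))  # 1 each time
```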

• But it does obey the Kolmogorov axioms (it can’t be greater than 1, for instance); that seems important.

• Good point about infinite certainty, poor example.

Assert 99.9999999999% confidence, and you’re taking it up to a trillion. Now you’re going to talk for a hundred human lifetimes, and not be wrong even once?

Leaky induction. Didn’t that feel a little forced?

evidence that convinced me that 2 + 2 = 4 in the first place.

“(the sum of) 2 + 2” means “4”; or, to make it more obvious, “1 + 1” means “2”. These aren’t statements about the real world*, hence they’re not subject to falsification, they contain no component of ignorance, and they don’t fall under the purview of probability theory.

*Here your counter has been that meaning is in the brain and the brain is part of the real world. Yet such a line of reasoning, even if it weren’t based on a category error, proves too much: it cuts the ground from under your absolute certainty in the Bayesian approach—the same certainty you needed in order to make accurate statements about 99.99---% probabilities in the first place.

The laws of probability are only useful for rationality if you know when they do and don’t apply.

1. We can be wrong about what the words we use mean.

2. What category error would that be?

3. We don’t have absolute certainty in ‘the Bayesian approach’. It would be counterproductive at best if we did, since then our certainty would be too great for evidence from the world to change our minds; hence we’d have no reason to think that if the evidence did contradict ‘the Bayesian approach’, we’d believe differently. In other words, we’d have no reason as Bayesians to believe our belief, though we’d remain irrationally caught in the grips of that delusion.

4. Even assuming that it’s a matter of word meanings that the four millionth digit of pi is 0, you can still be uncertain about that fact, and Bayesian reasoning applies to such uncertainty in precisely the same way that it applies to anything else. You can acquire new evidence that makes you revise your beliefs about mathematical theorems, etc.

• The ratio of these two probabilities may be 1, but I deny that there’s any actual probability that’s equal to 1. P(|) is a mere notational convenience

I’d have to disagree with that. The axioms I’ve seen of probability/measure theory do not make the case that P() is a probability while P(|) is not—they are both, ultimately, the same type of object (just taken from different measurable sets).

However, you don’t need to appeal to this type of reasoning to get rid of P(A|A) = 1. Your probability of correctly remembering the beginning of the statement when reaching the end is not 1—hence there is room for doubt. Even your probability of correctly understanding the statement is not 1.

P(P is never equal to 1) = ?

I know, I know, ‘this statement is not true’.

Would this be an argument for allowing “probabilities of probabilities”? So that you can assign 99.9999% (that’s enough 9’s, I feel) to the statement “P(P is never equal to 1)”.

• P(P is never equal to 1) = ?

I know, I know, ‘this statement is not true’. But we’ve long since left the real world anyway. However, if you tell me the above is less than one, that means that in some cases, infinite certainty can exist, right?

Get some sleep first though, Eliezer and Paul. It’s 9.46 a.m. here.

• P(P is never equal to 1) = ?

He answered that.

Am I absolutely sure of that?

Why, of course not.

However, if you tell me the above is less than one, that means that in some cases, infinite certainty can exist, right?

It means that there might be cases where infinite certainty can exist. There also might be cases where the speed of light can be exceeded, conservation of energy can be violated, etc. There probably aren’t cases of any of these.

• If you get past that one, I’ll offer you another.

“There is some entity [even if only a simulation] that is having this thought.” Surely you have a probability of 1 in that. Or you’re going to have to answer to Descartes’s upload, yo.

• Well, maybe you fell asleep halfway through that thought, and thought the last half after you woke, without noticing you slept.

• That doesn’t answer it. You still had the thought, even with some time lapse. But even if you somehow say that doesn’t count, a trivial fix which that supposition totally cannot answer would be “There is some entity [even if only a simulation] that is having at least a portion of this thought”.

• If the goal here is to make a statement to which one can assign probability 1, how about this: something exists. That would be quite difficult to contradict (albeit it has been done by non-realists).

• Is “exist” even a meaningful term? My probability on that is highish but nowhere near unity.

• “Exist” is meaningful in the sense that “true” is meaningful, as described in EY’s The Simple Truth. I’m not really sure why anyone cares about saying something with probability 1, though; no matter how carefully you think about it, there’s always the chance that in a few seconds you’ll wake up and realize that even though it seems to make sense now, you were actually spouting gibberish. Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.

• Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.

I must raise an objection to that last point; there are one or more domains on which this does not hold. For instance, my belief that A→A is easily 100%, and there is no way for this to be a mistake. If you don’t believe me, substitute A = “2+2=4”. Similarly, I can never be mistaken in saying “something exists”, because for me to be mistaken about it, I’d have to exist.

• my belief that A→A is easily 100%

You could be mistaken about logic; a demon might be playing tricks on you, etc.

Similarly, I can never be mistaken in saying “something exists” because for me to be mistaken about it, I’d have to exist.

You can say “Sherlock Holmes was correct in his deduction.” That does not rely on Sherlock Holmes actually existing; it’s just noting a relation between one concept (Sherlock Holmes) and another (a correct deduction).

• You could be mistaken about logic; a demon might be playing tricks on you, etc.

What would you say, if asked to defend this possibility?

You can say “Sherlock Holmes was correct in his deduction.” That does not rely on Sherlock Holmes actually existing; it’s just noting a relation between one concept (Sherlock Holmes) and another (a correct deduction).

This is true, but (at least if we’re channeling Descartes) the question is whether or not we can raise a doubt about the truth of the claim that something exists. Our ability to have this thought doesn’t prove that it’s true, but it may well close off any doubts.

• What would you say, if asked to defend this possibility?

The complexity-based prior for living in such a world is very low, but non-zero. Consequently, you can’t be straight-1.0 convinced it’s not the case.

A teapot could actually be an alien spaceship masquerading as a teapot lookalike. That possibility is heavily, heavily discounted against using your favorite version of everyone’s favorite heuristic (Occam’s Razor). However, since it can be formulated (with a lot of extra bits), its probability is non-zero. Enough to reductio the “easily 100%”.

• The complexity-based prior for living in such a world is very low, but non-zero. Consequently, you can’t be straight-1.0 convinced it’s not the case.

Well, this is a restatement of the claim that it’s possible to be deceived about tautologies, not a defense of that claim. But your post clarifies the situation quite a lot, so maybe I can rephrase my request: how would you defend the claim that it is possible (with any arbitrarily large number of bits) to formulate a world in which a contradiction is true?

I admit I for one don’t know how I would defend the contrary claim, that no such world could be formulated.

• formulate a world in which a contradiction is true?

Prob­a­bly heav­ily de­pends on the mean­ing of “for­mu­late”, “con­tra­dic­tion” and “true”. For ex­am­ple, what’s the differ­ence be­tween “imag­ine” and “for­mu­late”? In other words, with “any ar­bi­trar­ily large num­ber of bits” you can likely ac­cu­rately “for­mu­late” a model of the hu­man brain/​mind which imag­ines “a world in which a con­tra­dic­tion is true”.

• I mean what­ever Ka­woomba meant, and so he’s free to tell me whether or not I’m ask­ing for some­thing im­pos­si­ble (though that would be a dan­ger­ous line for him to take).

In other words, with “any ar­bi­trar­ily large num­ber of bits” you can likely ac­cu­rately “for­mu­late” a model of the hu­man brain/​mind which imag­ines “a world in which a con­tra­dic­tion is true”.

Is your thought that un­less we can (with cer­tainty) rule out the pos­si­bil­ity of such a model or rule out the pos­si­bil­ity that this model rep­re­sents a world in which a con­tra­dic­tion is true, then we can’t call our­selves cer­tain about the law of non-con­tra­dic­tion? I grant that the falsity of that dis­junct seems far from cer­tain.

• [in] a world in which a con­tra­dic­tion is true, then we can’t call our­selves cer­tain about the law of non-con­tra­dic­tion?

I am not a math­e­mat­i­cian, but to me the law of non-con­tra­dic­tion is some­thing like a the­o­rem in propo­si­tional calcu­lus, un­re­lated to a par­tic­u­lar world. A propo­si­tional calcu­lus may or may not be a use­ful model, de­pends on the ap­pli­ca­tion, of course. But I sup­pose this is stray­ing dan­ger­ously close to the dis­cus­sion of in­stru­men­tal­ism, which led us nowhere last time we had it.

• It seems more like an axiom to me than a theorem: I know of no way to argue for it that doesn’t presuppose it. So I kind of read Aristotle for a living (don’t laugh), and he takes an interesting shot at arguing for the LNC: he seems to say it’s simply impossible to formulate a contradiction in thought, or even in speech. The sentence ‘this is a man and not a man’ just isn’t a genuine proposition.

That doesn’t seem su­per plau­si­ble, how­ever in­ter­est­ing a strat­egy it is, and I don’t know of any­thing bet­ter.

• he seems to say it’s simply impossible to formulate a contradiction in thought, or even in speech. The sentence ‘this is a man and not a man’ just isn’t a genuine proposition.

This seems like a version of “no true Scotsman”. Anyway, I don’t know much about Aristotle’s ideas, but what I do know, mostly physics-related, is either outright wrong or has been obsolete for the last 500 years. If this is any indication, his ideas on logic have probably long been superseded by first-order logic or something, and his ideas on language and meaning by something else reasonably modern. Maybe he is fun to read from a historical or literary perspective, I don’t know, but I doubt that it adds anything to one’s understanding of the world.

• This seems like a ver­sion of “no true Scots­man”.

Well, his argument consists of more than the above assertion (he lays out a bunch of independent criteria for the expression of a thought, and argues that contradictions can never satisfy them). However, I can’t disagree with you on this: no one reads Aristotle to learn about physics or logic or biology or what-have-you. To say that modern versions are more powerful, more accurate, and more useful is a massive understatement. People still read Aristotle as a relevant ethical philosopher, though I have my doubts as to how useful he can be, given that he was an advocate for slavery, sexism, infanticide, etc. Not a good start for an ethicist.

On the other hand, al­most no con­tem­po­rary lo­gi­ci­ans think con­tra­dic­tions can be true, but no one I know of has an ar­gu­ment for this. It’s just a prim­i­tive.

• Sure, it sounds pretty reasonable. I mean, it’s an elementary facet of logic, and there’s no way it’s wrong. But are you really, 100% certain that there is no possible configuration of your brain which would result in you holding that A implies not A, while feeling the exact same subjective feeling of certainty (along with being able to offer logical proofs, such that you feel like it is a trivial truth of logic)? Remember that our brains are not perfect logical computers; they can make mistakes. Trivially, there is some probability of your brain entering into any given state for no good reason at all due to quantum effects. Ridiculously unlikely, but not literally 0. Unless you believe with absolute certainty that it is impossible to have the subjective experience of believing that A implies not A in the same way you currently believe that A implies A, you can’t say that you are literally 100% certain. You will feel 100% certain, but this is a very different thing from actually, literally possessing 100% certainty. Are you certain, 100%, that you’re not brain-damaged and wildly misinterpreting the entire field of logic? When you posit certainty, there can be literally no way that you could ever be wrong. Literally none. That’s an insanely hard thing to prove, and subjective experience cannot possibly get you there. You can’t be certain about what experiences are possible, and that puts some amount of uncertainty into literally everything else.

• So by that logic I should as­sign a nonzero prob­a­bil­ity to ¬(A→A). And if some­thing has nonzero prob­a­bil­ity, you should bet on it if the pay­out is suffi­ciently high. Would you bet any amount of money or utilons at any odds on this propo­si­tion? If not, then I don’t be­lieve you truly be­lieve 100% cer­tainty is im­pos­si­ble. Also, 100% cer­tainty can’t be im­pos­si­ble, be­cause im­pos­si­bil­ity im­plies that it is 0% likely, which would be a self-defeat­ing ar­gu­ment. You may find it highly im­prob­a­ble that I can truly be 100% cer­tain. What prob­a­bil­ity do you as­sign to me be­ing able to as­sign 100% prob­a­bil­ity?

• Yes, 0 is no more a prob­a­bil­ity than 1 is. You are cor­rect that I do not as­sign 100% cer­tainty to the idea that 100% cer­tainty is im­pos­si­ble. The propo­si­tion is of pre­cisely that form though, that it is im­pos­si­ble—I would ex­pect to find that it was sim­ply not true at all, rather than ex­pect to see it al­most always hold true but some­times break down. In any case, yes, I would be will­ing to make many such bets. I would hap­pily ac­cept a bet of one penny, right now, against a source of effec­tively limitless re­sources, for one ex­am­ple.

As to what prob­a­bil­ity you as­sign; I do not find it in the slight­est im­prob­a­ble that you claim 100% cer­tainty in full hon­esty. I do ques­tion, though, whether you would make liter­ally any bet offered to you. Would you take the other side of my bet; hav­ing limitless re­sources, or a FAI, or some­thing, would you be will­ing to bet los­ing it in ex­change for a value roughly equal to that of a penny right now? In fact, you ought to be will­ing to risk los­ing it for no gain—you’d be in­differ­ent on the bet, and you get free sig­nal­ing from it.

• Would you take the other side of my bet; hav­ing limitless re­sources, or a FAI, or some­thing, would you be will­ing to bet los­ing it in ex­change for a value roughly equal to that of a penny right now? In fact, you ought to be will­ing to risk los­ing it for no gain—you’d be in­differ­ent on the bet, and you get free sig­nal­ing from it.

Indeed, I would bet the world (or many worlds) that (A→A) to win a penny, or even to win nothing but reinforced signaling. In fact, refusal to use 1 and 0 as probabilities can lead to being money-pumped (or at least exploited; I may be misusing the term “money-pump”). Let’s say you assign a 1/10^100 probability that your mind has a critical logic error of some sort, causing you to bound probabilities to the range (1/10^100, 1 - 1/10^100) (should be brackets but formatting won’t allow it). You can now be Pascal’s-mugged if the payoff offered is greater than the amount asked for by a factor of at least 10^100. If you claim the probability is less than 1/10^100 due to a leverage penalty or any other reason, you are admitting that your brain is capable of being more certain than the aforementioned number (and such a scenario can be set up for any such number).

• That’s not how de­ci­sion the­ory works. The bounds on my prob­a­bil­ities don’t ac­tu­ally ap­ply quite like that. When I’m mak­ing a de­ci­sion, I can use­fully talk about the ex­pected util­ity of tak­ing the bet, un­der the as­sump­tion that I have not made an er­ror, and then mul­ti­ply that by the odds of me not mak­ing an er­ror, adding the fi­nal re­sult to the ex­pected util­ity of tak­ing the bet given that I have made an er­ror. This will give me the cor­rect ex­pected util­ity for tak­ing the bet, and will not re­sult in me tak­ing stupid bets just be­cause of the chance I’ve made a logic er­ror; af­ter all, given that my en­tire rea­son­ing is wrong, I shouldn’t ex­pect tak­ing the bet to be any bet­ter or worse than not tak­ing it. In shorter terms: EU(ac­tion) = EU(ac­tion & ¬er­ror) + EU(ac­tion & er­ror); also EU(ac­tion & er­ror) = EU(anyOtherAc­tion & er­ror), mean­ing that when I com­pare any 2 ac­tions I get EU(ac­tion) - EU(oth­erAc­tion) = EU(ac­tion & ¬er­ror) - EU(oth­erAc­tion & ¬er­ror). Even though my prob­a­bil­ity es­ti­mates are af­fected by the pres­ence of an er­ror fac­tor, my de­ci­sions are not. On the sur­face this seems like an ar­gu­ment that the dis­tinc­tion is some­how triv­ial or pointless; how­ever, the crit­i­cal differ­ence comes in the fact that while I can­not pre­dict the na­ture of such an er­ror ahead of time, I can po­ten­tially re­cover from it iff I as­sign >0 prob­a­bil­ity to it oc­cur­ring. Other­wise I will never ever as­sign it any­thing other than 0, no mat­ter how much ev­i­dence I see. In the in­cred­ibly im­prob­a­ble event that I am wrong, given ex­traor­di­nary amounts of ev­i­dence I can be con­vinced of that fact. And that will cause all of my other prob­a­bil­ities to up­date, which will cause my de­ci­sions to change.

• Your calcu­la­tions aren’t quite right. You’re treat­ing `EU(ac­tion)` as though it were a prob­a­bil­ity value (like `P(ac­tion)`). `EU(ac­tion)` would be more log­i­cally writ­ten `E(util­ity | ac­tion)`, which it­self is an in­te­gral over `util­ity * P(util­ity | ac­tion)` for `util­ity∈(-∞,∞)`, which, due to lin­ear­ity of `*` and in­te­grals, does have all the nor­mal iden­tities, like

`E(util­ity | ac­tion) = E(util­ity | ac­tion, e) * P(e | ac­tion) + E(util­ity | ac­tion, ¬e) * P(¬e | ac­tion)`.

In this case, if you do ex­pand that out, us­ing `p<<1` for the prob­a­bil­ity of an er­ror, which is in­de­pen­dent of your ac­tion, and as­sum­ing `E(util­ity|ac­tion1,er­ror) = E(util­ity|ac­tion2,er­ror)`, you get `E(util­ity | ac­tion) = E(util­ity | er­ror) * p + E(util­ity | ac­tion, ¬er­ror) * (1 - p)`. Or for the differ­ence be­tween two ac­tions, `EU1 - EU2 = (EU1′ - EU2′) * (1 - p)` where `EU1′, EU2′` are the ex­pected util­ities as­sum­ing no er­rors.
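The identity `EU1 - EU2 = (EU1′ - EU2′) * (1 - p)` is easy to sanity-check numerically. A minimal sketch, with the error probability and the per-action utilities being made-up illustrative numbers:

```python
# Check: if E(utility | action, error) is the same for every action, the
# expected-utility gap between two actions shrinks by exactly (1 - p),
# so the *ranking* of actions is unchanged by the error term.
p = 1e-6                    # P(fatal reasoning error); illustrative
eu_err = 0.0                # E(utility | error), equal for all actions by assumption
eu1_ok, eu2_ok = 10.0, 3.0  # E(utility | action_i, no error); made-up values

eu1 = eu_err * p + eu1_ok * (1 - p)
eu2 = eu_err * p + eu2_ok * (1 - p)

# The difference factors as (EU1' - EU2') * (1 - p).
assert abs((eu1 - eu2) - (eu1_ok - eu2_ok) * (1 - p)) < 1e-12
```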

Any­way, this seems like a good model for “there’s a su­per­in­tel­li­gent de­mon mess­ing with my head” kind of er­ror sce­nar­ios, but not so much for the ev­ery­day kind of math er­rors. For ex­am­ple, if I work out in my head that 51 is a prime num­ber, I would ac­cept an even odds bet on “51 is prime”. But, if I knew I had made an er­ror in the proof some­where, it would be a bet­ter idea not to take the bet, since less than half of num­bers near 50 are prime.

• Right, I didn’t quite work all the math out pre­cisely, but at least the con­clu­sion was cor­rect. This model is, as you say, ex­clu­sively for fatal logic er­rors; the sorts where the law of non-con­tra­dic­tion doesn’t hold, or some­thing equally un­think­able, such that ev­ery­thing you thought you knew is in­val­i­dated. It does not ap­ply in the case of nor­mal math er­rors for less ob­vi­ous con­clu­sions (well, it does, but your ex­pected util­ity given no er­rors of this class still has to ac­count for er­rors of other classes, where you can still make other pre­dic­tions).

• In fact, re­fusal to use 1 and 0 as prob­a­bil­ities can lead to be­ing money-pumped (or at least ex­ploited, I may be mi­sus­ing the term “money-pump”)

The us­age of “money-pump” is cor­rect.

(Do note, how­ever, that us­ing 1 and 0 as prob­a­bil­ities when you in fact do not have that much cer­tainty also im­plies the pos­si­bil­ity for ex­ploita­tion, and un­like the money pump sce­nario you do not even have the op­por­tu­nity to learn from the first ex­ploita­tion and self cor­rect.)

• Also, 100% cer­tainty can’t be im­pos­si­ble, be­cause im­pos­si­bil­ity im­plies that it is 0% likely, which would be a self-defeat­ing ar­gu­ment. You may find it highly im­prob­a­ble that I can truly be 100% cer­tain. What prob­a­bil­ity do you as­sign to me be­ing able to as­sign 100% prob­a­bil­ity?

When I say 100% cer­tainty is im­pos­si­ble, I mean that there are no cases where as­sign­ing 100% to some­thing is cor­rect, but I have less than 100% con­fi­dence in this claim. It’s similar to the claim that it’s im­pos­si­ble to travel faster than the speed of light.

• A lot of this is a framing problem. Remember that anything we’re discussing here is in human terms, not (for example) raw Universal Turing Machine tape-streams with measurable Kolmogorov complexities. So when you say “what probability do you assign to me being able to assign 100% probability”, you’re abstracting a LOT of little details that otherwise need to be accounted for.

I.e., if I’m com­put­ing prob­a­bil­ities as a set of propo­si­tions, each of which is a com­putable func­tion that might pre­dict the uni­verse and a prob­a­bil­ity that I as­sign to whether it ac­cu­rately does so, and in all of those com­putable func­tions my se­man­tic rep­re­sen­ta­tion of ‘prob­a­bil­ity’ is en­coded as log odds with finite pre­ci­sion, then your ques­tion trans­lates into a func­tion which tra­verses all of my pos­si­ble wor­lds, looks to see if one of those prob­a­bil­ities that refers to your self-as­signed prob­a­bil­ity is en­coded as the num­ber ‘INFINITY’, mul­ti­plies that by the prob­a­bil­ity that I as­signed that world be­ing the cor­rect one, and then tab­u­lates.

Since “en­coded as log odds with finite pre­ci­sion” and “en­coded as the num­ber ‘INFINITY’” are not si­mul­ta­neously pos­si­ble given cer­tain en­cod­ing schemes, this re­ally re­solves it­self to “do I en­code float­ing-point num­bers us­ing a man­tissa no­ta­tion or other scheme that al­lows for val­ues like +INF/​-INF/​+NaN/​-NaN?”

Which sounds NOTHING like the question you asked, but the answers do happen to correlate perfectly (to within the precision allowed by the language we’re using to communicate right now).

Did that make sense?
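The encoding point above can be made concrete. A small sketch, assuming IEEE-754 doubles (which do have an `INF` value) and probabilities stored as log odds, where certainty corresponds to infinite log odds; the `log_odds` helper is mine:

```python
import math

def log_odds(p):
    # Log odds are finite only for probabilities strictly between 0 and 1.
    return math.log(p / (1 - p))

assert log_odds(0.5) == 0.0    # even odds encode as 0
assert math.isinf(math.inf)    # the float format can represent +INF itself
# ...but probability exactly 1 has no finite log-odds encoding:
try:
    log_odds(1.0)              # attempts 1.0 / 0.0
except ZeroDivisionError:
    pass
```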

• If any agent within a sys­tem were able to as­sign a 1 or 0 prob­a­bil­ity to any be­lief about that sys­tem be­ing true, that would mean that the map-ter­ri­tory di­vide would have been bro­ken.

However, since that agent can never rule out being mistaken about its own ontology or its own reasoning mechanism, following an undetected (if vanishingly unlikely) internal failure, it can never gain final certainty about any feature of the territory, although it can get arbitrarily close.

• Is “ex­ist” even a mean­ingful term?

My at­tempts to taboo “ex­ist” led me to in­stru­men­tal­ism, so be­ware.

• My at­tempts to taboo “ex­ist” led me to in­stru­men­tal­ism, so be­ware.

Is in­stru­men­tal­ism such a bad thing, though? It seems like in­stru­men­tal­ism is a bet­ter gen­er­al­iza­tion of Bayesian rea­son­ing than sci­en­tific re­al­ism, and it ap­proaches sci­en­tific re­al­ism asymp­tot­i­cally as your prior for “some­thing ex­ists” ap­proaches 1. (Then again, I may have been thor­oughly cor­rupted in my youth by the works of Robert Wil­son).

• Is in­stru­men­tal­ism such a bad thing, though? It seems like in­stru­men­tal­ism is a bet­ter gen­er­al­iza­tion of Bayesian rea­son­ing than sci­en­tific realism

If you take instrumentalism seriously, then you remove external “reality” as meaningless, and only talk about inputs (and maybe outputs) and models. Basically, in the diagram from Update then Forget, you remove the top row of W’s, leaving dangling arrows where “objective reality” used to be. This is not very aesthetically satisfactory, since the W’s link current actions to future observations, and without them the causality is not apparent or even necessary. This is not necessarily a bad thing, if you take care to avoid the known AIXI pitfalls of wireheading and anvil dropping. But this is certainly not one of the more popular ontologies.

• What ev­i­dence con­vinces you now that some­thing ex­ists? What would the world look like if it were not the case that some­thing ex­isted?

Imag­ine your­self as a brain in a jar, with­out the brain and the jar. Would you re­main con­vinced that some­thing ex­isted if con­fronted with a world that had ev­i­dence against that propo­si­tion?

• I’m to­tally miss­ing the “N in­de­pen­dent state­ments” part of the dis­cus­sion; that seems like a to­tal non-se­quitur to me. Can some­one point me at some kind of ex­pla­na­tion?

-Robin

• It’s an oddly fre­quen­tist ap­proach to Bayesi­anism.

• First, an individual particle can briefly exceed the speed of light; the group velocity cannot. Go read up on Cerenkov radiation: It’s the blue glow created by (IIRC) neutrons briefly breaking through c, then slowing down. The decrease in energy registers as emitted blue light.

Break­ing through the speed of light in a medium, but re­main­ing un­der c (the speed of light in a vac­uum).

• For one rea­son, again, if we’re in any con­ven­tional (i.e. not para­con­sis­tent) logic, ad­mit­ting any con­tra­dic­tion en­tails that I can prove any propo­si­tion to be true.

Yes, but con­di­tioned on the truth of some state­ment P&~P, my prob­a­bil­ity that logic is para­con­sis­tent is very high.

Bayesi­anism is all about ra­tios of prob­a­bil­ities, yes, but we can write these ra­tios with­out ever us­ing the P(|) no­ta­tion if we please.

• Wait a sec­ond, con­di­tional prob­a­bil­ities aren’t prob­a­bil­ities? Huhhh? Isn’t Bayesi­anism all con­di­tional prob­a­bil­ities?

• I’m 99 percent sure that the statement “consciousness exists/is” has a PROBABILITY 1 of being true. All of the specificities we associate with it certainly do not, but the fact that something is experiencing something seems irrefutable. Can someone concoct a line of reasoning that would prove this wrong, say, similar to 2 + 2 = 3?

• I’m not sure what PROBABILITY means the way you’re us­ing it.

Can some­one con­coct a line of rea­son­ing that would prove this wrong, say similar to 2 + 2 = 3

Are dogs con­scious? Ants? Plants?

the fact that something is experiencing something seems irrefutable.

In the case of con­scious­ness, this does seem valid (to me), to the ex­tent that some­thing I don’t un­der­stand well enough to cre­ate, can be said to ex­ist.* How­ever, not ev­ery­thing peo­ple say about their ex­pe­rience should be taken with­out some salt—the liter­a­ture on bi­ases (repli­ca­tions aside) claims that 1) there are ways to ma­nipu­late peo­ple’s de­ci­sions where 2) they claim said thing which ‘had a mea­surable effect’ had no effect.

*That is, if we’re not con­scious, then what would con­scious­ness mean? The difficulty of rul­ing whether this ap­plies, or to what de­gree it does, is how­ever, less clear.

• It’s not at all hard for a mathematician to come up with arbitrarily large numbers of statements that have about the same confidence as 2+2=4. There are lots of ways. Perhaps the most obvious is “n+2 = (n+1)+1” for arbitrarily large whole numbers n. It’s rather silly to talk about how many lifetimes it would take to say these statements, because there they are, in 2 seconds.

I suppose the anticipated response would be to question whether these are independent statements. Why would they not be? If we are anticipating that 2+2 may not be 4, I don’t see how we can say with certainty that any similar statement in arithmetic would imply any other. But perhaps it would be clearer if I changed the formula for the statements to this: “2+2 is not equal to n”, for arbitrarily large whole numbers n greater than 4. Of course this is no real difference, except it now looks a lot like an argument for saying that a 1-in-a-million probability is sensible in cases where you have 1 million easily enumerated cases.

For example, say I claim that the chance of winning a lottery by guessing a 6-digit number is 1 in a million. By the logic of the article this is a preposterous, egotistical notion unless I can come up with a million or so other statements of similar confidence. Easy enough: “the winning number is n” for each number 1 through 1 million. I think this has been used as an example in another article somewhere. These 1 million statements have a correspondence with similar statements like “2+2 is not 5”, etc.: What is 2+2? Is it 4? Is it 5? Is it 6? And so on. If the lottery example counts as “independent” statements, then so does the 2+2 series. And if they do not, then are we saying it’s egotistical to demand you know what the probability of the lottery is?

Incidentally, the lottery example isn’t a set of independent statements in the probability sense. Knowing whether one statement is true or false gives me information about all the others: e.g., if you tell me the winning number is 1, then I know it’s not 2. So what is the meaning of the word “independent” when asking for independent statements in this article? It seems to be some vague sense of not having much to do with each other somehow. Is it ever possible to have a large number of statements like this?
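That non-independence is easy to exhibit directly. A minimal sketch with a 10-ticket lottery (the numbers are mine, chosen only for illustration):

```python
from fractions import Fraction

n = 10  # a small lottery: the winning number is one of 1..n, uniformly

# Unconditional probability of the statement "the winner is not 2":
p_not_2 = Fraction(n - 1, n)
# The same statement, conditioned on learning "the winner is 1":
p_not_2_given_1 = Fraction(1)

# Learning one statement changes the probability of another,
# so the statements are not independent in the probability sense.
assert p_not_2_given_1 != p_not_2
```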

In the previous essay in this series, evidence acceptable for thinking 2+2=3 was discussed. One example was that the person might be hypnotized. To me that seems like the most realistic explanation, and certainly it’s a likely one. That’s great, but if you’ve been hypnotized to think 2+2=3 like that, isn’t it suddenly much more likely that you might have been hypnotized to think any number of other similar-confidence statements are true? So doesn’t this challenge the real independence of all those supposedly independent statements of similar confidence you might have made?

It seems like this word “in­de­pen­dent” is a prob­lem within the ar­ti­cle.

• Eliezer, what could convince you that Bayes’ Theorem itself was wrong? Can you properly adjust your beliefs to account for evidence if that adjustment is systematically wrong?

• First we’d have to at­tach a mean­ing to the claim, yes? I’ve seen ev­i­dence for var­i­ous claims about Bayes’ The­o­rem, in­clud­ing but prob­a­bly not limited to ‘Any work­able ex­ten­sion of logic to deal with un­cer­tainty will ap­prox­i­mate Bayes,’ and ‘Bayes works bet­ter in prac­tice than fre­quen­tist meth­ods’. De­cide which claim you want to talk about and you’ll know what ev­i­dence against it would look like.

(Halpern more or less ar­gues against the first one, but I’m look­ing at his ar­ti­cle and so far he just seems to be point­ing out Jaynes’ most com­mon­sen­si­cal re­quire­ments.)

• I in­tended the claim posed here about tests and pri­ors. It is posed as
p(A|X) = p(X|A)p(A) / [p(X|A)p(A) + p(X|~A)p(~A)]

But does it make sense for that to be wrong? It is a theorem, unlike the statement 2+2=4. Maybe some sort of way to show that the axioms and definitions that are used to prove Bayes’ Theorem are inconsistent, which is a pretty clear kind of proof. I’m not sure anymore that what I said has meaning. Well, thanks for the help.
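As an aside, the formula quoted above can be run directly. A minimal sketch; the base rate and error rates are made-up numbers, chosen only to exercise the formula:

```python
def posterior(p_a, p_x_given_a, p_x_given_not_a):
    """Bayes' theorem: p(A|X) = p(X|A)p(A) / [p(X|A)p(A) + p(X|~A)p(~A)]."""
    numerator = p_x_given_a * p_a
    return numerator / (numerator + p_x_given_not_a * (1 - p_a))

# Illustrative numbers: 1% prior, 80% true-positive rate, 9.6% false-positive rate.
# A positive test raises the probability from 1% to only about 7.8%.
print(posterior(0.01, 0.80, 0.096))
```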

• It is a the­o­rem, un­like the state­ment 2+2=4.

Uh, 2+2=4 is most definitely a the­o­rem. A very sim­ple and ob­vi­ous the­o­rem, yes. But a the­o­rem.

• For Godel-Bayes is­sues, you can start with the re­sponses to my post on the sub­ject. (I’ve since learned and re­mem­bered more about Godel.)

We should have the abil­ity to talk about sub­jec­tive un­cer­tainty in, at the very least, par­tic­u­lar proofs and prob­a­bil­ities. I don’t know that we can. But I like the fol­low­ing ar­gu­ment, which I re­call see­ing here some­where:

If there ex­ists a perfect prob­a­bil­ity calcu­la­tion based on a set of back­ground in­for­ma­tion, it must take this un­cer­tainty into ac­count. There­fore, ap­ply­ing this un­cer­tainty again to the an­swer would mean dou­ble-count­ing the ev­i­dence, which is strictly ver­boten. We there­fore can­not use this line of rea­son­ing to pro­duce a con­tra­dic­tion. Bar­ring other ar­gu­ments, we can as­sume the un­cer­tainty equals a re­ally small frac­tion.

• E.g., sup­pose a guy comes out to­mor­row with a proof of the Rie­mann Hy­poth­e­sis. What are the chances he is wrong? Surely not zero.

But the chance that the Rie­mann Hy­poth­e­sis it­self is wrong, if it has a proof? Well, that kinda seems like zero. (But then, how would we know that? It does seem like we have to filter through our un­re­li­able senses.)

• Hrmm… I’m still tak­ing high school ge­om­e­try, so “in­finite set of ax­ioms” doesn’t re­ally make sense yet. I’ll try to re-read that thread once I’ve started col­lege-level math.

• If you put a chair next to an­other chair, and you found that there were three chairs where be­fore there was one, would it be more likely that 1 + 1 = 3 or that ar­ith­metic is not the cor­rect model to de­scribe these chairs? A true math­e­mat­i­cal propo­si­tion is a pure con­duit be­tween its premises and ax­ioms and its con­clu­sions.

But note that you can never be quite com­pletely cer­tain that you haven’t made any mis­takes. It is un­cer­tain whether “S0 + S0 = SS0” is a true propo­si­tion of Peano ar­ith­metic, be­cause we may all co­in­ci­den­tally have got­ten some­thing hilar­i­ously wrong.

This is why, when an ex­per­i­ment does not go as pre­dicted, the first re­course is to check that your math has been done cor­rectly.

• Assert a con­fi­dence of (1 − 1/​googol­plex) and your ego far ex­ceeds that of men­tal pa­tients who think they’re God.

For the record, I assign a probability larger than 1/googolplex to the possibility that one of the mental patients actually is God.

• It doesn’t make sense to say that this sub­jec­tive per­sonal prob­a­bil­ity (which, by the way, he chose to calcu­late based on a tiny sub­set of the vast amounts of in­for­ma­tion he has in his mind) based on his ob­served ev­i­dence is some­how the ab­solute prob­a­bil­ity that, say, evolu­tion is “true”.

Where does he? I as­sume as a Bayesian he would deny the re­al­ity of any such “ab­solute prob­a­bil­ity”.

• There are such things as ob­jec­tive Bayesi­ans, though I’m pretty sure Eliezer is a sub­jec­tive Bayesian.

• Sub­jec­tively ob­jec­tive, by his words.

• No, no, no. Three prob­lems, one in the anal­ogy and two in the prob­a­bil­ities.

First, an individual particle can briefly exceed the speed of light; the group velocity cannot. Go read up on Cerenkov radiation: It’s the blue glow created by (IIRC) neutrons briefly breaking through c, then slowing down. The decrease in energy registers as emitted blue light.

Se­cond: con­di­tional prob­a­bil­ities are not nec­es­sar­ily given by a ra­tio of den­si­ties. You’re con­di­tion­ing on (or work­ing with) events of mea­sure-zero. Th­ese puz­zlers are why mea­sure the­ory ex­ists—to step around the seem­ing ‘in­con­sis­ten­cies’.

Third: The prob­a­bil­ity of a prob­a­bil­ity is su­perflu­ous. Prob­a­bil­ities are (thanks to Kol­mogorov) just the ex­pec­ta­tion of in­di­ca­tor vari­ables. Thus P(P()=1) = E(I(E(I())=1)) = 0 or 1; the ran­dom­ness is all elimi­nated by the in­side ex­pec­ta­tion.

Leave the mus­ings on prob­a­bil­ities to the statis­ti­ci­ans; they’ve already thought about these sup­posed para­doxes.

• Cerenkov ra­di­a­tion: It’s the blue glow cre­ated by (IIRC) neu­trons briefly break­ing through c

I thought it was due to neu­trons ex­ceed­ing the phase ve­loc­ity of light in a medium, which is in­vari­ably slower than c. The neu­tron is still go­ing slower than c:

Wikipedia

While elec­tro­dy­nam­ics holds that the speed of light in a vac­uum is a uni­ver­sal con­stant (c), the speed at which light prop­a­gates in a ma­te­rial may be sig­nifi­cantly less than c. For ex­am­ple, the speed of the prop­a­ga­tion of light in wa­ter is only 0.75c. Mat­ter can be ac­cel­er­ated be­yond this speed (al­though still to less than c) dur­ing nu­clear re­ac­tions and in par­ti­cle ac­cel­er­a­tors. Cherenkov ra­di­a­tion re­sults when a charged par­ti­cle, most com­monly an elec­tron, trav­els through a di­elec­tric (elec­tri­cally po­lariz­able) medium with a speed greater than that at which light prop­a­gates in the same medium.

• Q: let’s say I offer you a choice be­tween (a) and (b).

a. Tomorrow morning you can flip that coin in your hand, and if it comes up heads, then I’ll give you a dollar.

b. Tomorrow morning, if it is raining, then I will give you a dollar.

If you choose (b), then your probability for rain tomorrow morning must be higher than 1/2.

Well… kinda. It could just be that if it rains, you will need to buy a $1 umbrella, but if it doesn’t rain then you don’t need money at all. It would be nice if we had some sort of measurement of reward that didn’t depend on the situation you find yourself in. Decision theorists like to call this “utility.”

I’m not sure if it’s silly to try to define prob­a­bil­ities in terms of de­ci­sion the­ory rather than vice versa. ET Jaynes defines prob­a­bil­ities as real num­bers that we as­sign to propo­si­tions rep­re­sent­ing a “de­gree of plau­si­bil­ity,” and satis­fy­ing some desider­ata. Eli has lately been talk­ing about prob­a­bil­ities in terms of the frac­tion of state­ments as­signed that prob­a­bil­ity which are true, but I don’t think he con­sid­ers this a defi­ni­tion of prob­a­bil­ity (I hope not; it would be a bad defi­ni­tion).

Any­way, I’ll say that what makes some­thing a prob­a­bil­ity is not any prop­erty of the thing it refer­ences; it’s what you do with it. If you use it to weight hy­pothe­ses in ex­pected util­ity calcu­la­tions which de­ter­mine your ac­tions, then it’s a prob­a­bil­ity.

• Should there ever be an ex­am­ple which vi­o­lates this a pri­ori as­ser­tion, it is sim­ply held to be un­real, be­cause re­al­ity is a con­struct of con­sen­sus.

I hope the gen­tle­man got bet­ter.

• Z._M._Davis: No. Why? Be­cause I said so ;-)

Point taken, I need to better constrain the problem. So, how about: “It must be able to sustain transfer of information between two autonomous agents.” But then I’ve used the concepts of “two” and “autonomous agent”. Eek!

So a bet­ter speci­fi­ca­tion would be, “The world must con­tain in­for­ma­tion.” Or, more rigor­ously, “The world must have ob­serv­able phe­nom­ena that aid in pre­dict­ing fu­ture phe­nom­ena.”

Now, can such a simulated world exist? And is there a whole branch of philosophy addressing this problem that I need to brush up on?

• Can some­one write/​has some­one writ­ten a pro­gram that simu­lates ex­is­tence in a world in which 2+2=4 (and the rest of Peano ar­ith­metic) is use­less i.e. it cor­re­sponds to no ob­serv­able phe­nomenon in that world?

• What would such a simu­la­tion look like?

• The propo­si­tion in which I re­pose my con­fi­dence is the propo­si­tion that “2 + 2 = 4 is always and ex­actly true”, not the propo­si­tion “2 + 2 = 4 is mostly and usu­ally true”.

I have confused the map with the territory. Apologies. Revised claim: I believe, with 99.973% probability, that P cannot equal 1, 100% of the time! I believe very strongly that I am correct, and if I am correct, I am completely correct. But I’m not sure. Much better.

I suppose we should be asking ourselves why we try so hard to retain the ability to be 100% sure. A long, long list of reasons springs to mind…

• (Wak­ing up.) Sure, if I thought I had ev­i­dence (how) of P&~P, that would be pretty good rea­son to be­lieve a para­con­sis­tent logic was true (ex­cept what does true mean in this con­text? not just about log­ics, but about para­con­sis­tent ones!!)

But if that ever hap­pened, if we went there, the rules for be­ing ra­tio­nal would be so rad­i­cally changed that there wouldn’t nec­es­sar­ily be good rea­son to be­lieve that one has to up­date one’s prob­a­bil­ities in that way. (Per­haps one could say the prob­a­bil­ity of the law of non-con­tra­dic­tion be­ing true is both 1 and 0? Who knows?)

I think the prob­lem with tak­ing a high prob­a­bil­ity that logic is para­con­sis­tent is that all other be­liefs stop work­ing. I don’t know how to think in a para­con­sis­tent logic. And I doubt any­one else does ei­ther. (Can you get Bayes Rule out of a para­con­sis­tent logic? I doubt it. I mean, maybe… who knows?)

• Stuart: When I said I agreed 99.9999% with “P(P is never equal to 1)”, it doesn’t mean that I feel I could produce such a list—just that I have a very high belief that such a list could exist.

So, using Eliezer’s logic, would you expect that one time in a million, you’d get this wrong, and P = 1? I don’t need you to produce a list. This is a case where no number of 9s will sort you out—if you assign a probability less than 1, you expect to be in error at some point, which leaves you up the creek. If I’m making a big fat error (and I fear I may be), someone please set me straight.

• Eliezer, I want to compliment you on this post. But I would suggest that you apply it more generally, not only to mathematics. For example, it seems to me that any of us should be (or rather, could be after thinking about it for a while) more sure that 53 is a prime number than that a creationist with whom we disagree is wrong. This seems to imply that our certainty in the theory of evolution shouldn’t be more than 99.99%, according to your figure, definitely less than a string of nines as long as the Bible (as you have rhetorically suggested in the past).

• A string of nines as long as the Bible is re­ally, re­ally long.

But if we aren’t will­ing to as­sign prob­a­bil­ities over some ar­bi­trary limit (other than 1 it­self), we’ve got some very se­ri­ous prob­lems in our episte­mol­ogy. I would as­sign a prob­a­bil­ity to the Modern Syn­the­sis some­where around 0.99999999999999 my­self.

If proposition An is the proposition “the nth person gets struck by lightning tomorrow,” then consider the following conjunction, n going of course from 1 to 7 billion: P(A1 & A2 & … & A7e9). Now consider the negation of this conjunction: P(~(A1 & A2 & … & A7e9)).

I had damn well bet­ter be able to as­sign a prob­a­bil­ity greater than 0.9999 to the nega­tion, or else I couldn’t as­sign a prob­a­bil­ity lower than 0.0001 to the origi­nal con­junc­tion. And then I’m es­ti­mat­ing a 1/​10000 chance of ev­ery­one on Earth get­ting struck by light­ning on any given day, which means it should have hap­pened sev­eral times in the last cen­tury. Also, I can’t as­sign a prob­a­bil­ity of any one per­son be­ing struck as less than 1/​10000, be­cause ob­vi­ously that per­son must get struck if ev­ery­one is to be struck.
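The size of that conjunction is easy to check numerically. A sketch under an assumed independence model, with the per-person strike probability (1e-6) chosen purely for illustration:

```python
import math

n = 7_000_000_000        # people on Earth
p_single = 1e-6          # assumed daily strike probability for one person

# Under independence, P(everyone is struck tomorrow) = p_single ** n.
# That underflows any float, so work with its base-10 logarithm instead.
log10_conjunction = n * math.log10(p_single)
print(log10_conjunction)  # roughly -4.2e10: absurdly far below 1/10000
```

So the conjunction is not merely below 0.0001; it is below 1 in 10 to the 42 billion, while each single conjunct stays vastly more probable than the conjunction, as it must.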

• Assert a con­fi­dence of (1 − 1/​googol­plex) and your ego far ex­ceeds that of men­tal pa­tients who think they’re God.

So we are con­sid­er­ing the pos­si­bil­ity of brain malfunc­tions, and deities chang­ing re­al­ity. Fine. But what is the use of hav­ing a strictly ac­cu­rate Bayesian rea­son­ing pro­cess when your brain is malfunc­tion­ing and/​or deities are chang­ing the pa­ram­e­ters of re­al­ity?

• Hah, I’ll let Descartes go (or condition him on a workable concept of existence—but that’s more of a spitball than the hardball I was going for).

But in an­swer to your non-con­tra­dic­tion ques­tion… I think I’d be epistem­i­cally en­ti­tled to just sneer and walk away. For one rea­son, again, if we’re in any con­ven­tional (i.e. not para­con­sis­tent) logic, ad­mit­ting any con­tra­dic­tion en­tails that I can prove any propo­si­tion to be true. And, gig­gle gig­gle, that in­cludes the propo­si­tion “the law of non-con­tra­dic­tion is true.” (Isn’t logic a beau­tiful thing?) So if this math­e­mat­i­cian thinks s/​he can ar­gue me into ac­cept­ing the nega­tion of the law of non-con­tra­dic­tion, and takes the fur­ther step of as­sert­ing any state­ment what­so­ever to which it pur­port­edly ap­plies (i.e. some P, for which P&~P, such as the white­ness of snow), then lo and be­hold, I get the law of non-con­tra­dic­tion right back.

I sup­pose if we wanted to split hairs, we could say that one can deny the law of non-con­tra­dic­tion with­out fur­ther as­sert­ing an ac­tual state­ment to which that de­nial ap­plies—i.e. ~(~(P&~P)) doesn’t have to en­tail the ex­is­tence of a state­ment P which is both true and false ((∃p)Np, where N stands for “is true and not true?” Abus­ing no­ta­tion? Never!) But then what would be the point of deny­ing the law?

(That be­ing said, what I’d ac­tu­ally do is stop long enough to listen to the ar­gu­ment—but I don’t think that com­mits me to chang­ing my zero prob­a­bil­ity. I’d listen to the ar­gu­ment solely in or­der to re­fute it.)

As for the very tiny cre­dence in the nega­tion of the law of non-con­tra­dic­tion (let’s just call it NNC), I won­der what the point would be, if it wouldn’t have any effect on any rea­son­ing pro­cess EXCEPT that it would cre­ate weird glitches that you’d have to dis­card? It’s as if you de­liber­ately loos­ened one of the spark plugs in your en­g­ine.

• There are, ap­par­ently, cer­tain Eastern philoso­phies that per­mit and even cel­e­brate log­i­cal con­tra­dic­tion. To what ex­tent this is metaphor­i­cal I couldn’t say, but I re­cently spoke to an ad­her­ent who quite firmly be­lieved that a given state­ment could be both true and false. After some ini­tial be­wil­der­ment, I ver­ified that she wasn’t talk­ing about state­ments that con­tained both true and false claims, or were in­for­mal and thus true or false un­der differ­ent in­ter­pre­ta­tions, but ac­tu­ally meant what she’d origi­nally seemed to mean.

I didn’t at first know how to ar­gue such a ba­sic ax­iom—it seemed like try­ing to talk a rock into con­scious­ness—but on re­flec­tion, I be­came in­creas­ingly un­cer­tain what her as­ser­tion would even mean. Does she, when she thinks “Hmm, this is both true and false” ac­tu­ally take any ac­tion differ­ent than I would? Does be­lief in NNC wrongly con­strain some sen­sory an­ti­ci­pa­tion? As Paul notes, need the law of non-con­tra­dic­tion hold when not mak­ing any ac­tual as­ser­tions?

All this is to say that the mat­ter which at first seemed very sim­ple be­came con­fus­ing along a num­ber of axes, and though Paul might call any one of these com­plaints “split­ting hairs” (as would I), he would prob­a­bly claim this with far less cer­tainty than his origi­nal 100% con­fi­dence in NNC’s false­hood: That is, he might be more open-minded about a com­mu­nity of math­e­mat­i­ci­ans ex­plain­ing why ac­tu­ally some par­tic­u­lar com­plaint isn’t split­ting hairs at all and is highly im­por­tant for some non-ob­vi­ous rea­sons and due to some fun­da­men­tal as­sump­tions be­ing con­fused it would be mis­lead­ing to call NNC ‘false’.

But more sim­ply, I think Paul may have failed to imag­ine how he would ac­tu­ally feel in the ac­tual situ­a­tion of a com­mu­nity of math­e­mat­i­ci­ans tel­ling him that he was wrong. Even more sim­ply, I think we can ex­trap­o­late a broader mis­take of peo­ple who are pre­sented with the ar­gu­ment against in­finite cer­tainty re­ply­ing with a par­tic­u­lar thing they’re cer­tain about, and claiming that they’re even more cer­tain about their thing than the last per­son to try a similar ar­gu­ment. Maybe the cor­rect gen­eral re­sponse to this is to just restate Eliezer’s rea­son­ing about any 100% prob­a­bil­ity sim­ply be­ing in the refer­ence class of other 100% prob­a­bil­ities, less than 100% of which are cor­rect.

• That would be Jain logic.

• (Note: This com­ment is not re­ally di­rected at Paul him­self, see­ing as he’s long gone, but at any­one who shares the sen­ti­ments he ex­presses in the above com­ment)

I think I’d be epistem­i­cally en­ti­tled to just sneer and walk away.

Note that there is al­most cer­tainly at least one per­son out there who is in­sane, drugged up, or oth­er­wise cog­ni­tively im­paired, who be­lieves that the Law of Non-Con­tra­dic­tion is in fact false, is com­pletely and in­tu­itively con­vinced of this “fact”, and who would sneer at any math­e­mat­i­cian who tried to con­vince him/​her oth­er­wise, be­fore walk­ing away. Do you in fact as­sign 100% prob­a­bil­ity to the hy­poth­e­sis that you are not that drugged-up per­son?

• I agree that you can never be “infinitely certain” about the way the physical world is (because there’s always a very tiny possibility that things might suddenly change, or everything is just a simulation, or a dream, or […]), but you should assign probability 1 to mathematical statements for which there isn’t just evidence, but actual, solid proof.

Suppose you have the choice between the following options:

A. You get a lottery with a 1 − Epsilon chance of winning.

B. You win if 2 + 2 = 4 and 53 is a prime number and Pi is an irrational number.

Is there any Epsilon > 0 for which you would choose option A? What if something really bad happens if you lose (like all of humanity being tortured for [insert large number] years)?

I would choose option B for any Epsilon > 0, which means assigning Bayes-probability 1 to option B.
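One way to see what taking B for every Epsilon commits you to, sketched with a hypothetical credence just short of 1:

```python
# If you would take option B for *every* Epsilon > 0, your credence in B
# must exceed 1 - Epsilon for all Epsilon, which forces it to be exactly 1.
# A hypothetical credence just short of 1 eventually loses to the lottery:
def prefers_B(credence_in_B, epsilon):
    return credence_in_B > 1 - epsilon

for eps in (1e-2, 1e-6, 1e-12):
    print(eps, prefers_B(1 - 1e-9, eps))  # True, True, then False
```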

• You might want to see How to Con­vince Me That 2 + 2 = 3

Even if you be­lieve that math­e­mat­i­cal truths are nec­es­sar­ily true, you can still ask why you be­lieve that they are nec­es­sar­ily true. What caused you to be­lieve it? Likely what­ever pro­cess it is is fal­lible.

I’ll quote you what I com­mented el­se­where on this topic:

Let’s sup­pose you be­lieve that 2+2=4 fol­lows ax­io­mat­i­cally from Peano ax­ioms or some­thing. The ques­tion is what kind of ev­i­dence should con­vince you that 2+2=4 doesn’t fol­low from those ax­ioms? Ac­cord­ing the post, it’d be ex­actly the same kind of ev­i­dence that con­vinced you 2+2=4 does fol­low from the ax­ioms. Per­haps you wake up one day and find that when you sit down to ap­ply the ax­ioms, work­ing through them step by step, you get 2+2=3. And when you open up a text­book it shows the same thing, and when you ask your math pro­fes­sor friend, and when you just think about it in your head.
I suppose the point is that how you interact with mathematical proofs isn’t much different from how you interact with the rest of the world. Mathematical results follow in some logically necessary ways, but there’s a process of evidence that causes you to have contingent beliefs even about things that themselves seemingly could only be one way.
Cf. log­i­cal om­ni­science and re­lated lines of in­quiry.

I realize I haven’t engaged with your Epsilon scenario. It does seem pretty hard to imagine and assign probabilities to, but actually assigning 1 seems like a mistake.

• Assigning Bayes-probabilities < 1 to mathematical statements (that have been definitely proven) seems absurd and logically contradictory, because you need mathematics to even assign probabilities.

If you as­sign any Bayes prob­a­bil­ity to the state­ment that Bayes prob­a­bil­ities even work, you already as­sume that they do work.

And, arguably, 2 + 2 = 4 is much simpler than the concept of Bayes-probability. (To be fair, the same might not be true for my most complex statement, that Pi is irrational.)

• “But once I as­sign a prob­a­bil­ity of 1 to a propo­si­tion, I can never undo it. No mat­ter what I see or learn, I have to re­ject ev­ery­thing that dis­agrees with the ax­iom. ”

I think this is what causes the re­li­gious ar­gu­ment para­dox. On a deep down level, most of us re­al­ize this is true.

• I’m re­ally not sure what ex­actly you mean by “in­de­pen­dent state­ments” in this post.

• “But iter­ated ex­pec­ta­tions, all with the same con­di­tion­ing, is su­perflu­ous. That’s why I took care not to put any con­di­tion­ing into my ex­pec­ta­tions.”

Fair enough. My point is that the de Finetti the­o­rem pro­vides a way to think sen­si­bly about hav­ing a prob­a­bil­ity of a prob­a­bil­ity, par­tic­u­larly in a Bayesian frame­work.

Let me give a toy ex­am­ple to demon­strate why the con­cept is not su­perflu­ous, as you as­sert. Com­pare two situ­a­tions:

(a) I toss a coin that I know to be as sym­met­ri­cal in con­struc­tion as pos­si­ble.

(b) A ma­gi­cian friend of mine, who I know has ac­cess to dou­ble-headed and dou­ble-tailed coins, tosses a coin. I have no idea about the prove­nance of the coin she is us­ing.

My epistemic prob­a­bil­ity for the out­come of the toss, in both cases, is 0.5, from sym­me­try ar­gu­ments. (Not phys­i­cal sym­me­try, epistemic sym­me­try—that is, sym­me­try of the available pre-toss in­for­ma­tion to an in­ter­change of heads and tails.) My epistemic “prob­a­bil­ity of the prob­a­bil­ity” of the toss is differ­ent in the two cases. In case (a) it is nearly a delta func­tion at 0.5, the sharp­ness of dis­tri­bu­tion be­ing a func­tion of my knowl­edge of the state of the art in sym­met­ri­cal coin mint­ing. In case (b), it is a mix­ture of dis­tri­bu­tions en­cod­ing the pos­si­ble types of coins my friend might have cho­sen.

• And this could make a real differ­ence, if you are shown the product of 5 tosses (they were all heads) and then asked to bet on the fol­low­ing re­sult.
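That difference is easy to exhibit. A sketch of case (b), with an assumed uniform prior over three coin types (the 1/3 weights are illustrative, not from the comment): after five heads, the predictive probability of another head is 65/66, whereas case (a)’s near-delta prior would leave it at essentially 1/2.

```python
from fractions import Fraction

# Case (b): the magician's coin might be fair, two-headed, or two-tailed.
# The 1/3 prior weights are an assumed illustration of a distribution
# over the coin's heads-probability -- a "probability of a probability."
prior = {
    Fraction(1, 2): Fraction(1, 3),  # fair
    Fraction(1, 1): Fraction(1, 3),  # two-headed
    Fraction(0, 1): Fraction(1, 3),  # two-tailed
}

# Observing five heads multiplies each hypothesis by its likelihood p**5.
unnormalized = {p: w * p**5 for p, w in prior.items()}
z = sum(unnormalized.values())
posterior = {p: w / z for p, w in unnormalized.items()}

# Predictive probability that toss six is heads:
next_heads = sum(p * w for p, w in posterior.items())
print(next_heads)  # 65/66
```

Both cases start at an epistemic probability of 0.5 for the first toss, yet the evidence moves them very differently; that is the information the second-order distribution carries.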

• de Finetti as­sumes con­di­tion­ing. If I am tak­ing con­di­tional ex­pec­ta­tions, then iter­ated ex­pec­ta­tions (with differ­ent con­di­tion­ings) is very use­ful.

But iter­ated ex­pec­ta­tions, all with the same con­di­tion­ing, is su­perflu­ous. That’s why I took care not to put any con­di­tion­ing into my ex­pec­ta­tions.

Or we can crit­i­cize the prob­a­bil­ity-of-a-prob­a­bil­ity mus­ings an­other way as hav­ing un­defined fil­tra­tions for each of the stated prob­a­bil­ities.

• Cu­mu­lant-nim­bus,

There’s no short­age of statis­ti­ci­ans who would dis­agree with your as­ser­tion that the prob­a­bil­ity of a prob­a­bil­ity is su­perflu­ous. A good place to start is with de Finetti’s the­o­rem.

• Thank you.

I’ve ac­tu­ally used Bayesian per­spec­tives (max­i­mum en­tropy, etc) but I’ve never looked at it as a sub­jec­tive de­gree of plau­si­bil­ity. Based on the Wikipe­dia ar­ti­cle, I guess I haven’t been look­ing at it the way oth­ers have. I un­der­stand where Eli is com­ing from in ap­ply­ing In­for­ma­tion the­ory. He doesn’t have com­plete in­for­ma­tion, so he won’t say that he has prob­a­bil­ity 1. He could get an­other bit of in­for­ma­tion which changes his be­lief, but he thinks (based on prior ob­ser­va­tion) that is very low.

I guess I have a problem with him maybe overreaching. It doesn’t make sense to say that this subjective personal probability (which, by the way, he chose to calculate based on a tiny subset of the vast amounts of information he has in his mind) based on his observed evidence is somehow the absolute probability that, say, evolution is “true.”

• Q, Eliezer’s prob­a­bil­ities are Bayesian prob­a­bil­ities. (Note the “Bayesian” tag on the post.)

• It means that, given Eliezer’s knowl­edge, the prob­a­bil­ities of the nec­es­sary pre­con­di­tions for the state in ques­tion mul­ti­plied to­gether yield 0.99.

If you have a coin that you be­lieve to be fair, and you flip it, how likely do you think it is that it will land on edge?

• I’m sorry. Eliezer, can you please explain to me what you mean when you say how certain you are (probability %) that something is true? I’ve studied a lot of statistics, but I really have no idea what you mean.

If I say that this fair coin in my hand has a 50% chance of coming up heads, then that means that if I flip it a lot of times, then it’ll be heads 50% of the time. I can do that with a lot of real, measurable things.

So, what do you mean by, you are 99% cer­tain of some­thing?
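One reading mentioned upthread is calibration: of the many statements to which you assign 0.99, about 99% should turn out true. A simulated sketch, where the claims’ truth is (by construction) drawn at the assigned rate:

```python
import random

random.seed(0)

# Calibration reading of "99% certain": among 10,000 statements each
# tagged with credence 0.99, roughly 99% come out true.
assigned = [0.99] * 10_000
came_true = [random.random() < p for p in assigned]
print(sum(came_true) / len(came_true))  # close to 0.99
```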

• Poke, con­sid­er­a­tion of the pos­si­bil­ity of be­ing in the ma­trix doesn’t nec­es­sar­ily re­quire “an ex­cep­tion­ally weird sort of skep­ti­cism.” It might only re­quire an “ex­cep­tion­ally weird” form of fu­tur­ism.

• It’s nice that you’re honest and open about the fact that your position presupposes an exceptionally weird sort of skepticism (hence the need to fall back on the possibility of being in The Matrix). Since humans are finite, there’s no reason to think absolute confidence in everything isn’t attainable; just enumerate the biases. Only by positing some weird sort of subjectivism can you get the sort of infinite regress needed to discount the possibility; I can never really know because I’m trapped inside my head. Why is the uncertainty fetish so appealing that people will entertain such weird ideas to retain it?

• Silas, does the “null world” count?

• Well, the deeper is­sue is “Must we rely on the Peano ax­ioms?” I shall not get into all the Godelian is­sues that can arise, but I will note that by suit­able rein­ter­pre­ta­tions, one can in­deed pose real world cases where an “ap­par­ent two plus an­other ap­par­ent two” do not equal “ap­par­ent four,” with­out be­ing ut­terly ridicu­lous. The prob­lem is that such cases are not read­ily amenable to be­ing eas­ily put to­gether into use­ful ax­io­matic sys­tems. There may be some­thing bet­ter out there than Peano, but Peano seems to work pretty well an awful lot.

As for “what is really true?” Well . . .

• Oh, on the ra­tios of prob­a­bil­ities thing, whether we call them prob­a­bil­ities or schmob­a­bil­ities, it still seems like they can equal 1. But if we ac­cept that there are schmob­a­bil­ities that equal 1, and that we are war­ranted in giv­ing them the same level of con­fi­dence that we’d give prob­a­bil­ities of 1, isn’t that good enough?

Put a differ­ent way, P(A|A)=1 (or per­haps I should call it S(A|A)=1) is just equiv­a­lent to yet an­other one of those log­i­cal tau­tolo­gies, A-->A. Which again seems pretty hard to live with­out. (I’d like to see some­one prove NCC to me with­out bind­ing me to ac­cept NCC!)

• Well, the real rea­son why it is use­ful in ar­ith­metic to ac­cept that 2+2=4 is that this is part of a deeper re­la­tion in the ar­ith­metic field re­gard­ing re­la­tions be­tween the three ba­sic ar­ith­metic op­er­a­tions: ad­di­tion, mul­ti­pli­ca­tion, and ex­po­nen­ti­a­tion. Thus, 2 is the solu­tion to the fol­low­ing ques­tion: what is x such that x plus x equals x times x equals x to the x power? And, of course, all of these op­er­a­tions equal 4.
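The three-way coincidence is quick to verify (a trivial sketch):

```python
# Checking that x = 2 makes addition, multiplication, and
# exponentiation all agree, each yielding 4:
x = 2
print(x + x, x * x, x ** x)  # 4 4 4
```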

• “I’d listen to the ar­gu­ment solely in or­der to re­fute it.”

Paul re­futes the data! Eliezer, an idiot dis­agree­ing with you shouldn’t nec­es­sar­ily shift your be­liefs at all. By that to­ken, there’s no rea­son to shift your be­liefs if the whole world told you 2 + 2 were 3, un­less they showed some ev­i­dence. I would think it vastly more likely that the whole world was pul­ling my leg.

• Some­times I feel like re­li­gion is the whole world pul­ling my leg.

• The same holds for math­e­mat­i­cal truths. It’s ques­tion­able whether the state­ment “2 + 2 = 4” or “In Peano ar­ith­metic, SS0 + SS0 = SSSS0″ can be said to be true in any purely ab­stract sense, apart from phys­i­cal sys­tems that seem to be­have in ways similar to the Peano ax­ioms.

Why is that im­por­tant?

• If I cor­rectly re­mem­ber my Je­suit teach­ers’ ex­pla­na­tion from 40 years ago, the epis­to­molog­i­cal branch of clas­si­cal philos­o­phy deals thusly with this situ­a­tion: an “a pri­ori” as­ser­tion is one which ex­hibits the twin char­ac­ter­is­tics of uni­ver­sal­ity and ne­ces­sity. 2+2=4 would be such an as­ser­tion. Should there ever be an ex­am­ple which vi­o­lates this a pri­ori as­ser­tion, it is sim­ply held to be un­real, be­cause re­al­ity is a con­struct of con­sen­sus. Con­sen­sus dic­tates to re­al­ity but not to ex­pe­rience. So if, for ex­am­ple, you see a ghost or are ab­ducted by a UFO, you’re sim­ply out of con­tact with re­al­ity, and, as a crazy per­son, you can’t le­gi­t­i­mately challenge what the rest of us hold to be in­dis­putably true.

• Paul Gow­der said:

“We can go even stronger than math­e­mat­i­cal truths. How about the fol­low­ing state­ment?

~(P &~P)

I think it’s safe to say that if any­thing is true, that state­ment (the flip­ping law of non-con­tra­dic­tion) is true.”

Amus­ingly, this is one of the more con­tro­ver­sial tau­tolo­gies to bring up. This is be­cause con­struc­tivist math­e­mat­i­ci­ans re­ject this state­ment.

• No, they re­ject P V ~P.

They do not re­ject ~(P&~P). Only para­con­sis­tent lo­gi­ci­ans do that.

And para­con­sis­tent lo­gi­ci­ans are silly.

• The Banach Tarski Para­dox is a plau­si­ble way in which 1 = 2, and thus 3 = 2 + 2.

• Why would a rational human agent even WANT infinite certainty? It’s inherently pathological.

OCD check­ers feel a gen­eral and rel­a­tively strong need to be cer­tain about the ve­rac­ity of rec­ol­lec­tions and that they have high stan­dards for mem­ory perfor­mance. This may ex­plain ear­lier find­ings that OCD check­ers have a gen­eral ten­dency to dis­trust their epi­sodic mem­ory. A need for cer­tainty and a crit­i­cal at­ti­tude to­wards mem­ory perfor­mance may not be prob­le­matic or ab­nor­mal. It is sug­gested that clini­cal prob­lems arise when the pa­tient tries to fight mem­ory dis­trust by re­peated check­ing. The lat­ter does not re­duce dis­trust but rather in­creases dis­trust and the pa­tient may get trapped in a spiral of mu­tu­ally re­in­forc­ing check­ing be­hav­ior and mem­ory dis­trust.

• There are un­countably many pos­si­ble wor­lds. Us­ing stan­dard real-num­ber-val­ued prob­a­bil­ities, we have to as­sign prob­a­bil­ity zero to (I think) al­most all of them. In other words, for al­most all of the pos­si­ble wor­lds, the prob­a­bil­ity of the com­ple­ment of that pos­si­ble world is 1.

(Are there ways around this, per­haps us­ing non-real-val­ued prob­a­bil­ities?)
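A finite analogue of the situation, sketched numerically: with n equally likely worlds, each world gets probability 1/n and its complement 1 − 1/n, which tends toward 1 as n grows.

```python
# With n equally likely worlds, each world has probability 1/n and its
# complement 1 - 1/n. As n grows, the complement's probability
# approaches (but never reaches) 1 -- echoing the measure-zero
# situation with uncountably many worlds.
complements = [(n, 1 - 1 / n) for n in (10, 10**6, 10**12)]
for n, p in complements:
    print(n, p)
```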