# Logic as Probability

Followup To: Putting in the Numbers

Before talking about logical uncertainty, our final topic is the relationship between probabilistic logic and classical logic. A robot running on probabilistic logic stores probabilities of events, e.g. that the grass is wet outside, P(wet), and then if it collects new evidence it updates that probability to P(wet|evidence). Classical logic robots, on the other hand, deduce the truth of statements from axioms and observations. Maybe our robot starts out not being able to deduce whether the grass is wet, but then it observes that it is raining, and so it uses an axiom about rain causing wetness to deduce that “the grass is wet” is true.

Classical logic relies on complete certainty in its axioms and observations, and makes completely certain deductions. This is unrealistic when applied to rain, but we’re going to apply this to (first-order, for starters) math later, which is a better fit for classical logic.

The general pattern of the deduction “It’s raining, and when it rains the grass is wet, therefore the grass is wet” was modus ponens: if ‘U implies R’ is true, and U is true, then R must be true. There is also modus tollens: if ‘U implies R’ is true, and R is false, then U has to be false too. Third, there is the law of non-contradiction: “It’s simultaneously raining and not-raining outside” is always false.

We can imagine a robot that does classical logic as if it were writing in a notebook. Axioms are entered in the notebook at the start. Then our robot starts writing down statements that can be deduced by modus ponens or modus tollens. Eventually, the notebook is filled with statements deducible from the axioms. Modus tollens and modus ponens can be thought of as consistency conditions that apply to the contents of the notebook.
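The notebook picture is easy to make concrete. Below is a minimal sketch of such a robot (my own illustration, not code from the sequence): negations are written with a ‘~’ prefix, ‘A implies B’ is stored as a pair (A, B), and the loop applies the two consistency conditions until the notebook stops growing.

```python
def deduce(axioms, implications):
    """Forward-chain with modus ponens and modus tollens until the
    notebook stops growing, then return its contents."""
    notebook = set(axioms)
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            # modus ponens: 'A implies B' and A together give B
            if a in notebook and b not in notebook:
                notebook.add(b)
                changed = True
            # modus tollens: 'A implies B' and not-B together give not-A
            if '~' + b in notebook and '~' + a not in notebook:
                notebook.add('~' + a)
                changed = True
    return notebook

print(sorted(deduce({'raining'}, [('raining', 'wet')])))   # ['raining', 'wet']
print(sorted(deduce({'~wet'}, [('raining', 'wet')])))      # ['~raining', '~wet']
```

Nothing here is smarter than the two consistency conditions themselves; the loop just keeps applying them until the notebook is stable.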

Doing math is one important application of our classical-logic robot. The robot can read from its notebook “If variable A is a number, A=A+0” and “SS0 is a number,” and then write down “SS0=SS0+0.”

Note that this requires the robot to interpret variable A differently than symbol SS0. This is one of many upgrades we can make to the basic robot so that it can interpret math more easily. We also want to program in special responses to symbols like ‘and’, so that if A and B are in the notebook our robot will write ‘A and B’, and if ‘A and B’ is in the notebook it will add in A and B. In this light, modus ponens is just the robot having a programmed response to the ‘implies’ symbol.

Certainty about our axioms is what lets us use classical logic, but you can represent complete certainty in probabilistic logic too, by the probabilities 1 and 0. These two methods of reasoning shouldn’t contradict each other—if a classical logic robot can deduce that it’s raining out, a probabilistic logic robot with the same information should assign P(rain)=1.

If it’s raining out, then my grass is wet. In the language of probabilities, this is P(wet|rain)=1. If I look outside and see rain, P(rain)=1, and then the product rule says that P(wet and rain) = P(rain)·P(wet|rain), and that’s equal to 1, so my grass must be wet too. Hey, that’s modus ponens!

The rules of probability can also behave like modus tollens (if P(B)=0, and P(B|A)=1, P(A)=0) and the law of the excluded middle (P(A|not-A)=0). Thus, when we’re completely certain, probabilistic logic and classical logic give the same answers.
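As a quick numerical sanity check (a sketch of my own, not part of the sequence), here are the classical rules falling out of the product rule when the relevant probabilities are 1 and 0:

```python
def conjunction(p_a, p_b_given_a):
    """Product rule: P(A and B) = P(A) * P(B|A)."""
    return p_a * p_b_given_a

# Modus ponens: with P(rain)=1 and P(wet|rain)=1,
# P(wet) >= P(wet and rain) = P(rain) * P(wet|rain) = 1, so P(wet)=1.
assert conjunction(1.0, 1.0) == 1.0

# Modus tollens: with P(B|A)=1, P(A and B) = P(A), and P(A and B) <= P(B).
# So if P(B)=0, then P(A) <= 0, forcing P(A)=0.
p_a = 0.0  # the only value consistent with P(B)=0
assert conjunction(p_a, 1.0) <= 0.0

# Non-contradiction: P(A and not-A) = P(A) * P(not-A|A) = P(A) * 0 = 0,
# whatever value P(A) takes.
for p in (0.0, 0.3, 1.0):
    assert conjunction(p, 0.0) == 0.0
```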

There’s a very short way to prove this, which is that one of Cox’s desiderata for how probabilities must behave was “when you’re completely certain, your plausibilities should satisfy the rules of classical logic.”

In Foundations of Probability, I alluded to the idea that we should be able to apply probabilities to math. Dutch book arguments work because our robot must act as if it had probabilities in order to avoid losing money. Savage’s theorem applies because the results of our robot’s actions might depend on mathematical results. Cox’s theorem applies because beliefs about math behave like other beliefs.

This is completely correct. Math follows the rules of probability, and thus can be described with probabilities, because classical logic is the same as probabilistic logic when you’re certain.

We can even use this correspondence to figure out what numbers the probabilities take on:

1 for every statement that follows from the axioms, 0 for their negations.

This raises an issue: what about betting on the last digit of the 3^^^3’th prime? We dragged probability into this mess because it was supposed to help our robot stop trying to prove the answer and just bet as if P(last digit is 1)=1/4. But it turns out that there is one true probability distribution over mathematical statements, given the axioms. The right distribution is obtained by straightforward application of the product rule—never mind that it takes 4^^^3 steps—and if you deviate from the right distribution that means you violate the product rule at some point.

This is why logical uncertainty is different. Even though our robot doesn’t have enough resources to find the right answer, using logical uncertainty violates Savage’s theorem and Cox’s theorem. If we want our robot to act as if it has some “logical probability,” it’s going to need a stranger sort of foundation.

Part of the sequence Logical Uncertainty

Previous post: Putting in the Numbers

Next post: Approaching Logical Uncertainty

• This raises an issue: what about betting on the last digit of the 3^^^3’th prime?

This is a go-to example here of using subjective probability under resource constraints. There are plenty of more familiar examples, such as having to answer a multiple-choice question on a test in 30 seconds, and having to estimate the probabilities of each answer in order to pick the likeliest. Everyone has done it, and used basically the same tools as for the so-called “true” probabilities.

• the law of the excluded middle (P(A|not-A)=0)

Isn’t this the LNC?

• (I had to look it up: Law of Non-Contradiction.)

• Yeah, this looks more like the Law of Non-Contradiction than the Law of Excluded Middle to me (which makes Manfred’s jokey response seem doubly foolish).

• No, of course it’s not the London Necropolis Company.

• it turns out that there is one true probability distribution over mathematical statements, given the axioms

I guess you meant to say “mathematical statements without quantifiers”?

• If you want quantifiers, you can just program your robot to respond to the symbol “for all” so that when it sees “for all x, x=y” it writes all the implications in the notebook, and when x=y for all x, it writes “for all x, x=y”. This is an infinite amount of writing to do, but there was always an infinite amount of writing to do—the robot is infinitely fast, and anyway is just a metaphor for the rules of our language.

• Sorry, I should’ve said “statements that are provable or disprovable from the axioms”; mentioning quantifiers was kinda irrelevant. Are you saying that your robot will eventually write out truth values for statements that are independent of the axioms as well? (Like the continuum hypothesis in ZFC.)

• So if you give your robot the axioms of ZFC, it will eventually tell you if the continuum hypothesis is true or false?

• Are you assuming that x can only range over the natural numbers? If x can range over reals or sets, or some arbitrary kind of objects described by the axioms, then it’s harder to describe what the robot should do. The first problem is that an individual x can have no finite description. The second, more serious problem is that translating statements with quantifiers into statements of infinite length would require the robot to use some “true” model of the axioms, but often there are infinitely many models by Löwenheim-Skolem and no obvious way of picking out a true one.

Also, my original comment was slightly misleading—the “one true distribution” would in fact cover many statements with quantifiers, and miss many statements without quantifiers. The correct distinction is between statements that are provable or disprovable from the axioms, and statements that are independent of the axioms. If the axioms are talking about natural numbers, then all statements without quantifiers should be covered by the “one true distribution”, but in general that doesn’t have to be true.

• Well, it’s certainly a good point that there are lots of mathematical issues I’m ignoring. But for the topics in this sequence, I am interested not in those issues themselves, but in how they are different between classical logic and probabilistic logic.

This isn’t trivial, since statements that are classically undetermined by the axioms can still have arbitrary probabilities (Hm, should that be its own post, do you think? I’ll have to mention it in passing when discussing the correspondence between inconsistency and limited information). But in this post, the question is whether there is no difference for statements that are provable or disprovable from the axioms. I’m claiming there’s no difference. Do you think that’s right?

• Yeah, I agree with the point that classical logic would instantly settle all digits of pi, so it can’t be the basis of a theory that would let us bet on digits of pi. But that’s probably not the only reason why we want a theory of logical uncertainty. The value of a digit of pi is always provable (because it’s a quantifier-free statement), but our math intuition also allows us to bet on things like Con(PA), which is independent, or P!=NP, for which we don’t know if it’s independent. You may or may not want a theory of logical uncertainty that can cover all three cases uniformly.

• But it turns out that there is one true probability distribution over mathematical statements, given the axioms. The right distribution is obtained by straightforward application of the product rule—never mind that it takes 4^^^3 steps—and if you deviate from the right distribution that means you violate the product rule at some point.

This does not seem right to me. I feel like you are sneakily trying to condition all of the robot’s probabilities on mathematical proofs that it does not have a priori. E.g. consider A, A->B, therefore B. To learn that P(A->B)=1, the robot has to do a big calculation to obtain the proof. After this, it can conclude that P(B|A,A->B)=1. But before it has the proof, it should still have some P(B|A)!=1.

Sure, it seems tempting to call the probabilities you would have after obtaining all the proofs of everything the “true” probabilities, but to me it doesn’t actually seem different from the claim that “after I roll my dice an infinity of times, I will know the ‘true’ probability of rolling a 1”. I should still have some beliefs about a one being rolled before I have observed vast numbers of rolls.

In other words I suggest that proof of mathematical relationships should be treated exactly the same as any other data/evidence.

edit: in fact surely one has to consider this so that the robot can incorporate the cost of computing the proof into its loss function, in order to decide if it should bother doing it or not. Knowing the answer for certain may still not be worth the time it takes (not to mention that even after computing the proof the robot may still not have total confidence in it; if it is a really long proof, the probability that cosmic rays have caused lots of bit-flips to mess up the logic may become significant). If the robot knows it cannot ever get the answer with sufficient confidence within the given time constraints, it must choose an action which accounts for this. And the logic it uses should be just the same as how it knows when to stop rolling dice.

edit2: I realised I was a little sloppy above; let me make it clearer here:

The robot knows P(B|A,A->B)=1 a priori. But it does not know “A->B” is true a priori. It therefore calculates

P(B|A) = P(B|A,A->B) P(A->B|A) + P(B|A,not A->B) P(not A->B|A) = P(A->B|A)

After it obtains proof that “A->B”, call this p, we have P(A->B|A,p) = 1, so

P(B|A,p) = P(B|A,A->B,p) P(A->B|A,p) + P(B|A,not A->B,p) P(not A->B|A,p)

collapses to

P(B|A,p) = P(B|A,A->B,p) = P(B|A,A->B) = 1

But I don’t think it is reasonable to skip straight to this final statement, unless the cost of obtaining p is negligible.

edit3: If this somehow violates Savage’s or Cox’s theorems I’d like to know why :).
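The marginalization above can be checked numerically. The sketch below is my own construction (assuming a uniform prior over the four (A, B) truth assignments, and reading “A->B” as the material conditional, so its truth in each world is fixed by A and B); it reproduces both stages: P(B|A)=0.5 before the proof, and 1 after conditioning on it.

```python
from itertools import product

# Possible worlds: the four truth assignments to (A, B), uniform prior.
worlds = {(a, b): 0.25 for a, b in product([True, False], repeat=2)}

def prob(event, given=lambda w: True):
    """P(event | given), computed by summing world weights."""
    num = sum(p for w, p in worlds.items() if event(w) and given(w))
    den = sum(p for w, p in worlds.items() if given(w))
    return num / den

A = lambda w: w[0]
B = lambda w: w[1]
implies = lambda w: (not w[0]) or w[1]   # material conditional A->B

# Before the proof: P(B|A) collapses to P(A->B|A), as in the algebra above.
assert prob(B, given=A) == prob(implies, given=A) == 0.5

# After the proof p establishes A->B, condition on it as well:
proved = lambda w: A(w) and implies(w)
assert prob(B, given=proved) == 1.0
```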

• If this somehow violates Savage or Cox’s theorems I’d like to know why

Well, Cox’s theorem has as a requirement that when your axioms are completely certain, you assign probability 1 to all classical consequences of those axioms. Assigning probability 0.5 to any of those consequences thus violates Cox’s theorem. But this is kind of unsatisfying, so: where do we violate the product rule?

Suppose our robot knows that P(wet outside | raining) = 1. And it observes that it’s raining, so P(rain)=1. But it’s having trouble figuring out whether it’s wet outside within its time limit, so it just gives up and says P(wet outside)=0.5. Has it violated the product rule? Yes. P(wet outside) >= P(wet outside and raining) = P(wet outside | rain) * P(rain) = 1.

If we accept that the axioms have probability 1, we can deduce the consequences with certainty using the product rule. If at any point we stop deducing the consequences with certainty, this means we have stopped using the product rule.

• Hmm this does not feel the same as what I am suggesting.

Let me map my scenario onto yours:

A = “raining”

B = “wet outside”

A->B = “It will be wet outside if it is raining”

The robot does not know P(“wet outside” | “raining”) = 1. It only knows P(“wet outside” | “raining”, “raining->wet outside”) = 1. It observes that it is raining, so we’ll condition everything on “raining”, taking it as true.

We need some priors. Let P(“wet outside”) = 0.5. We also need a prior for “raining->wet outside”; let that be 0.5 as well. From this it follows that

P(“wet outside” | “raining”) = P(“wet outside” | “raining”, “raining->wet outside”) P(“raining->wet outside”|”raining”) + P(“wet outside” | “raining”, not “raining->wet outside”) P(not “raining->wet outside”|”raining”) = P(“raining->wet outside”|”raining”) = P(“raining->wet outside”) = 0.5

according to our priors [the first and second equalities are the same as in my first post; the third equality follows since whether or not it is “raining” is not relevant for figuring out if “raining->wet outside”].

So the product rule is not violated.

P(“wet outside”) >= P(“wet outside” and “raining”) = P(“wet outside” | “raining”) P(“raining”) = 0.5

where the inequality is actually an equality because our prior was P(“wet outside”) = 0.5. Once the proof p that “raining->wet outside” is obtained, we can update this to

P(“wet outside” | p) >= P(“wet outside” and “raining” | p) = P(“wet outside” | “raining”, p) P(“raining” | p) = 1

But there is still no product rule violation, because

P(“wet outside” | p) = P(“wet outside” | “raining”, p) P(“raining” | p) + P(“wet outside” | not “raining”, p) P(not “raining” | p) = P(“wet outside” | “raining”, p) P(“raining” | p) = 1.

In a nutshell: you need three pieces of information to apply this classical chain of reasoning: A, B, and A->B. All three of these propositions should have priors. Then everything seems fine to me. It seems to me you are neglecting the proposition “A->B”, or rather assuming its truth value to be known, when we are explicitly saying that the robot does not know this.

edit: I just realised that I was lucky for my first inequality to work out; I assumed I was free to choose any prior for P(“wet outside”), but it turns out I am not. My priors for “raining” and “raining->wet outside” determine the corresponding prior for “wet outside”, in order to be compatible with the product rule. I just happened to choose the correct one by accident.

• It seems to me you are neglecting the proposition “A->B”

Do you know what truth tables are? The statement “A->B” can be represented on a truth table. A and B can be possible. Not-A and B can be possible. Not-A and not-B can be possible. But A and not-B is impossible.

A->B and the four statements about the truth table are interchangeable, even though when I talk about the truth table, I never need to use the “->” symbol. They contain the same content because A->B says that A and not-B is impossible, and saying that A and not-B is impossible says that A->B. For example, “it raining but not being wet outside is impossible.”

In the language of probability, saying that P(B|A)=1 means that A and not-B is impossible, while leaving the other possibilities able to vary freely. The product rule says P(A and not-B) = P(A) * P(not-B | A). What’s P(not-B | A) if P(B | A)=1? It’s zero, because it’s the negation of our assumption.

Writing out things in classical logic doesn’t just mean putting P() around the same symbols. It means making things behave the same way.
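The truth-table claim is mechanical enough to check by brute force; here is a tiny enumeration (added as an illustration, not from the original comment) confirming that “A->B” and “A and not-B is impossible” agree in all four cases:

```python
from itertools import product

# Check that 'A->B' and 'A and not-B is impossible' say the same
# thing on every row of the truth table.
for a, b in product([True, False], repeat=2):
    material_conditional = (not a) or b       # classical reading of A->B
    excludes_a_and_not_b = not (a and not b)  # 'A and not-B' ruled out
    assert material_conditional == excludes_a_and_not_b
```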

• ‘They contain the same content because A->B says that A and not-B is impossible, and saying that A and not-B is impossible says that A->B. For example, “it raining but not being wet outside is impossible.”’

If you’re talking about standard propositional logic here, without bringing in probabilistic stuff, then this is just wrong or at best very misleadingly put. All ‘A->B’ says is that it is not the case that A and not-B—nothing modal.

• Ok sure, so you can go through my reasoning leaving out the implication symbol, but retaining the dependence on the proof “p”, and it all works out the same. The point is only that the robot doesn’t know that A->B, therefore it doesn’t set P(B|A)=1 either.

You had “Suppose our robot knows that P(wet outside | raining) = 1. And it observes that it’s raining, so P(rain)=1. But it’s having trouble figuring out whether it’s wet outside within its time limit, so it just gives up and says P(wet outside)=0.5. Has it violated the product rule? Yes. P(wet outside) >= P(wet outside and raining) = P(wet outside | rain) * P(rain) = 1.”

But you say it is doing P(wet outside)=0.5 as an approximation. This isn’t true though, because it knows that it is raining, so it is setting P(wet outside|rain) = 0.5, which was the crux of my calculation anyway. Therefore when it calculates P(wet outside and raining) = P(wet outside | rain) * P(rain) it gets the answer 0.5, not 1, so it is still being consistent.

• I’m just going to give up and hope you figure it out on your own.

• You haven’t been very specific about what you think I’m doing incorrectly, so it is kind of hard to figure out what you are objecting to. I corrected your example to what I think it should be so that it satisfies the product rule; where’s the problem? How do you propose that the robot can possibly set P(“wet outside”|”rain”)=1 when it can’t do the calculation?

• In your example, it can’t. Because the axioms you picked do not determine the answer. Because you are incorrectly translating classical logic into probabilistic logic. And then, as one would expect, your translation of classical logic doesn’t reproduce classical logic.

• It was your example, not mine. But you made the contradictory postulate that P(“wet outside”|”rain”)=1 follows from the robot’s prior knowledge and the probability axioms, and simultaneously that the robot was unable to compute this. To correct this I alter the robot’s probabilities such that P(“wet outside”|”rain”)=0.5 until such time as it has obtained a proof that “rain” correlates 100% with “wet outside”. Of course the axioms don’t determine this; it is part of the robot’s prior, which is not determined by any axioms.

You haven’t convinced nor shown me that this violates Cox’s theorem. I admit I have not tried to follow the proof of this theorem myself, but my understanding was that the requirement you speak of is that the probabilistic logic reproduces classical logic in the limit of certainty. Here, the robot is not in the limit of certainty because it cannot compute the required proof. So we should not expect to get the classical logic until updating on the proof and achieving said certainty.

• It was your example, not mine.

No, you butchered it into a different example. Introduced the Lewis Carroll paradox, even.

You haven’t convinced nor shown me that this violates Cox’s theorem.

He showed you. You weren’t paying attention.

Here, the robot is not in the limit of certainty because it cannot compute the required proof.

It can compute the proof. The laws of inference are axioms; P(A|B) is necessarily known a priori.

such that P(“wet outside”|”rain”)=0.5 until such time as it has obtained a proof that “rain” correlates 100% with “wet outside”.

There is no such time. Either it’s true initially, or it will never be established with certainty. If it’s true initially, that’s because it is an axiom. Which was the whole point.

• The laws of inference are axioms; P(A|B) is necessarily known a priori.

It does not follow that because someone knows some statements they also know the logical consequences of those statements.

• When the someone is an idealized system of logic, it does. And we’re discussing an idealized system of logic here. So it does.

• No we aren’t, we’re discussing a robot with finite resources. I obviously agree that an omnipotent god of logic can skip these problems.

• The limitations imposed by bounded resources are the next entry in the sequence. For this, we’re still discussing the unbounded case.

• Very well, then I will wait for the next entry. But I thought the fact that we were explicitly discussing things the robot could not compute made it clear that resources were limited. There is clearly no such thing as logical uncertainty to the magic logic god of the idealised case.

• Liked this post. One suggestion to improve readability would be for the first mention of a concept in this post (e.g. Savage’s theorem) to hyperlink to the previous post that described it, or to a wiki article with details.

• Thanks! I’ll do that now.

• Nevermind. Would be nice if you could actually delete comments in the first minute after posting them.