# Bayesian Probability is for things that are Space-like Separated from You

First, I should explain what I mean by space-like separated from you. Imagine a world that looks like a Bayesian network, and imagine that you are a node in that Bayesian network. If there is a path from you to another node following edges in the network, I will say that node is time-like separated from you, and in your future. If there is a path from another node to you, I will say that node is time-like separated from you, and in your past. Otherwise, I will say that the node is space-like separated from you.
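This reachability definition can be written directly as graph code. The sketch below (function names and the toy network are mine, not the post's) partitions the nodes of a DAG into past, future, and space-like relative to a chosen node:

```python
from collections import deque

def classify_nodes(edges, you):
    """Partition a DAG's nodes relative to `you`: nodes reachable
    from you are time-like separated and in your future, nodes that
    reach you are time-like separated and in your past, and the
    rest are space-like separated."""
    nodes = {n for edge in edges for n in edge}
    children, parents = {}, {}
    for a, b in edges:
        children.setdefault(a, []).append(b)
        parents.setdefault(b, []).append(a)

    def reachable(start, adjacency):
        # Breadth-first search following the given edge direction.
        seen, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            for nxt in adjacency.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    future = reachable(you, children)
    past = reachable(you, parents)
    spacelike = nodes - future - past - {you}
    return past, future, spacelike
```

For edges `[("rain", "you"), ("you", "umbrella"), ("rain", "traffic")]`, "rain" is in your past, "umbrella" is in your future, and "traffic" is space-like separated from you: no directed path connects it to you in either direction, even though it shares a cause with you.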

Nodes in your past can be thought of as things that you observe. When you think about physics, it sure does seem like there are a lot of things in your past that you do not observe, but I am not thinking about physics-time, I am thinking about logical-time. If something is in your past, but has no effect on what algorithm you are running or on what observations you get, then it might as well be considered as space-like separated from you. If you compute how everything in the universe evaluates, the space-like separated things are the things that can be evaluated either before or after you, since their output does not change yours or vice-versa. If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you. (Whether or not you actually can decompose things like this is complicated, and related to whether or not you can use the tickle defense in the smoking lesion problem.)

Nodes in your future can be thought of as things that you control. These are not always things that you want to control. For example, you control the output of "You assign probability less than 1/2 to this sentence," but perhaps you wish you didn't. Again, if you partially control a fact, I want to say that (maybe) you can break that fact into multiple nodes, some of which you control, and some of which you don't.

So, you know the things in your past, so there is no need for probability there. You don't know the things in your future, or things that are space-like separated from you. (Maybe. I'm not sure that talking about knowing things you control is not just a type error.) You may have cached that you should use Bayesian probability to deal with things you are uncertain about. You may have this justified by the fact that if you don't use Bayesian probability, there is a Pareto improvement that will cause you to predict better in all worlds. The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them! Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future! Note that many things in our future (like our future observations) are also in the future of things that are space-like separated from us, so we want to use Bayes to reason about those things in order to have better beliefs about our observations.

I claim that logical inductors do not feel entirely Bayesian, and this might be why. They can't if they are able to think about sentences like "You assign probability less than 1/2 to this sentence."

• The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them!

I want to point out that this is not an esoteric abstract problem but a concrete issue that actual humans face all the time. There's a large class of propositions whose truth value is heavily affected by how much you believe (and by "believe" I mean "alieve") them—e.g. propositions about yourself like "I am confident" or even "I am attractive"—and I think the LW zeitgeist doesn't really engage with this. Your beliefs about yourself express themselves in muscle tension which has real effects on your body, and from there leak out in your body language to affect how other people treat you; you are almost always in the state Harry describes in HPMoR of having your cognition constrained by the direct effects of believing things on the world as opposed to just by the effects of actions you take on the basis of your beliefs.

There's an amusing tie-in here to one of the standard ways to break the prediction market game we used to play at CFAR workshops. At the beginning we claim "the best strategy is to always write down your true probability at any time," but the argument that's supposed to establish this has a hidden assumption that the act of doing so doesn't affect the situation the prediction market is about, and it's easy to write down prediction markets violating this assumption, e.g. "the last bet on this prediction market will be under 50%."

• I think the LW zeitgeist doesn't really engage with this.

Really? I feel quite the opposite, unless you're saying we could do still more. I think LW is actually one of the few communities that take this sort of non-dualism/naturalism in arriving at a probabilistic judgement (and all its meta levels) seriously. We've been repeatedly exposed to the fact that Newcomblike problems are everywhere for a long time now, and then relatively recently, with Simler's wonderful post on crony beliefs (and now, his even more delightful book with Hanson, of course).

ETA: I'm missing quite a few posts that were even older (Wei Dai's? Drescher's? yvain had something too IIRC), it'd be nice if someone else who does remember posted them here.

• I think your links are a good indication of the way that LW has engaged with a relatively narrow aspect of this, and in a somewhat biased manner. "Crony beliefs" is a good example—starting right from the title, it sets up a dichotomy of "merit beliefs" versus "crony beliefs", with the not-particularly-subtle connotation of "merit beliefs are this great thing that models reality and in an ideal world we'd only have merit beliefs, but in the real world, we also have to deal with the fact that it's useful to have crony beliefs for the purpose of manipulating others and securing social alliances".

Which… yes, that is one aspect of this. But the more general point of the original post is that there are a wide variety of beliefs which are underdetermined by external reality. It's not that you intentionally have fake beliefs which are out of alignment with the world; it's that some beliefs are to some extent self-fulfilling, and their truth value just is whatever you decide to believe in. If your deep-level alief is that "I am confident", then you will be confident; if your deep-level alief is that "I am unconfident", then you will be that.

Another way of putting it: what is the truth value of the belief "I will go to the beach this evening"? Well, if I go to the beach this evening, then it is true; if I don't go to the beach this evening, it's false. Its truth is determined by the actions of the agent, rather than the environment.

The predictive processing thing could be said to take this even further: it hypothesizes that all action is caused by these kinds of self-fulfilling beliefs; on some level our brain believes that we'll take an action, and then it ends up fulfilling that prediction:

About a third of Surfing Uncertainty is on the motor system, it mostly didn't seem that interesting to me, and I don't have time to do it justice here (I might make another post on one especially interesting point). But this has been kind of ignored so far. If the brain is mostly just in the business of making predictions, what exactly is the motor system doing?
Based on a bunch of really excellent experiments that I don't have time to describe here, Clark concludes: it's predicting action, which causes the action to happen.
This part is almost funny. Remember, the brain really hates prediction error and does its best to minimize it. With failed predictions about eg vision, there's not much you can do except change your models and try to predict better next time. But with predictions about proprioceptive sense data (ie your sense of where your joints are), there's an easy way to resolve prediction error: just move your joints so they match the prediction. So (and I'm asserting this, but see Chapters 4 and 5 of the book to hear the scientific case for this position) if you want to lift your arm, your brain just predicts really really strongly that your arm has been lifted, and then lets the lower levels' drive to minimize prediction error do the rest.
Under this model, the "prediction" of a movement isn't just the idle thought that a movement might occur, it's the actual motor program. This gets unpacked at all the various layers – joint sense, proprioception, the exact tension level of various muscles – and finally ends up in a particular fluid movement.

Now, I've mostly been talking about cases where the truth of a belief is determined purely by our choices. But as the OP suggests, there are often complex interplays between the agent and the environment. For instance, if you believe that "I will be admitted to Example University if I study hard enough to get in", then that belief may become self-fulfilling in that it causes you to study hard enough to get in. But at the same time, you may simply not be good enough, so the truth value of this belief is determined both by whether you believe in it, and by whether you actually are good enough.
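One way to see this interplay is with a toy model (all numbers, the 0.5 effort threshold, and the functional form are hypothetical, chosen only for illustration): belief drives effort, ability caps the outcome, and a belief counts as self-consistent when the success probability it causes equals the belief itself.

```python
def success_prob(belief, ability):
    """Toy model: a belief above 0.5 triggers serious studying, and
    success then hinges on ability; a half-hearted attempt succeeds
    at only a tenth of that rate. (All numbers are hypothetical.)"""
    studies_hard = belief >= 0.5
    return ability if studies_hard else 0.1 * ability

def self_consistent_beliefs(ability):
    """Scan a grid for beliefs that match the success probability
    they themselves cause -- the self-fulfilling fixed points."""
    grid = [i / 100 for i in range(101)]
    return [b for b in grid if abs(success_prob(b, ability) - b) < 1e-9]
```

With ability 0.9 both low and high confidence are self-consistent (fixed points at 0.09 and 0.9), while with ability 0.3 only the low fixed point survives: believing hard cannot make you good enough.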

With regard to the thing about confidence: people usually aren't just confident in general, they are confident about something in particular. I'm much more confident in my ability to write on a keyboard, than I am in my ability to do brain surgery. You could say that my confidence in my ability to do X, is the probability that I assign to doing X successfully.

And it's often important that I'm not overconfident. Yes, if I'm really confident in my ability to do something, then other people will give me more respect. But the reason why they do that, is that confidence is actually a bit of a costly signal. So far I've said that an agent's decisions determine the truth-values of many beliefs, but it's also the other way around: the agent's beliefs determine the agent's actions. If I believe myself to be really good at brain surgery even when I'm not, I may be able to talk my way into a situation where I'm allowed to do brain surgery, but the result will be a dead patient. And it's not going to take many dead patients before people realize I'm a fraud and put me in prison. But if I'm completely deluded and firmly believe myself to be a master brain surgeon, that belief will cause me to continue carrying out brain surgeries, even when it would be better from a self-interested perspective to stop doing that.

So there's a complicated thing where beliefs have several effects: they determine your predictions about the world, and they determine your future actions, and they determine the subconscious signals that you send to others. You have an interest in being overconfident for the sake of persuading others, and for the sake of getting yourself to do things, but also in being just-appropriately-confident for the sake of being able to predict the consequences of your own future actions better.

An important framing here is "your beliefs determine your actions, so how do you get the beliefs which cause the best actions". There have been some posts offering tools for belief-modification which had the goal of causing change, but this mostly hasn't been stated explicitly, and even some of the posts which have offered tools for this (e.g. Nate's "Dark Arts of Rationality") have still talked about it being a "Dark Art" thing which is kinda dirty to engage in. Which I think is dangerous, because getting an epistemically correct map is only half of what you need for success, with the "have beliefs which cause you to take the actions that you need to succeed" being the other half that's just as important to get right. (Except, as noted, they are not two independent things but intertwined with each other in complicated ways.)

• Yes, this.

There's a thing MIRI people talk about, about the distinction between "cartesian" and "naturalized" agents: a cartesian agent is something like AIXI that has a "cartesian boundary" separating itself from the environment, so it can try to have accurate beliefs about the environment, then try to take the best actions on the environment given those beliefs. But a naturalized agent, which is what we actually are and what any AI we build actually is, is part of the environment; there is no cartesian boundary. Among other things this means that the environment is too big to fully model, and it's much less clear what it even means for the agent to contemplate taking different actions. Scott Garrabrant has said that he does not understand what naturalized agency means; among other things this means we don't have a toy model that deserves to be called "naturalized AIXI."

There's a way in which I think the LW zeitgeist treats humans as cartesian agents, and I think fully internalizing that you're a naturalized agent looks very different, although my concepts and words around this are still relatively nebulous.

• I'm confused by this. Sure, your body has involuntary mechanisms that truthfully signal your beliefs to others. But the only reason these mechanisms could exist is to help your genes! Yours specifically! That means you shouldn't try to override them when your interests coincide with those of your genes. In particular, you shouldn't force yourself to believe that you're attractive. Am I missing something?

• In particular, you shouldn't force yourself to believe that you're attractive.

And I never said this.

But there's a thing that can happen when someone else gaslights you into believing that you're unattractive, which makes it true, and you might be interested in undoing that damage, for example.

• It seems pretty easy for such mechanisms to be adapted for maximizing reproduction in some ancestral environment but maladapted for maximizing your preferences in the modern environment.

I think I agree that your point is generally under-considered, especially by the sort of people who compulsively tear down Chesterton's fences.

• What rossry said, but also, why do you expect to be "winning" all arms races here? Genes in other people may have led to development of meme-hacks that you don't know are actually giving someone else an edge in a zero-sum game.

In particular, they might call you fat or stupid or incompetent and you might end up believing it.

• I'm not trying to be mean here, but this post is completely wrong at all levels. No, Bayesian probability is not just for things that are space-like. None of the theorems from which it is derived even refer to time.

So, you know the things in your past, so there is no need for probability there.

This simply is not true. There would be no need of detectives or historical researchers if it were true.

If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you.

You can say it, but it's not even approximately true. If someone flips a coin in front of me but covers it up just before it hits the table, I observe that a coin flip has occurred, but not whether it was heads or tails—and that second event is definitely within my past light-cone.

You may have cached that you should use Bayesian probability to deal with things you are uncertain about.

No, I cached nothing. I first spent a considerable amount of time understanding Cox's Theorem in detail, which derives probability theory as the uniquely determined extension of classical propositional logic to a logic that handles uncertainty. There is some controversy about some of its assumptions, so I later proved and published my own theorem that arrives at the same conclusion (and more) using purely logical assumptions/requirements, all of the form, "our extended logic should retain this existing property of classical propositional logic."

The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them!

1) It's not clear this is really true. It seems to me that any situation that is affected by an agent's beliefs can be handled within Bayesian probability theory by modeling the agent.

2) So what?

Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future!

This is a complete non sequitur. Even if I grant your premise, most things in my future are unaffected by my beliefs. The date on which the Sun will expand and engulf the Earth is in no way affected by any of my beliefs. Whether you will get lucky with that woman at the bar next Friday is in no way affected by any of my beliefs. And so on.

• I think you are correct that I cannot cleanly separate the things that are in my past that I know and the things that are in my past that I do not know. For example, if a probability is chosen uniformly at random in the unit interval, then a coin with that probability is flipped a large number of times, then I see some of the results, I do not know the true probability, but the coin flips that I see really should come after the thing that determines the probability in my Bayes' net.
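This coin example is the standard Beta-Bernoulli setup, which is easy to sketch (function name mine): a uniform prior over the unknown bias, updated on the observed flips.

```python
def posterior_mean(heads, tails):
    """Mean of the posterior over the coin's bias, starting from a
    uniform Beta(1, 1) prior: after h heads and t tails the posterior
    is Beta(1 + h, 1 + t), giving Laplace's rule of succession."""
    return (heads + 1) / (heads + tails + 2)
```

Seeing 7 heads in 10 flips moves the estimate of the hidden bias from 1/2 to 8/12 ≈ 0.667, even though the node that fixed the bias sits strictly before the observed flips in the Bayes net.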

• [META] As a general heuristic, when you encounter a post from someone otherwise reputable that seems completely nonsensical to you, it may be worth attempting to find some reframing of it that causes it to make sense—or at the very least, make more sense than before—instead of addressing your remarks to the current (nonsensical-seeming) interpretation. The probability that the writer of the post in question managed to completely lose their mind while writing said post is significantly lower than both the probability that you have misinterpreted what they are saying, and the probability that they are saying something non-obvious which requires interpretive effort to be understood. To maximize your chances of getting something useful out of the post, therefore, it is advisable to condition on the possibility that the post is not saying something trivially incorrect, and see where that leads you. This tends to be how mutual understanding is built, and is a good model for how charitable communication works. Your comment, to say the least, was neither.

• This is the first thing I've read from Scott Garrabrant, so "otherwise reputable" doesn't apply here. And I have frequently seen things written on LessWrong that display pretty significant misunderstandings of the philosophical basis of Bayesian probability, so that gives me a high prior to expect more of them.

• The "nodes in the future" part of this, is in part the point I keep trying to make with the rigging/bias and influence posts https://www.lesswrong.com/posts/b8HauRWrjBdnKEwM5/rigging-is-a-form-of-wireheading

• Of course, no actual individual or program is a pure Bayesian. Pure Bayesian updating presumes logical omniscience after all. Rather, when we talk about Bayesian reasoning we idealize individuals as abstract agents whose choices (potentially none) have a certain probabilistic effect on the world, i.e., basically we idealize the situation as a 1-person game.

You basically raise the question of what happens in Newcomb-like cases where we allow the agent's internal deliberative state to affect outcomes independent of explicit choices made. But the whole model breaks down the moment you do this. It no longer even makes sense to idealize a human as this kind of agent and ask what should be done, because the moment you bring the agent's internal deliberative state into play it no longer makes sense to idealize the situation as one in which there is a choice to be made. At that point you might as well just shrug and say 'you'll choose whatever the laws of physics says you'll choose.'

Now, one can work around this problem by instead posing a question for a different agent who might idealize a past self, e.g., if I imagine I have a free choice about which belief to commit to having in these sorts of situations, which belief/belief function should I presume?

As an aside I would argue that, while a perfectly valid mathematical calculation, there is something wrong in advocating for timeless decision theory or any other particular decision theory as the correct way to make choices in these Newcomb-type scenarios. The model of choice-making doesn't even really make sense in such situations, so any argument over which is the true/correct decision theory must ultimately be a pragmatic one (when we suggest actual people use X versus Y they do better with X), but that's never the sense of correctness that is being claimed.

• Non-central nit: "So, you know the things in your past, so there is no need for probability there." Doesn't seem true.

• I suppose you mean the fallibility of memory. I think Garrabrant meant it tautologically though (ie, as the definition of "past").

• Pretty confident they meant it that way:

I am not thinking about physics-time, I am thinking about logical-time. If something is in your past, but has no effect on what algorithm you are running or on what observations you get, then it might as well be considered as space-like separated from you. If you compute how everything in the universe evaluates, the space-like separated things are the things that can be evaluated either before or after you, since their output does not change yours or vice-versa. If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you.

• One way I might begin to write a similar concept formally would be something like:

An agent's probability on a topic is "P(V|C)", where V is some proposition and C represents all conditionals.

There are cases where one of these conditionals will include a statement such as "P(V|C) = f(n)", whereby one must condition on the output of their total estimate. If this "recursive" conditional influences P(V|C), then the probabilistic assessment is not "space-like separated."

• I generally agree with the main message, and am happy to see it be written up, but see this less as a failure of Bayes theory than a rejection of a common misuse of Bayes theory. I believe I've heard a similar argument a few times before and have found it a bit frustrating for this reason. (Of course, I could be factually wrong in my understanding.)

If one were to apply something other than a direct Bayesian update, as could make sense in a more complicated setting, they may as well do so in a process which includes other kinds of Bayesian updates. And the decision process that they use to determine the method of updating in these circumstances may well involve Bayesian updates.

• I'm not sure how to solve such an equation, though doing it for simple cases seems simple enough. I'll admit I don't understand logical induction nearly as well as I would like, and mean to do so some time.

• What makes statements you control important?

"You assign probability less than 1/2 to this sentence," but perhaps you wish you didn't.

Why would you wish to assign a different probability to this statement?

• It's a variant of the liar's paradox. If you say the statement is unlikely, you're agreeing with what it says. If you agree with it, you clearly don't think it's unlikely, so it's wrong.
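A small sketch (function names mine) makes the instability concrete: whatever probability you assign to the sentence, the truth value your assignment induces lands at least 0.5 away from it, so no assignment is calibrated.

```python
def sentence_truth(p):
    """Truth value of 'You assign probability less than 1/2 to this
    sentence,' given that p is the probability you actually assign."""
    return p < 0.5

def calibration_error(p):
    """Gap between your assignment and the truth value it induces."""
    return abs(float(sentence_truth(p)) - p)
```

Assign 0.3 and the sentence comes out true (error 0.7); assign 0.8 and it comes out false (error 0.8). The error is minimized, at exactly 0.5, by assigning 1/2, which is where a logical inductor's estimates for such sentences end up hovering.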
