Radical Probabilism [Transcript]

(Talk given on Sunday 21st June, over a Zoom call with 40 attendees. Abram Demski is responsible for the talk, Ben Pace is responsible for the transcription.)


Abram Demski: I want to talk about this idea that, for me, is an update from the logical induction result that came out of MIRI a while ago. I feel like it’s an update that I wish the entire LessWrong community had gotten from logical induction, but it wasn’t communicated that well, or it’s a subtle point or something.

Abram Demski: But hopefully, this talk isn’t going to require any knowledge of logical induction from you guys. I’m actually going to talk about it in terms of philosophers who had a very similar update starting around, I think, the ’80s.

Abram Demski: There’s this philosophy called ‘radical probabilism’ which is more or less the same insight that you can get from thinking about logical induction. Radical probabilism is spearheaded by this guy Richard Jeffrey, who I also like separately for the Jeffrey-Bolker axioms, which I’ve written about on LessWrong.

Abram Demski: But, after the Jeffrey-Bolker axioms, he was like, well, we need to revise Bayesianism even more radically than that. Specifically he zeroed in on the consequences of Dutch book arguments. The Dutch book arguments for the Kolmogorov axioms, or alternatively the Jeffrey-Bolker axioms, are pretty solid. However, you may not immediately realize that this does not imply that Bayes’ rule should be an update rule.

Abram Demski: You have Bayes’ rule as a fact about your static probabilities, and that’s fine. As a fact about conditional probabilities, Bayes’ rule is just as solid as all the other probability rules. But for some reason, Bayesians take it that you start with these probabilities, you make an observation, and then you have these new probabilities, and that the new probabilities should come from the old ones by Bayes’ rule. The argument for that is not super solid.
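[The distinction can be made concrete in a few lines of Python. This is a sketch of our own, not anything from the talk: the first function is just the probability calculus, which is uncontroversial; treating it as a rule for revising tomorrow’s beliefs is the further, separate step being questioned.]

```python
# Bayes' rule as a *static* fact: within one fixed probability
# distribution, P(H|E) = P(E|H) P(H) / P(E) holds by definition.
def conditional(p_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) inside a single, fixed distribution."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Bayes' rule as an *update rule* is a separate, diachronic claim:
# after observing E, set tomorrow's P(H) to today's P(H|E).
def bayes_update(p_h, p_e_given_h, p_e_given_not_h):
    return conditional(p_h, p_e_given_h, p_e_given_not_h)

# With prior 0.5 and likelihoods 0.9 vs 0.3, P(H|E) works out to 0.75.
```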

Abram Demski: There are two important flaws with the argument which I want to highlight. There is a Dutch book argument for using Bayes’ rule to update your probabilities, but it makes two critical assumptions which Jeffrey wants to relax. Assumption one is that updates are always and precisely accounted for by propositions which you learn: everything that you learn, everything that moves your probabilities, is captured in such a proposition. These are usually thought of as sensory data. Jeffrey said, wait a minute, my sensory data isn’t so certain. When I see something, I don’t have perfect introspective access to even just my visual field. It’s not like we get a pixel array and know exactly how everything is. So, I want to treat the things that I’m updating on as, themselves, uncertain.

Abram Demski: Difficulty two with the Dutch book argument for Bayes’ rule as an update rule is that it assumes you already know how you would update, hypothetically, given different propositions you might observe. Given that assumption, you can get this argument that you need to use Bayes’ rule, because I can Dutch-book you based on my knowledge of how you’re going to update. But if I don’t know how you’re updating, if your update has some random element, subjectively random, if I can’t predict it, then we get this radical treatment of how you’re updating. We get this picture where you believe things one day and then you can just believe different things the next day. And there’s no Dutch book I can make to say you’re irrational for doing that. “I’ve thought about it more and I’ve changed my mind.”

Abram Demski: This is very important for logical uncertainty (which Jeffrey didn’t realize, because he wasn’t thinking about logical uncertainty). That’s why we came up with this philosophy, thinking about logical uncertainty. But Jeffrey came up with it just by thinking about the foundations and what we can argue a rational agent must be.

Abram Demski: So, that’s the update I want to convey. I want to convey that Bayes’ rule is not the only way that a rational agent can update. You have this great freedom of how you update.


Ben Pace: Thank you very much, Abram. You timed yourself excellently.

Ben Pace: As I understand it, you need to have inexploitability in your belief updates and so on, such that people cannot reliably Dutch book you?

Abram Demski: Yeah. When I say radical freedom, I mean: if you have beliefs X one day and beliefs Y the next day, any pair of X and Y is justifiable, or potentially rational (as long as you don’t take something that has probability zero and now give it positive probability, or something like that).

Abram Demski: There are rationality constraints. It’s not that you can do anything at all. The most concrete example of this is that you can’t change your mind back and forth forever on any one proposition, because then I can money-pump you. If I know your beliefs are eventually going to drift up, I can buy low, wait for the drift, and then sell the bet back to you, because now you’re like, “That’s a bad bet,” and then I’ve made money off of you.

Abram Demski: If I can predict anything about how your beliefs are going to drift, then you’re in trouble. I can make money off of you by buying low and selling high. In particular that means you can’t oscillate forever; you have to eventually converge. And there’s lots of other implications.
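[The buy-low, sell-high exploit can be sketched in a few lines of Python. This is our own illustration with made-up numbers: if a bookie can predict that your probability for some proposition will drift upward, the round trip is risk-free profit for them.]

```python
# Hypothetical sketch: a bookie who can predict that your probability
# for proposition X will drift from 0.3 today to 0.6 tomorrow.
def bookie_profit(p_today, p_tomorrow, stake=1.0):
    """Buy a $1-if-X ticket at today's price and sell it back at
    tomorrow's price. The profit doesn't depend on whether X turns
    out true, so predictable drift makes you exploitable."""
    buy_price = p_today * stake
    sell_price = p_tomorrow * stake
    return sell_price - buy_price

# bookie_profit(0.3, 0.6) is about 0.30 per ticket, risk-free.
```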

Abram Demski: But the thing is, I can’t summarize this in any nice rule. There’s just a bunch of rationality constraints that come from non-Dutch-book-ability. There’s no nice summary of it, just a bunch of constraints.

Ben Pace: I’m somewhat surprised and shocked. So, I shouldn’t be able to be exploited in any obvious way, but this doesn’t constrain me to the level of Bayes’ rule. It doesn’t constrain me to clearly knowing how my updates will be affected by future evidence.

Abram Demski: Right. If you do know your updates, then you’re constrained. Jeffrey calls that the rigidity condition. And even that doesn’t imply Bayes’ rule, because of the first problem that I mentioned. So, if you do know how you’re going to update, then you don’t want to change your conditional probabilities as a result of observing something, but you can still have these uncertain observations where you move a probability but only partially. And this is called a Jeffrey update.
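[A Jeffrey update can be written down in one line. In this sketch (our own, not from the talk), evidence moves the probability of E to some new value q without making E certain, while the conditional probabilities given E and not-E stay fixed (the rigidity condition); ordinary Bayesian conditioning is the special case q = 1.]

```python
# Jeffrey update: the observation shifts P(E) to a new value q
# without making E certain. Rigidity: P(A|E) and P(A|not-E) are
# unchanged by the experience.
def jeffrey_update(p_a_given_e, p_a_given_not_e, q):
    """New P(A) after the probability of E moves to q."""
    return p_a_given_e * q + p_a_given_not_e * (1 - q)

# q = 1 recovers the ordinary Bayesian update on E:
#   jeffrey_update(0.8, 0.2, 1.0) -> 0.8
# A glimpse in dim light might only push P(E) to 0.7:
#   jeffrey_update(0.8, 0.2, 0.7) -> 0.62
```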

Ben Pace: Phil Hazelden has a question. Phil, do you want to ask your question?

Phil Hazelden: Yeah. So, you said if you don’t know how you’d update on an observation, then you get fewer constraints on your belief update. I’m wondering: if someone else knows how you’d update on an observation but you don’t, does that, for example, give them the power to extract money from you?

Abram Demski: Yeah, so if somebody else knows, then they can extract money if you’re not at least doing a Jeffrey update. In general, if a bookie knows something that you don’t, then the bookie can extract money from you by making bets. So this is not a proper Dutch book argument, because what we mean by a Dutch book argument is that a totally ignorant bookie can extract money.

Phil Hazelden: Thank you.

Ben Pace: I would have expected that if I was constrained to not be exploitable, then this would have resulted in Bayes’ rule, but you’re saying all it actually means is that there are some very basic constraints on how you shouldn’t be exploited, but otherwise you can move very freely between beliefs. You can update upwards on Monday, down on Tuesday, down again on Wednesday, up on Thursday, and then stay there, and as long as I can’t predict it in advance, you get to do whatever the hell you like with your beliefs.

Abram Demski: Yep, and that’s rational in the sense that I think rational should mean.

Ben Pace: I do sometimes use Bayes’ rule in arguments. In fact, I’ve done it not-irregularly. Do you expect that, if I fully propagate this argument, I will stop using Bayes’ rule in arguments? I feel it’s very helpful for me to be able to say: all right, I believed X on Monday and not-X on Wednesday, and let me show you the shape of the update I made using certain probabilistic updates.

Abram Demski: Yeah, so I think that if you propagate this update you’ll notice cases where your shift simply cannot be accounted for by Bayes’ rule. But there’s this rigidity condition, the condition of “I already know how I would update hypothetically on various pieces of information”. The way Jeffrey talks about it (or at least the way some Jeffrey-interpreters talk about it), it’s like: if you have considered ahead of time how you would update on this particular piece of information, then your update had better be either a Bayes update or at least a Jeffrey update. In the cases where you have thought about it, there’s this narrowing effect where you do indeed have to look more like Bayes.

Abram Demski: As an example of something non-Bayesian that you might become more comfortable with if you fully propagate this: you can notice that something is amiss with your model, because the evidence is less probable than you would have expected, without having an alternative that you’re updating towards. You update the model down, but not for the usual Bayesian reason that normalization forced something else up. “I’m less confident in this model now.” And somebody asks what Bayesian update you did, and I’m like, “No, it’s not a Bayesian update; it’s just that this model seems shakier.”
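[One way to picture this kind of non-Bayesian shift, as a toy illustration of our own rather than anything from the talk: track how surprising the data is under the model, and lose confidence when the data is far more surprising than the model itself would typically produce, with no alternative hypothesis in hand.]

```python
import math

# Toy model-criticism sketch: measure how surprising a data stream is
# under a model, in nats. If the observed surprise is far above what
# the model would typically produce, the model looks shaky -- even
# with no concrete alternative to shift probability toward.
def surprise(model_probs, observations):
    """Total negative log-likelihood of the observations."""
    return -sum(math.log(model_probs[o]) for o in observations)

model = {"heads": 0.1, "tails": 0.9}     # model: heads are rare
data = ["heads"] * 5                     # but we see 5 heads in a row
observed = surprise(model, data)         # about 11.5 nats
typical = 5 * 0.325                      # model's own entropy rate
# observed >> typical, so confidence in the model drops, without
# any named alternative being updated up.
```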

Ben Pace: It’s like the thing where I have four possible hypotheses here: X, Y, Z, and “I do not have a good hypothesis here yet”. And sometimes I just move probability into “the hypothesis is not yet in my space of considerations”.

Abram Demski: But it’s like, how do you do that if “I don’t have a good hypothesis” doesn’t make any predictions?

Ben Pace: Interesting. Thanks, Abram.