Radical Probabilism [Transcript]

(Talk given on Sunday 21st June, over a Zoom call with 40 attendees. Abram Demski is responsible for the talk; Ben Pace is responsible for the transcription.)

Talk

Abram Demski: I want to talk about this idea that, for me, is an update from the logical induction result that came out of MIRI a while ago. I feel like it’s an update that I wish the entire LessWrong community had gotten from logical induction, but it wasn’t communicated that well, or it’s a subtle point or something.

Abram Demski: But hopefully, this talk isn’t going to require any knowledge of logical induction from you guys. I’m actually going to talk about it in terms of philosophers who had a very similar update starting around, I think, the ’80s.

Abram Demski: There’s this philosophy called ‘radical probabilism’ which is more or less the same insight that you can get from thinking about logical induction. Radical probabilism was spearheaded by this guy Richard Jeffrey, who I also like separately for the Jeffrey-Bolker axioms, which I’ve written about on LessWrong.

Abram Demski: But after the Jeffrey-Bolker axioms, he was like, well, we need to revise Bayesianism even more radically than that. Specifically, he zeroed in on the consequences of Dutch book arguments. The Dutch book arguments for the Kolmogorov axioms (or alternatively the Jeffrey-Bolker axioms) are pretty solid. However, you may not immediately realize that this does not imply that Bayes’ rule should be an update rule.

Abram Demski: You have Bayes’ rule as a fact about your static probabilities, and that’s fine. As a fact about conditional probabilities, Bayes’ rule is just as solid as all the other probability rules. But for some reason, Bayesians take it that you start with these probabilities, you make an observation, and then you now have new probabilities, and that these new probabilities should come from Bayes’ rule. The argument for that is not super solid.
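To make the distinction explicit in standard notation (the formulas are standard, not from the talk itself): the first equation relates probabilities held at a single time, while the second is a separate claim about how beliefs change across time.

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} \qquad \text{(a static fact about conditional probabilities)}$$

$$P_{\text{new}}(H) = P_{\text{old}}(H \mid E) \qquad \text{(the update rule: a further, separate assumption)}$$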

Abram Demski: There are two important flaws with the argument which I want to highlight. There is a Dutch book argument for using Bayes’ rule to update your probabilities, but it makes two critical assumptions which Jeffrey wants to relax. Assumption one is that updates are always and precisely accounted for by a proposition which you learn: everything that you learn, everything that moves your probabilities, is captured by that proposition. These propositions are usually thought of as sensory data. Jeffrey said, wait a minute, my sensory data isn’t so certain. When I see something, I don’t have perfect introspective access to even my own visual field. It’s not like we get a pixel array and know exactly how everything is. So, I want to treat the things that I’m updating on as, themselves, uncertain.

Abram Demski: Difficulty two with the Dutch book argument for Bayes’ rule as an update rule is that it assumes you already know how you would update, hypothetically, given the different propositions you might observe. Given that assumption, you can get this argument that you need to use Bayes’ rule, because I can Dutch-book you based on my knowledge of how you’re going to update. But if I don’t know how you’re updating, if your update has some random element, subjectively random, such that I can’t predict it, then we get this radical treatment of how you’re updating. We get this picture where you believe things one day and then you can just believe different things the next day. And there’s no Dutch book I can make to say you’re irrational for doing that. “I’ve thought about it more and I’ve changed my mind.”

Abram Demski: This is very important for logical uncertainty (which Jeffrey didn’t realize because he wasn’t thinking about logical uncertainty). That’s why we came up with this philosophy, thinking about logical uncertainty. But Jeffrey came up with it just by thinking about the foundations and what we can argue a rational agent must be.

Abram Demski: So, that’s the update I want to convey. I want to convey that Bayes’ rule is not the only way that a rational agent can update. You have this great freedom of how you update.

Q&A

Ben Pace: Thank you very much, Abram. You timed yourself excellently.

Ben Pace: As I understand it, you still need inexploitability in your belief updates and so on, such that people cannot reliably Dutch book you?

Abram Demski: Yeah. I say radical freedom meaning: if you have beliefs X one day and beliefs Y the next day, any pair of X and Y is justifiable, or potentially rational (as long as you don’t take something that has probability zero and then give it positive probability, or something like that).

Abram Demski: There are rationality constraints. It’s not that you can do anything at all. The most concrete example of this is that you can’t change your mind back and forth forever on any one proposition, because then I can money-pump you. If I know your beliefs are eventually going to drift up, I can buy low, wait for your beliefs to drift up, and then sell the bet back to you, because now you’re like, “That’s a bad bet,” and then I’ve made money off of you.

Abram Demski: If I can predict anything about how your beliefs are going to drift, then you’re in trouble: I can make money off of you by buying low and selling high. In particular, that means you can’t oscillate forever; you have to eventually converge. And there are lots of other implications.
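A minimal sketch of the buy-low, sell-high money pump Abram describes, assuming a bookie who can predict the direction of the agent’s belief drift (the belief schedule and prices here are illustrative):

```python
# A bet pays $1 if proposition A turns out true; an agent with
# credence p values that bet at $p.

# Illustrative assumption: the agent's credence in A predictably
# drifts upward over several days.
drifting_belief = [0.30, 0.35, 0.45, 0.60]

# Day 0: the bookie buys the bet from the agent at the low price.
buy_price = drifting_belief[0]    # $0.30

# Day 3: the credence has drifted up, so the agent now values the
# same bet higher; the bookie sells it back at the new price.
sell_price = drifting_belief[-1]  # $0.60

print(f"Bookie's profit: ${sell_price - buy_price:.2f}")
# This only works because the drift was predictable. If the agent's
# belief changes are subjectively random (unpredictable even in
# direction), no such guaranteed-profit strategy exists.
```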

Abram Demski: But the thing is, I can’t summarize this in any nice rule. There’s just a bunch of rationality constraints that come from non-Dutch-book-ability, with no nice summary of it. There’s just a bunch of constraints.

Ben Pace: I’m somewhat surprised and shocked. So, I shouldn’t be able to be exploited in any obvious way, but this doesn’t constrain me to the level of Bayes’ rule. It doesn’t constrain me to clearly knowing how my updates will be affected by future evidence.

Abram Demski: Right. If you do know your updates, then you’re constrained. Jeffrey calls that the rigidity condition. And even that doesn’t imply Bayes’ rule, because of the first problem that I mentioned. So, if you do know how you’re going to update, then you don’t want to change your conditional probabilities as a result of observing something, but you can still have these uncertain observations where you move a probability only partially. This is called a Jeffrey update.
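A minimal sketch of a Jeffrey update, assuming a toy joint distribution over a hypothesis H and an uncertain observation E (all numbers are illustrative):

```python
# Prior joint distribution over hypothesis H and evidence E.
p_joint = {("H", "E"): 0.30, ("H", "~E"): 0.10,
           ("~H", "E"): 0.20, ("~H", "~E"): 0.40}

p_E = p_joint[("H", "E")] + p_joint[("~H", "E")]        # 0.5
p_H = p_joint[("H", "E")] + p_joint[("H", "~E")]        # 0.4
p_H_given_E = p_joint[("H", "E")] / p_E                 # 0.6
p_H_given_notE = p_joint[("H", "~E")] / (1 - p_E)       # 0.2

# An uncertain observation (say, a glimpse in dim light) moves the
# credence in E to 0.8, not all the way to 1 as conditioning on E
# in a strict Bayesian update would require.
p_E_new = 0.8

# Rigidity: the conditional probabilities given E and ~E stay fixed;
# only the weights on the partition {E, ~E} move.
p_H_new = p_E_new * p_H_given_E + (1 - p_E_new) * p_H_given_notE
print(p_H_new)  # 0.52: between the prior P(H) = 0.4 and P(H|E) = 0.6
```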

Ben Pace: Phil Hazelden has a question. Phil, do you want to ask your question?

Phil Hazelden: Yeah. So, you said if you don’t know how you’d update on an observation, then you get fewer constraints on your belief update. I’m wondering: if someone else knows how you’d update on an observation but you don’t, does that, for example, give them the power to extract money from you?

Abram Demski: Yeah, so if somebody else knows how you’d update, then they can extract money from you if you’re not at least doing a Jeffrey update. In general, if a bookie knows something that you don’t, then the bookie can extract money from you by making bets. So this is not a proper Dutch book argument, because what we mean by a Dutch book argument is that a totally ignorant bookie can extract money.

Phil Hazelden: Thank you.

Ben Pace: I would have expected that if I was constrained to not be exploitable, then this would have resulted in Bayes’ rule. But you’re saying all it actually means is that there are some very basic constraints about how you shouldn’t be exploited, but otherwise you can move very freely between beliefs. You can update upwards on Monday, down on Tuesday, down again on Wednesday, up on Thursday, and then stay there, and as long as I can’t predict it in advance, you get to do whatever the hell you like with your beliefs.

Abram Demski: Yep, and that’s rational in the sense that I think rational should mean.

Ben Pace: I do sometimes use Bayes’ rule in arguments. In fact, I’ve done it not-irregularly. Do you expect that, if I fully propagate this argument, I will stop using Bayes’ rule in arguments? I feel it’s very helpful for me to be able to say: all right, I believed X on Monday and not-X on Wednesday, and let me show you the shape of the update that I made using certain probabilistic updates.

Abram Demski: Yeah, so I think that if you propagate this update, you’ll notice cases where your shift simply cannot be accounted for by Bayes’ rule. But this rigidity condition, the condition of “I already know how I would update hypothetically on various pieces of information”, the way Jeffrey talks about it (or at least the way some Jeffrey-interpreters talk about it) is like this: if you have considered ahead of time how you would update on this particular piece of information, then your update had better be either a Bayes update or at least a Jeffrey update. In the cases where you think about it, it has this narrowing effect, where you do indeed have to look more like Bayes.

Abram Demski: As an example of something that’s non-Bayesian that you might become more comfortable with if you fully propagate this: you can notice that something is amiss with your model, because the evidence is less probable than you would have expected, without having an alternative that you’re updating towards. You update your model down, but not because normalization constraints forced it down by updating something else up. “I’m less confident in this model now.” And somebody asks what Bayesian update you did, and I’m like, “No, it’s not a Bayesian update, it’s just that this model seems shakier.”
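One way to make this concrete (a sketch of the general idea under illustrative assumptions, not a construction from the talk): a model can be flagged as shaky when the realized log-loss on the data is much worse than the log-loss the model expected of itself, with no alternative hypothesis in play.

```python
import math
import random

# Model: each coin flip lands heads with probability 0.9.
p_model = 0.9

# Per-flip log-loss the model expects of itself (its entropy).
expected_loss = -(p_model * math.log(p_model)
                  + (1 - p_model) * math.log(1 - p_model))  # ~0.33

# Data actually comes up heads only 60% of the time.
random.seed(0)
flips = [random.random() < 0.6 for _ in range(1000)]
realized_loss = sum(-math.log(p_model if heads else 1 - p_model)
                    for heads in flips) / len(flips)        # ~0.98

print(f"expected log-loss {expected_loss:.2f}, "
      f"realized log-loss {realized_loss:.2f}")
# Realized loss far above expected says "something is amiss with this
# model", a downgrade in confidence that is not driven by
# renormalizing probability toward some named alternative hypothesis.
```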

Ben Pace: It’s like the thing where I have four possible hypotheses here, X, Y, Z, and “I do not have a good hypothesis here yet”. And sometimes I just move probability into “the hypothesis is not yet in my space of considerations”.

Abram Demski: But it’s like, how do you do that if “I don’t have a good hypothesis” doesn’t make any predictions?

Ben Pace: Interesting. Thanks, Abram.