If your decision process is not equivalent to one that uses the previously described procedure, there are situations where something like one of the following will happen.
I ask you if you want chocolate or vanilla ice cream and you don’t decide. It’s not just that you don’t care which one you get, or that you would prefer not to have ice cream: you don’t output anything, and you see nothing wrong with that.
You prefer chocolate to vanilla ice cream, so you would willingly pay 1c to have the vanilla ice cream that you have been promised upgraded to chocolate. You also happen to prefer strawberry to chocolate, so you are willing to pay 1c to exchange a promise of a chocolate ice cream for a promise of a strawberry ice cream. Furthermore, it turns out you prefer vanilla to strawberry, so whenever you are offered a strawberry ice cream, you gladly pay a single cent to change that to an offer of vanilla, ad infinitum.
N/A
You like chocolate ice cream more than vanilla ice cream. Nobody knows if you’ll get ice cream today, but you are asked for your choice just in case, so you pick vanilla.
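Scenario (2) above is the classic “money pump.” A minimal sketch of how the cycle drains money (the flavors and 1c cost come from the example; the function and variable names are hypothetical):

```python
# Intransitive preference cycle from scenario (2): each flavor is
# "upgraded" to the next one for 1 cent, forever.
prefers = {"vanilla": "chocolate", "chocolate": "strawberry", "strawberry": "vanilla"}

def run_money_pump(start_flavor, rounds):
    """Trade up through the preference cycle, paying 1c per swap."""
    flavor, cents_paid = start_flavor, 0
    for _ in range(rounds):
        flavor = prefers[flavor]  # each single trade looks like an improvement...
        cents_paid += 1           # ...but the cost accumulates without bound
    return flavor, cents_paid

# After three trades the agent holds vanilla again, 3 cents poorer.
flavor, paid = run_money_pump("vanilla", 3)
```

Every step is locally rational by the agent’s own preferences, yet after one full loop it is back where it started and strictly worse off.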
Let’s consider (2). Suppose someone was in the process of getting Dutch Booked like this. It would not go on ad infinitum. They would quickly learn better. Right? So even if this happened, I think it would not be a big deal.
Let’s say they did learn better. How would they do this—changing their utility function? Someone with a utility function like this really does prefer B+1c to A, C+1c to B, and A+1c to C. Even if they did change their utility function, the new one would either have a new hole or it would obey the results of the VNM-theorem.
So Bayes teaches: do not disobey the laws of logic and math.
Still wondering where the assigning probabilities to truths of theories is.
OK. So what? There’s more to life than that. That’s so terribly narrow. I mean, that part of what you’re saying is right as far as it goes, but it doesn’t go all that far. And when you start trying to apply it to harder cases—what happens? Do you have some Bayesian argument about who to vote for for president? Which convinced millions of people? Or should have convinced them, and really answers the questions much better than other arguments?
Still wondering where the assigning probabilities to truths of theories is.
Well the Dutch books make it so you have to pick some probabilities. Actually getting the right prior is an incompletely solved problem, though Solomonoff induction is most of the way there.
OK. So what? There’s more to life than that. That’s so terribly narrow. I mean, that part of what you’re saying is right as far as it goes, but it doesn’t go all that far.
Where else are you hoping to go?
And when you start trying to apply it to harder cases—what happens? Do you have some Bayesian argument about who to vote for for president? Which convinced millions of people? Or should have convinced them, and really answers the questions much better than other arguments?
In principle, yes. There’s actually a computer program called AIXItl that does it. In practice I use approximations to it. It probably could be done to a very high degree of certainty. There are a lot of issues and a lot of relevant data.
Well the Dutch books make it so you have to pick some probabilities.
Can you give an example? Use the ice cream flavors. What probabilities do you have to pick to buy ice cream without being dutch booked?
Where else are you hoping to go?
Explanatory knowledge. Understanding the world. Philosophical knowledge. Moral knowledge. Non-scientific, non-empirical knowledge. Beyond prediction and observation.
In principle, yes.
How do you know whether your approximations are OK to make or whether they ruin things? How do you work out what kinds of approximations are and aren’t safe to make?
The way I would do that is by understanding the explanation of why something is supposed to work. In that way, I can evaluate proposed changes to see whether they mess up the main point or not.
Endo, I think you are making things more confusing by combining issues of Bayesianism with issues of utility. It might help to keep them more separate or to be clear when one is talking about one, the other, or some hybrid.
I use the term Bayesianism to include utility because (a) they are connected and (b) a philosophy of probabilities as abstract mathematical constructs with no applications doesn’t seem complete; it needs an explanation of why those specific objects are studied. How do you think that any of this caused or could cause confusion?
Well, it empirically seems to be causing confusion. See curi’s remarks about the ice cream example. Also, one doesn’t need Bayesianism to include utility and that isn’t standard (although it is true that they do go very well together).
Let’s consider (2). Suppose someone was in the process of getting Dutch Booked like this. It would not go on ad infinitum. They would quickly learn better. Right? So even if this happened, I think it would not be a big deal.
So the argument is now not that suboptimal issues don’t exist but that they aren’t a big deal? Are you aware that the primary reason that this involves small amounts of ice cream is for convenience of the example? There’s no reason these couldn’t happen with far more serious issues (such as what medicine to use).
I know. I thought it was strange that you said “ad infinitum” when it would not go on forever. And that you presented this as dire but made your example non-dire.
But OK. You say we must consider probabilities, or this will happen. Well, suppose that if I do something it will happen. I could notice that, criticize it, and thus avoid it.
How can I notice? I imagine you will say that involves probabilities. But in your ice cream example I don’t see the probabilities. It’s just preferences for different ice creams, and an explanation of how you get a loop.
And what I definitely don’t see is probabilities that various theories are true (as opposed to probabilities about events which are ok).
But OK. You say we must consider probabilities, or this will happen. Well, suppose that if I do something it will happen. I could notice that, criticize it, and thus avoid it.
Yes, but the Bayesian avoids having this step. For any step you can construct a “criticism” that will duplicate what the Bayesian will do. This is connected to a number of issues, including the fact that what constitutes valid criticism in a Popperian framework is far from clear.
But in your ice cream example I don’t see the probabilities. It’s just preferences for different ice creams, and an explanation of how you get a loop.
Ice cream is an analogy, and maybe not a great one, since it is connected to preferences (which sometimes get confused with Bayesianism). It might make more sense to just go read Cox’s theorem and translate to yourself what the assumptions mean about an approach.
what constitutes valid criticism in a Popperian framework is far from clear.
Anything which is not itself criticized.
Ice cream is an analogy.
Could you pick any real world example you like, where the probabilities needed to avoid dutch book aren’t obvious, and point them out? To help concretize the idea for me.
Could you pick any real world example you like, where the probabilities needed to avoid dutch book aren’t obvious, and point them out
Well, I’m not sure, in that I’m not convinced that Dutch booking really does occur much in real life other than in the obvious contexts. But there are a lot of contexts it does occur in. For example, a fair number of complicated stock maneuvers can be thought of essentially as attempts to Dutch book other players in the stock market.
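One standard way the required probabilities show up concretely (a hedged sketch, not taken from the discussion; the function name and the example prices are made up): if an agent’s prices for $1-payout bets on an exhaustive set of mutually exclusive outcomes sum to more than 1, selling it one bet per outcome locks in a profit no matter which outcome occurs.

```python
def dutch_book_profit(prices):
    """Sell the agent a $1-payout bet on each outcome of an exhaustive,
    mutually exclusive partition, at the agent's own prices. Exactly one
    outcome occurs, so we pay out $1 total and keep the surplus."""
    return sum(prices) - 1.0

# An agent pricing a three-horse race at 0.5, 0.4, and 0.3 can be booked:
profit = dutch_book_profit([0.5, 0.4, 0.3])  # ~0.2 guaranteed, whichever horse wins
```

Coherent probabilities (summing to exactly 1 over the partition) are precisely the prices that make this guaranteed-profit construction impossible.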
Well, it empirically seems to be causing confusion. See curi’s remarks about the ice cream example. Also, one doesn’t need Bayesianism to include utility and that isn’t standard (although it is true that they do go very well together).
Yes I see what you mean.
I think it goes a bit beyond this. Utility considerations motivate the choice of definitions. I acknowledge that they are distinct things, though.
The consequences could easily be thousands of lives or more in case of sufficiently important decisions.
How can I notice? I imagine you will say that involves probabilities. But in your ice cream example I don’t see the probabilities. It’s just preferences for different ice creams, and an explanation of how you get a loop.
I didn’t say that (I’m not endoself).
OK, my bad. So many people. I lose track.
Anything which is not itself criticized.
Well, I’m not sure, in that I’m not convinced that Dutch booking really does occur much in real life other than in the obvious contexts. But there are a lot of contexts it does occur in. For example, a fair number of complicated stock maneuvers can be thought of essentially as attempts to Dutch book other players in the stock market.
Koth already had an amusing response to that.
Someone here told me it does. Maybe you can go argue with him for me ;-)
I agree.