Let’s say they did learn better. How would they do this—changing their utility function? Someone with a utility function like this really does prefer B+1c to A, C+1c to B, and A+1c to C. Even if they did change their utility function, the new one would either have a new hole or it would obey the results of the VNM-theorem.
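The money pump implicit in that cyclic preference can be sketched in a few lines. This is an illustrative sketch only: the item names are placeholders, and it assumes the agent will pay one cent each time to trade up its preference cycle, which is the standard money-pump reading.

```python
# Money-pump sketch: an agent with cyclic preferences over A, B, C
# can be charged a cent per trade and returned to its starting item,
# losing money on every full cycle. Names and amounts are illustrative.

# prefers[x] is the item the agent strictly prefers to x,
# strongly enough to pay one cent to swap.
prefers = {"A": "B", "B": "C", "C": "A"}

def pump(start_item, rounds):
    item, cents_lost = start_item, 0
    for _ in range(rounds):
        item = prefers[item]   # agent accepts the offered swap...
        cents_lost += 1        # ...and pays one cent for it
    return item, cents_lost

item, lost = pump("A", 3)
# After one full cycle the agent holds A again but is 3 cents poorer.
```

A VNM-coherent (acyclic) preference ordering has no such cycle, so no sequence of voluntary trades returns the agent to its starting holdings strictly poorer.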
So Bayes teaches: do not disobey the laws of logic and math.
Still wondering where the assigning probabilities to truths of theories is.
OK. So what? There’s more to life than that. That’s so terribly narrow. I mean, that part of what you’re saying is right as far as it goes, but it doesn’t go all that far. And when you start trying to apply it to harder cases—what happens? Do you have some Bayesian argument about who to vote for for president? Which convinced millions of people? Or should have convinced them, and really answers the questions much better than other arguments?
> Still wondering where the assigning probabilities to truths of theories is.

Well the Dutch books make it so you have to pick some probabilities. The problem of actually getting the right prior isn’t fully solved, though Solomonoff induction is most of the way there.
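The Dutch-book argument can be made concrete with a short sketch. This is illustrative, not from the discussion itself: the event and the prices are made up, and it shows only the simplest case, where an agent's prices for a bet on E and a bet on not-E sum to less than 1.

```python
# Dutch-book sketch: if an agent prices bets on an event E and on not-E
# so the prices don't sum to 1, a bookie can lock in a sure profit.
# Prices below are illustrative, not from the discussion.

def sure_profit(price_e, price_not_e, stake=1.0):
    """Bookie buys both bets; each pays `stake` if its event occurs.
    Exactly one of E, not-E occurs, so the payout is always `stake`;
    the total cost is (price_e + price_not_e) * stake.
    (If prices summed to MORE than 1, the bookie would sell both
    bets instead, with the same guaranteed-profit logic.)"""
    cost = (price_e + price_not_e) * stake
    return stake - cost  # guaranteed profit whenever prices sum below 1

# Agent's incoherent prices: P(E) = 0.4, P(not E) = 0.5
profit = sure_profit(0.4, 0.5)
# profit is about 0.1 no matter which of E, not-E actually occurs
```

Only prices obeying the probability axioms (here, summing to exactly 1) leave no such guaranteed-loss book against the agent.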
> OK. So what? There’s more to life than that. That’s so terribly narrow. I mean, that part of what you’re saying is right as far as it goes, but it doesn’t go all that far.
Where else are you hoping to go?
> And when you start trying to apply it to harder cases—what happens? Do you have some Bayesian argument about who to vote for for president? Which convinced millions of people? Or should have convinced them, and really answers the questions much better than other arguments?
In principle, yes. There’s actually a computer program called AIXItl that does it. In practice I use approximations to it. It probably could be done to a very high degree of certainty. There are a lot of issues and a lot of relevant data.
> Well the Dutch books make it so you have to pick some probabilities.
Can you give an example? Use the ice cream flavors. What probabilities do you have to pick to buy ice cream without being Dutch booked?
> Where else are you hoping to go?
Explanatory knowledge. Understanding the world. Philosophical knowledge. Moral knowledge. Non-scientific, non-empirical knowledge. Beyond prediction and observation.
> In principle, yes.
How do you know whether your approximations are OK to make or whether they ruin things? How do you work out what kinds of approximations are and aren’t safe to make?
The way I would do that is by understanding the explanation of why something is supposed to work. In that way, I can evaluate proposed changes to see whether they mess up the main point or not.
Endo, I think you are making things more confusing by combining issues of Bayesianism with issues of utility. It might help to keep them more separate or to be clear when one is talking about one, the other, or some hybrid.
I use the term Bayesianism to include utility because (a) they are connected and (b) a philosophy of probabilities as abstract mathematical constructs with no applications doesn’t seem complete; it needs an explanation of why those specific objects are studied. How do you think that any of this caused or could cause confusion?
Well, it empirically seems to be causing confusion. See curi’s remarks about the ice cream example. Also, one doesn’t need Bayesianism to include utility and that isn’t standard (although it is true that they do go very well together).
Yes I see what you mean.
I think it goes a bit beyond this. Utility considerations motivate the choice of definitions. I acknowledge that they are distinct things, though.