Do you really think there is no meaningful sense in which a fair coin toss has 1⁄2 probability for Heads? That we can’t talk about probabilities at all without defining a utility function first?
To me such claims are very weird. Yes, betting is an obvious application of probability theory, but that doesn’t mean probability theory doesn’t exist without betting. Just as the fact that computers are an application of Boolean algebra doesn’t mean we can’t talk about Boolean algebra without bringing up computers.
Kolmogorov’s axioms do not require utility functions over the possible outcomes; that’s an extra entity. And, granted, this entity can be useful in some circumstances. But it also brings extra opportunities to make mistakes. And, oh boy, do people make them.
We can talk about numbers that obey the Kolmogorov axioms. But any real- or imaginary-world problem depends on what you are trying to do, i.e. utility. The Kolmogorov axioms don’t specify how you are supposed to construct the outcome space or decide which probabilities are relevant.
There are three distinct entities here:
1. A real-world problem that can be approximated by probability theory
2. A mathematical model from probability theory that approximates the real-world problem
3. A betting scheme on the possible outcomes
Basically, 1 is the territory, 2 is the map, and 3 is the navigation, or rather one specific way of navigating.
We can meaningfully talk about the map and whether it correctly represents the territory, even if we are not currently navigating the territory with this map.
Betting is a way to check whether the probabilities are correct. But it’s not the only one.
For example, by the law of large numbers we can check it just by running a simulation a large number of times. Personally I find this way more illustrative, and it doesn’t require defining utility functions or invoking decision theory.
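A minimal sketch of such a check in Python (the fair-coin setup and the helper name are just for illustration): by the law of large numbers, the relative frequency of Heads should approach 1⁄2 as the number of simulated tosses grows, and no bets or utilities appear anywhere.

```python
import random

def estimate_heads_frequency(n_flips: int) -> float:
    """Relative frequency of Heads in n_flips simulated tosses of a fair coin."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The frequency should get closer to 0.5 as the number of tosses grows.
for n in (100, 10_000, 1_000_000):
    print(n, estimate_heads_frequency(n))
```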
I’d argue that #3 is a better map than #2. In the territory, all probabilities are 0 or 1, and probability theory is about an agent’s uncertainty of which of these will be experienced in the future.
The resolution mechanism of the betting scheme is a concrete operational definition of what the “real world problem” actually is.
In the territory, all probabilities are 0 or 1, and probability theory is about an agent’s uncertainty of which of these will be experienced in the future.
You can be quantitatively uncertain about things even if you are not betting on them. Saying I have probability 1⁄2 for an event is no less accurate than saying I’m willing to accept betting odds better than 1⁄2 for that event. Actually, it’s a bit more on point: there may be reasons why you are not ready to bet at some odds that are unrelated to questions of probability. Maybe you do not have enough money. Or maybe you really hate betting as a thing, etc. And of course, as an extra bonus, you do not need to bring up the whole apparatus of decision theory just to talk about probabilities.
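For concreteness, here is a toy sketch of the credence-to-odds correspondence being discussed (the function name and the risk-neutral assumption that money equals utility are mine, not anything established in the thread): a credence of 1⁄2 makes even odds the break-even point, and "odds better than 1⁄2" are exactly the bets with positive expected gain.

```python
def expected_gain(credence: float, odds: float, stake: float = 1.0) -> float:
    """Expected gain from staking `stake` on an event at `odds`-to-1 payout,
    given a subjective probability (credence) for that event."""
    return credence * odds * stake - (1 - credence) * stake

# With credence 1/2, even (1-to-1) odds are exactly the break-even point:
print(expected_gain(0.5, 1.0))  # 0.0   -> indifferent
print(expected_gain(0.5, 1.5))  # 0.25  -> odds better than even, accept
print(expected_gain(0.5, 0.8))  # ~-0.1 -> odds worse than even, decline
```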
The resolution mechanism of the betting scheme is a concrete operational definition of what the “real world problem” actually is.
As I already said, the law of large numbers provides us with a way to test the map’s accuracy without betting. And since experimental resolution through betting would still require us to run the experiment multiple times, this approach doesn’t have any disadvantages in comparison.
I think I’m saying (probably badly) that events (and their impact on an agent, which are experiences) are in the territory, and probability is always and only in maps. It’s misleading to call it a “real-world problem” without noticing that probability is not in the real world.
To be quantitatively uncertain is isomorphic to making a (theoretical) bet. The resolution mechanism of the bet IS the “real-world problem” that you’re using probability to describe.
We can meaningfully talk about the map and whether it correctly represents the territory
In confusing anthropic situations, we shouldn’t. Correctness implies a one-dimensional measure and objectivity, and then people start arguing about what the “correct” probability is in Sleeping Beauty. You can invent some theory of subjective correctness, or label some mathematically isomorphic reasoning as incorrect but useful. Or you can use the existing general framework for subjective problems that works every time: utility. Even if you want to know what would maximize correctness, you can just make your utility function care only about being correct; that still makes the need to answer “correct when?” obvious.
The technical justification for all of this is that the meaning of correctness for probability is not checked, but defined by it being useful: the law of large numbers is a value-laden bridge law. The need for any approximation is derived from its usefulness.
Which of course doesn’t mean that in practice we can never factor utility out and usefully talk only about correctness. But that’s a shortcut, and if it leads to confusion, the confusion can be resolved by remembering what the point of using probability was from the start.
I’d say the opposite. The more confusing the case, the more important it is to make the model as simple as possible in order not to multiply possible sources of confusion.
Correctness implies a one-dimensional measure and objectivity, and then people start arguing about what the “correct” probability is in Sleeping Beauty.
Well, yes. Sleeping Beauty is actually a great example of why you should be careful about invoking betting while trying to solve probability theory problems, as you may stumble into a case that doesn’t satisfy the Kolmogorov axioms without noticing it. I’ll talk more about it after I’ve finished all the prerequisite posts. For now, suffice it to say that we can easily talk about different probabilities for Heads: per average awakening and per average experiment, just as we can talk about different betting schemes, and adding a betting scheme doesn’t make the problem any clearer.
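To illustrate that distinction with a minimal sketch (assuming the standard setup: one awakening on Heads, two on Tails), a simulation produces both numbers; they are simply relative frequencies that answer different questions, without taking a side on which one deserves to be called "the" probability.

```python
import random

def simulate(n_experiments: int) -> tuple[float, float]:
    """Relative frequency of Heads per experiment and per awakening."""
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_experiments):
        heads = random.random() < 0.5
        n_awakenings = 1 if heads else 2  # one awakening on Heads, two on Tails
        heads_experiments += 1 if heads else 0
        heads_awakenings += n_awakenings if heads else 0
        total_awakenings += n_awakenings
    return heads_experiments / n_experiments, heads_awakenings / total_awakenings

per_experiment, per_awakening = simulate(1_000_000)
print(per_experiment)  # ~1/2: Heads on the average experiment
print(per_awakening)   # ~1/3: Heads on the average awakening
```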
the law of large numbers is a value-laden bridge law.
I’m not sure what you mean by that. The law of large numbers is just a theorem of probability theory; it doesn’t require utility functions or betting.
I’m not sure what you mean by that. The law of large numbers is just a theorem of probability theory; it doesn’t require utility functions or betting.
I meant that the law is just a statement about probability, not about simulations confirming it. To conclude anything from simulations or any other observations, you need something more than just probability theory.
For now, suffice it to say that we can easily talk about different probabilities for Heads: per average awakening and per average experiment
Or per average odd awakening, if you only value half your days. Or per whatever awakening you need to define to minimize the product of squared errors. I feel like the question confused people want answered is more like “can you get new knowledge about the coin by awakening?”. But OK, looking forward to your next posts.