I’d argue that #3 is a better map than #2. In the territory, all probabilities are 0 or 1, and probability theory is about an agent’s uncertainty of which of these will be experienced in the future.
The resolution mechanism of the betting scheme is a concrete operational definition of what the “real world problem” actually is.
> In the territory, all probabilities are 0 or 1, and probability theory is about an agent’s uncertainty of which of these will be experienced in the future.
You can be quantitatively uncertain about things even if you are not betting on them. Saying “I assign probability 1⁄2 to an event” is no less accurate than saying “I’d accept betting odds better than 1⁄2 on it.” If anything, it is more to the point: there may be reasons you are unwilling to bet at certain odds that have nothing to do with probability. Maybe you do not have enough money, or maybe you simply hate betting. And as a bonus, you do not need to bring in the whole apparatus of decision theory just to talk about probabilities.
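The credence-to-betting correspondence (and where it breaks) can be sketched numerically. This is a hypothetical illustration, not code from the discussion: an agent with credence p finds a bet acceptable exactly when its expected value is positive, but constraints like bankroll can block the bet without changing p at all.

```python
def bet_ev(p, stake, payout):
    """Expected value of a bet: win `payout` with credence p, lose `stake` otherwise."""
    return p * payout - (1 - p) * stake

# Credence 1/2: an even-odds bet is exactly break-even...
assert bet_ev(0.5, stake=1.0, payout=1.0) == 0.0
# ...and better-than-even odds give positive expected value.
assert bet_ev(0.5, stake=1.0, payout=1.5) > 0
# But an agent with zero bankroll refuses every stake regardless of p,
# so willingness to bet tracks more than probability alone.
```

The point of the sketch is that p appears in the expected-value calculation independently of whether any bet is ever offered or accepted.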
> The resolution mechanism of the betting scheme is a concrete operational definition of what the “real world problem” actually is.
I don’t see how this follows from the claim that in the territory all probabilities are 0 or 1. And as I already said, the law of large numbers gives us a way to test a map’s accuracy without betting. Since experimental resolution through betting would still require running the experiment many times, the betting-free approach has no disadvantages relative to it.
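The law-of-large-numbers test can be sketched in a few lines. A hypothetical example, with all names my own: assign probability 1⁄2 to a coin landing heads, run the experiment many times, and check that the empirical frequency converges toward the assigned probability, with no bets placed anywhere.

```python
import random

def empirical_frequency(p_true, n_trials, seed=0):
    """Simulate n_trials Bernoulli(p_true) events and return the observed frequency."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_true for _ in range(n_trials))
    return hits / n_trials

# The map says P(heads) = 0.5; the territory (here, the simulator) has p_true = 0.5.
freq = empirical_frequency(0.5, 100_000)
# By the law of large numbers, the observed frequency approaches the assigned
# probability, so a miscalibrated map would be exposed by a large enough sample.
assert abs(freq - 0.5) < 0.01
```

Running the same check with a map that assigns 0.6 to a fair coin would fail the tolerance, which is exactly the betting-free accuracy test described above.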
I think I’m saying (probably badly) that events (and their impact on an agent, which are experiences) are in the territory, while probability is always and only in maps. It’s misleading to call something a “real-world problem” without noticing that probability is not in the real world.
To be quantitatively uncertain is isomorphic to making a (theoretical) bet. The resolution mechanism of the bet IS the “real-world problem” that you’re using probability to describe.