(I expect that Scott, Abram or some others have already pointed this out, but somehow this clicked for me only recently. Pointers to existing discussions appreciated.)
A Bayesian update can be seen as a special case of a prediction market resolution.
Specifically, a Bayesian update is the case where each “hypothesis” has bet all their wealth across some combination of outcomes, and then the pot is winner-takes-all (or split proportionally when there are multiple winners).
The problem with Bayesianism is then obvious: what happens when there are no winners? Your epistemology is “bankrupt”, the money vanishes into the ether, and bets on future propositions are undefined.
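To make the failure mode concrete, here is a minimal sketch (toy numbers are mine, not from the post) of what happens when every hypothesis assigns zero probability to the observed outcome:

```python
# Two hypotheses go all-in; neither put any probability on what actually happened.
priors = {"H1": 0.6, "H2": 0.4}
likelihood = {"H1": 0.0, "H2": 0.0}  # P(observed outcome | hypothesis) for each

# The marginal probability of the observation is 0 -- there are no winners.
marginal = sum(priors[h] * likelihood[h] for h in priors)

# The Bayesian posterior priors[h] * likelihood[h] / marginal is 0/0:
# attempting it raises ZeroDivisionError. The "pot" is undefined.
```

The "money vanishing into the ether" is exactly this undefined 0/0 renormalization.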
So why would a hypothesis go all-in like that? Well, that’s actually the correct “cooperative” strategy in a setting where you’re certain that at least one of them is exactly correct.
To generalize Bayesianism, we want to instead talk about what the right “cooperative” strategy is when a) you don’t think any of them are exactly correct, and b) when each hypothesis has goals too, not just beliefs.
Yep, see this paper—Bayesian updating is the same as having all the hypotheses in your head be agents in a prediction market using the Kelly criterion, where their initial wealths are your prior probabilities on those hypotheses, the market price of some observation is your marginal probability over that observation, and the wealths after the bets resolve are the posterior probabilities.
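The equivalence is easy to check numerically. Below is a minimal sketch (hypothesis names, outcomes, and all numbers are made up for illustration): a log-utility (Kelly) bettor in a complete market stakes the fraction q(o) of its wealth on each outcome o, whatever the prices are, so the post-resolution wealths coincide with the Bayesian posterior.

```python
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}  # initial wealths = prior probabilities
beliefs = {                                  # each hypothesis's distribution over outcomes
    "H1": {"rain": 0.9, "dry": 0.1},
    "H2": {"rain": 0.5, "dry": 0.5},
    "H3": {"rain": 0.1, "dry": 0.9},
}
outcomes = ["rain", "dry"]

# Market-clearing price of each outcome = wealth-weighted average belief,
# i.e. your marginal probability of that observation.
price = {o: sum(priors[h] * beliefs[h][o] for h in priors) for o in outcomes}

observed = "rain"
# Each dollar staked on the winning outcome pays 1/price(observed),
# so hypothesis h's wealth becomes priors[h] * beliefs[h][observed] / price[observed].
wealth_after = {h: priors[h] * beliefs[h][observed] / price[observed] for h in priors}

# Ordinary Bayes' rule, for comparison: prior * likelihood / marginal.
marginal = sum(priors[g] * beliefs[g][observed] for g in priors)
posterior = {h: priors[h] * beliefs[h][observed] / marginal for h in priors}
```

The two dictionaries agree exactly: the market's resolution *is* the Bayesian update.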
To generalize Bayesianism, we want to instead talk about what the right “cooperative” strategy is when a) you don’t think any of them are exactly correct, and b) when each hypothesis has goals too, not just beliefs.
Unclear to me how (b) connects to the rest of this post. Is it about each hypothesis being cautious not to bet all of their wealth, because they care about other stuff than winning the market?
The most obvious/naive/hacky solution is something like sub-probability (adds up to ≤1, so the truth might lie beyond your hypothesis space) with Jeffrey updating or updating via virtual evidence (which handles the “none of them are exactly correct” part).
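For the Jeffrey-updating part of that proposal, here is a minimal sketch (partition labels and numbers are toy values I made up): instead of conditioning on a hard observation, you shift the marginal over an evidence partition E to new values q while keeping the conditionals P(A|e) fixed.

```python
# Joint distribution P(A, E) over A in {"a", "b"} and partition E in {"e1", "e2"}.
joint = {
    ("a", "e1"): 0.3, ("a", "e2"): 0.2,
    ("b", "e1"): 0.1, ("b", "e2"): 0.4,
}
q = {"e1": 0.7, "e2": 0.3}  # new (uncertain) probabilities over the partition

# Old marginal over the partition, P(e).
p_e = {e: joint[("a", e)] + joint[("b", e)] for e in q}

# Jeffrey's rule: P'(a) = sum_e q(e) * P(a | e).
new_p = {a: sum(q[e] * joint[(a, e)] / p_e[e] for e in q) for a in ("a", "b")}
```

Hard conditioning is the special case q(e) = 1 for a single cell; letting the q values sum to less than 1 is one way to represent the sub-probability "the truth lies outside my hypothesis space" mass.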
Someone somewhere connected sub-probability measures with intuitionistic logic: a market, instead of resolving to exactly one of the options, may simply fail to resolve, or not resolve within a relevant time frame.
Indeed, in algorithmic information theory the lower semicomputable semimeasures are an example of "subprobabilities." Much has been written about updating in this context.
Yep: https://www.lesswrong.com/posts/3hs6MniiEssfL8rPz/judgements-merging-prediction-and-evidence