Again, if there’s a mistake, it would be helpful if you could explain exactly what that mistake is. You’re sort of stating that the conclusion is mistaken and then giving a parallel argument for a different conclusion. It would be great (for multiple reasons) if you could explain exactly where my argument fails.
It might be helpful to focus on this example, which is pretty self-contained:
Suppose there’s a conditional prediction market for two coins. After a week of bidding, the markets will close; whichever coin had contracts trading for more money will be flipped, and contract-holders will be paid $1 if it lands heads. The other market is cancelled and all trades are refunded.
Suppose you’re sure that coin A has a bias of 60%: if you flip it lots of times, 60% of the flips will be heads. But you’re convinced coin B is a trick coin. You think there’s a 59% chance it always lands heads, and a 41% chance it always lands tails. You’re just not sure which.
We want you to pay more for a contract on coin A, since that’s the coin you think is more likely to land heads (60% vs 59%). But if you like money, you’ll pay more for a contract on coin B. You’ll do that because other people might figure out if it’s an always-heads coin or an always-tails coin. If it’s always heads, great, they’ll bid up the market, it will activate, and you’ll make money. If it’s always tails, they’ll bid down the market, and you’ll get your money back.
You’ll pay more for coin B contracts, even though you think coin A is better in expectation. Order is not preserved. Things do not work out.
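To make the expected-value comparison concrete, here’s a minimal sketch of the calculation, under two assumptions that are doing real work: your own bid can’t change which market activates, and other traders will learn coin B’s type before close and bid accordingly.

```python
# A minimal sketch of the expected-value comparison above, assuming (1) your own
# bid cannot change which market activates, and (2) other traders will learn coin
# B's type before close and bid its market up or down accordingly. The function
# names are just for illustration.

def expected_profit_coin_a(price):
    # Coin A: if its market activates, the contract pays $1 with probability 0.60
    # (ignoring the case where A's market is the one that gets cancelled).
    return 0.60 * 1.0 - price

def expected_profit_coin_b(price):
    # Coin B: with probability 0.59 it's always-heads, the informed crowd bids B up,
    # the market activates, and the contract pays $1. With probability 0.41 it's
    # always-tails, the crowd bids B down, B's market is cancelled, and your money
    # is refunded (profit 0).
    return 0.59 * (1.0 - price) + 0.41 * 0.0

price = 0.60
print(expected_profit_coin_a(price))  # 0.0   -- a contract on A is worth at most $0.60
print(expected_profit_coin_b(price))  # ~0.24 -- a contract on B looks strictly better at the same price
```

Under those assumptions you’d keep bidding B up well past $0.60, which is exactly the order-reversal I’m worried about.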
Are you claiming that this is mistaken, or rather that this is correct but it’s not a problem? (Of course, if this example is not central to what you see as a mistake, it could be the wrong thing to focus on.)
I’ve seen one argument which seems related to the one you’re making, and which I do agree with. Namely, right before the market closes, the final bidder has an incentive to bid their true beliefs, provided they know they will be the final bidder. I certainly accept that this is true. If you know the final closing price, then Y is no longer a random variable, and you’re essentially just bidding in a non-conditional prediction market. I don’t think this is completely reassuring on its own, though, because there’s a great deal of tension with the whole idea of having a market equilibrium that reflects collective beliefs. I think you might be able to generalize this into some kind of an argument that, as you get closer to closing, there’s less randomness in Y and so you have more of an incentive to be honest. But this worries me because it would appear to lead to weird dynamics where people wait until the last second to bid. Of course, this might be a totally different direction from what you’re thinking.
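To make that final-bidder observation concrete, here’s a minimal sketch, assuming Y is fixed and known (you already know which market will activate and your bid can’t change that):

```python
# A minimal sketch of the final-bidder point: if you already know which market
# will activate (Y is fixed rather than random), a contract is just a straight
# bet on its coin, so your break-even price is exactly your true probability of
# heads and you have no incentive to shade your bid.

def expected_profit_when_activation_is_certain(p_heads, price):
    # Expected profit is p_heads - price: positive exactly when price < p_heads,
    # so you should bid up to your true belief and no further.
    return p_heads - price

print(expected_profit_when_activation_is_certain(0.60, 0.60))  # 0.0 -- honest break-even for coin A
print(expected_profit_when_activation_is_certain(0.59, 0.59))  # 0.0 -- honest break-even for coin B
```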
Are you claiming that this is mistaken, or rather that this is correct but it’s not a problem?
mistaken.
But if you like money, you’ll pay more for a contract on coin B.
this is an invalid step. it’s true in some cases but not others, depending on how the act of paying for a contract on coin B (with no additional knowledge of whether it’s double-headed) affects the chance that the market tosses coin B.
you’ll pay more for a contract on coin B. You’ll do that because other people might figure out if it’s an always-heads coin or an always-tails coin. If it’s always heads, great, they’ll bid up the market, it will activate, and you’ll make money. If it’s always tails, they’ll bid down the market, and you’ll get your money back.
So8res seems to be arguing that this reasoning only holds if your own purchase decision can’t affect the market (say, if you’re making a private bet on the side and both you and your counter-party are sworn to Bayesian secrecy). If your own bet could possibly change which contract activates, then you need to worry that contract B activates because you bid more than your true belief on it, in which case you lose money in expectation.
(Easy proof: Assume all market participants have precisely the same knowledge as you, and all follow your logic; what happens?)
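To spell that proof out with numbers (my gloss, with a made-up closing price):

```python
# A toy version of the "easy proof" above (my gloss, with a made-up price): every
# participant has exactly your information and follows the "bid B above its true
# value" logic, so coin B's market activates at some price above $0.59 even though
# nobody ever learned which kind of coin it is.

def expected_profit_when_everyone_bids_up_b(price):
    # Conditional on B activating this way, the contract still only pays $1 with
    # probability 0.59, so any price above $0.59 is a loss in expectation.
    return 0.59 * 1.0 - price

print(expected_profit_when_everyone_bids_up_b(0.62))  # -0.03: you lose money in expectation
```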
I think dynomight’s reasoning doesn’t quite hold even when your own bet is causally isolated, because:
1. In order for you to pay more than $0.59, you need to believe that the market is at least correlated with reality; that it’s more likely to execute contract B if contract B actually is more valuable. (This is a pretty weak assumption, but still an important one.)
2. In order for you to pay more than $0.60 (not merely $0.59 + epsilon), you not only need to believe that the market is correlated with reality, you need a quantitative belief that the correlation has at least a certain strength (enough to outweigh the $0.01 gap). It’s not enough for it to be theoretically possible that someone has better info than you; it needs to be plausible above a certain quantitative threshold.
You can sort-of eliminate assumption #2 if you rework the example so that your true beliefs about A and B are essentially tied, but if they’re essentially tied then it doesn’t pragmatically matter if we get the order wrong. Assumption #2 places a quantitative bound on how wrong they can be based on how plausible it is that the market outperforms your own judgment.
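To put a rough number on assumption #2, here is one toy model (my own construction; in particular, what the market does when nobody learns anything is itself an assumption): with probability q the rest of the market identifies coin B’s type before close and the “right” market activates; with probability 1 - q nobody learns anything and B activates anyway, because traders like you have bid it up.

```python
# A toy model for the quantitative threshold in assumption #2 (my construction,
# not dynomight's or So8res's). With probability q the market identifies coin B's
# type before close, so B activates only if it's the always-heads coin (and you
# are refunded otherwise); with probability 1 - q nobody learns anything and B
# activates anyway, paying $1 with probability 0.59.
#
# Expected profit at price p:  q * 0.59 * (1 - p) + (1 - q) * (0.59 - p)
# which is zero at the break-even price  p* = 0.59 / (1 - 0.41 * q).

def break_even_price_b(q):
    # q = probability the market identifies coin B's type before close
    return 0.59 / (1 - 0.41 * q)

for q in [0.0, 0.04, 0.05, 0.5, 1.0]:
    print(q, round(break_even_price_b(q), 3))
# 0.0  -> 0.59   (no information: B is worth its unconditional expectation)
# 0.04 -> 0.60   (roughly the threshold for outbidding coin A)
# 0.05 -> 0.602
# 0.5  -> 0.742
# 1.0  -> 1.0    (perfectly informed market: you only ever hold a live heads-coin contract)
```

In this particular toy model the threshold is low (around q of 4%), but the point stands: it is a quantitative threshold, not merely a qualitative “someone might know better than me” condition.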