the trick is that the argument stops working for conditions that start to look like they might trigger. so the argument doesn’t disrupt the idea that conditional prediction markets put the highest price on the best choice, but it does disrupt the idea that the pricings for unlikely conditions are counterfactually accurate.
for intuition, suppose there's a conditional prediction market for medical treatments for cancer. one of the treatments is "cut off the left leg." if certain scans and tests come back just the right way (1% chance), then cutting off the left leg is the best possible treatment, but otherwise it's a dumb idea. if that contract is trading as if it's very unlikely to be a good idea, you can buy it up at very low risk: most likely, the contract is canceled and you get your money back. but on the off-chance that the scans and tests come back in just the right way, you make a killing.
however, this incentive only exists insofar as “cut off the left leg” is not at risk of winning (before the tests and scans come back). if you thought the leg was going to be cut off simply because everyone else was buying the price up, and that it wouldn’t actually heal the cancer, you’d sell off your shares.
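a minimal sketch of the expected-value arithmetic in the leg example, with made-up numbers for the contract price and conditional payoff (the only number given above is the 1% trigger chance):

```python
# All prices here are illustrative assumptions, not from the example above.
p_trigger = 0.01          # chance the scans/tests come back "just right"
price = 0.03              # assumed price you pay per contract
value_if_executed = 0.90  # assumed contract value when amputation really is best

# If the condition never triggers, the contract is canceled and your money is
# refunded, so that branch nets exactly zero (ignoring opportunity cost).
ev = p_trigger * (value_if_executed - price)
print(f"expected profit per contract: ${ev:.4f}")  # small, positive, ~no downside
```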
this argument implies that you can’t trust conditional prediction markets with cardinal ranking. however, afaict you’d need other arguments to imply that you can’t trust their choice of “best action.” such arguments probably exist, but this one isn’t sufficient for it. (off the top of my head, i’d consider the case where option A is better for society and option B is better for some malicious actor, and the malicious actor is rich enough to convince the market to take option B. my intuition without thinking further is that this should ‘obviously’ work if the malicious actor is rich enough, in a fashion that’s disanalogous to prediction markets in that it’s not solved automatically with time (because the counterfactuals never settle and so the more-correct traders can’t drain the rich-manipulator’s money), but i haven’t actually thought about it.)
the trick is that the argument stops working for conditions that start to look like they might trigger.
Can you give an argument for this claim? You're stating that there's an error in my argument, but you don't really engage with the argument or explain where exactly you think the error is.
For example, can you tell me what's incorrect in my example of two coins where you think one has a 60% probability and the other 59%, yet you'd want to pay more for a contract on the 59% coin? https://www.lesswrong.com/posts/vqzarZEczxiFdLE39/futarchy-s-fundamental-flaw#No__order_is_not_preserved (If you believe something is incorrect there.)
short version: the analogy between a conditional prediction market and the laser-scanner-simulation setup only holds for bids that don’t push the contract into execution. (similarly: i agree that, in conditional prediction markets, you sometimes wish to pay more for a contract that is less valuable in counterfactual expectation; but again, this happens only insofar as your bids do not cause the relevant condition to become true.)
longer version:
suppose there’s a coin that you’re pretty sure is biased such that it comes up heads 40% of the time, and a contract that pays out $1 if the market decides to toss the coin and it comes up heads, and suppose any money you pay for the contract gets refunded if the market decides not to toss the coin. suppose the market will toss the coin if the contracts are selling for more than 50¢.
your argument (as i understand it) correctly points out that it's worth buying the contract from 40¢ up to 45¢, because conditional on the market deciding to toss the coin, the market probably figured out that the coin actually isn't biased away from heads (e.g. via their laser-scanner and simulator). so either your 45¢ gets refunded or the contract is worth more than 45¢; either way you don't lose (aside from the opportunity cost of money). but note that this argument depends critically on the step "the contract going above 50¢ is evidence that the market has determined that the coin is biased towards heads," and that step only holds insofar as the people bidding the contracts from (say) 49¢ to 51¢ have actually worked out the coin's real bias (e.g., have actually run a laser-scanner or whatever).
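a sketch of that reasoning with illustrative numbers (the chance the market tosses the coin, and what informed traders would have found, are assumptions on my part):

```python
your_bid = 0.45               # what you pay per contract
p_market_tosses = 0.20        # assumed chance the price ends above 50¢
heads_prob_given_toss = 0.55  # ASSUMPTION: crossing 50¢ means informed traders
                              # determined P(heads) > 0.5 (here, 0.55)

# if the coin isn't tossed, you're refunded, so that branch nets exactly zero.
ev = p_market_tosses * (heads_prob_given_toss - your_bid)
print(f"expected profit: ${ev:.4f} per contract")  # positive: a near-free option

# the step that can fail: if it's your own bid that pushes the price over 50¢,
# then "price above 50¢" is no longer evidence of informed trading, and the
# contract is worth only your own belief: 0.40 < 0.45, an expected loss.
print(f"if your bid caused the toss: ${0.40 - your_bid:.4f} per contract")
```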
intuition pump: suppose you bid the coin straight from 40¢ to 51¢ yourself, by accident, while still believing that the coin is very likely to come up heads only 40% of the time. the market closes in 5 minutes. what should you do? surely the answer is not "reason that, because the market price is above 50¢, somebody must have figured out that the coin is actually biased towards heads;" that'd be madness. sell.
more generally, nobody should bid a conditional branch into the top position unless they personally believe that it’s worth it in counterfactual expectation. (or in other words: the conditional stops meaning “somebody else determined this was worth it” when you’re the one pushing it over the edge; so when it comes to the person pushing the contract into the execution zone, from their perspective, the conditional matches the counterfactual.)
Again, if there’s a mistake, it would be helpful if you could explain exactly what that mistake is. You’re sort of stating that the conclusion is mistaken and then giving a parallel argument for a different conclusion. It would be great (for multiple reasons) if you could explain exactly where my argument fails.
It might be helpful to focus on this example, which is pretty self-contained:
Suppose there's a conditional prediction market for two coins. After a week of bidding, the markets will close; whichever coin had contracts trading for more money will be flipped, and $1 paid to contract-holders for heads. The other market is cancelled.
Suppose you're sure that coin A has a bias of 60%: if you flip it lots of times, 60% of the flips will be heads. But you're convinced coin B is a trick coin. You think there's a 59% chance it always lands heads, and a 41% chance it always lands tails. You're just not sure which.
We want you to pay more for a contract for coin A, since that’s the coin you think is more likely to be heads (60% vs 59%). But if you like money, you’ll pay more for a contract on coin B. You’ll do that because other people might figure out if it’s an always-heads coin or an always-tails coin. If it’s always heads, great, they’ll bid up the market, it will activate, and you’ll make money. If it’s always tails, they’ll bid down the market, and you’ll get your money back.
You’ll pay more for coin B contracts, even though you think coin A is better in expectation. Order is not preserved. Things do not work out.
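A minimal sketch of the arithmetic behind this, under the example's assumptions that other traders will resolve coin B's type and that your own bid can't change which market activates:

```python
p_A_heads = 0.60         # coin A: known 60% bias
p_B_always_heads = 0.59  # coin B: 59% always-heads, 41% always-tails

# Contract on A, conditional on A's market activating: worth exactly $0.60.
value_A = p_A_heads

# Contract on B, conditional on B's market activating: per the story, B only
# activates when informed traders have bid it up because B is the always-heads
# coin, so the contract pays $1; if B is always-tails, the market is canceled
# and you're refunded. So conditional on your money being at risk, B pays $1.
value_B = 1.00

print(f"A contract, given activation: ${value_A:.2f}")
print(f"B contract, given activation: ${value_B:.2f}")
# Under these assumptions you'd rationally pay more for B than for A, even
# though A has the higher unconditional chance of heads (60% vs 59%).
```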
Are you claiming that this is mistaken, or rather that this is correct but it’s not a problem? (Of course, if this example is not central to what you see as a mistake, it could be the wrong thing to focus on.)
I’ve seen one argument which seems related to the one you’re making and I do agree with. Namely, right before the market closes the final bidder has an incentive to bid their true beliefs, provided they know they will be the final bidder. I certainly accept that this is true. If you know the final closing price, then Y is no longer a random variable, and you’re essentially just bidding in a non-conditional prediction market. I don’t think this is completely reassuring on its own, though, because there’s a great deal of tension with the whole idea of having a market equilibrium that reflects collective beliefs. I think you might be able to generalize this into some kind of an argument that as you get closer to closing, there’s less randomness in Y and so you have more of an incentive to be honest. But this worries me because it would appear to lead to weird dynamics where people wait until the last second to bid. Of course, this might be a totally different direction from what you’re thinking.
Are you claiming that this is mistaken, or rather that this is correct but it’s not a problem?
mistaken.
But if you like money, you’ll pay more for a contract on coin B.
this is an invalid step. it’s true in some cases but not others, depending on how the act of paying for a contract on coin B (with no additional knowledge of whether it’s double-headed) affects the chance that the market tosses coin B.
you’ll pay more for a contract on coin B. You’ll do that because other people might figure out if it’s an always-heads coin or an always-tails coin. If it’s always heads, great, they’ll bid up the market, it will activate, and you’ll make money. If it’s always tails, they’ll bid down the market, and you’ll get your money back.
So8res seems to be arguing that this reasoning only holds if your own purchase decision can’t affect the market (say, if you’re making a private bet on the side and both you and your counter-party are sworn to Bayesian secrecy). If your own bet could possibly change which contract activates, then you need to worry that contract B activates because you bid more than your true belief on it, in which case you lose money in expectation.
(Easy proof: Assume all market participants have precisely the same knowledge as you, and all follow your logic; what happens?)
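Spelling out that proof with numbers (the B price is an assumption, chosen just above coin A's $0.60):

```python
p_B_always_heads = 0.59
price_paid_for_B = 0.61  # assumed: whatever it took to outbid the A market

# Nobody in this scenario ever learns coin B's true type, so B activating
# carries no evidence about it; the contract is worth just the prior:
value_B_given_activation = p_B_always_heads * 1.00  # = $0.59

print(f"expected profit per B contract: "
      f"${value_B_given_activation - price_paid_for_B:.2f}")
# Negative (-$0.02): B activates *because* everyone overbid, not because
# anyone learned anything, so every buyer loses in expectation; and the
# market flips the worse coin, to boot.
```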
I think dynomight’s reasoning doesn’t quite hold even when your own bet is causally isolated, because:
In order for you to pay more than $.59, you need to believe that the market is at least correlated with reality; that it’s more likely to execute contract B if contract B actually is more valuable. (This is a pretty weak assumption, but still an important one.)
In order for you to pay more than $.60 (not merely $.59 + epsilon), you not only need to believe that the market is correlated with reality, you need a quantitative belief that the correlation has at least a certain strength (enough to outweigh the $.01 gap). It's not enough for it to be theoretically possible that someone has better info than you; that possibility needs to clear a certain quantitative threshold of plausibility. (The toy computation below makes this concrete.)
You can sort-of eliminate assumption #2 if you rework the example so that your true beliefs about A and B are essentially tied, but if they’re essentially tied then it doesn’t pragmatically matter if we get the order wrong. Assumption #2 places a quantitative bound on how wrong they can be based on how plausible it is that the market outperforms your own judgment.
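To make assumption #2 concrete, here's a toy Bayes computation (the model is mine, purely for illustration): suppose that with probability r the market is genuinely informed and activates B only if B is always-heads, and with probability 1 - r it's uninformed and activates B half the time regardless. The B contract's value, conditional on activation, is then the posterior probability that B is always-heads:

```python
def value_of_B_given_activation(r: float, prior_heads: float = 0.59) -> float:
    """Posterior P(B is always-heads | B's market activates) in the toy model:
    informed (prob r) -> activate B iff always-heads; uninformed -> coin flip."""
    p_activate_and_heads = prior_heads * (r + (1 - r) * 0.5)
    p_activate = r * prior_heads + (1 - r) * 0.5
    return p_activate_and_heads / p_activate

for r in (0.0, 0.01, 0.02, 0.03, 0.10, 0.50):
    print(f"r = {r:.2f}: B contract worth ${value_of_B_given_activation(r):.4f}")
# r = 0 recovers the prior, $0.59; the value crosses $0.60 near r = 0.021.
# So in this toy model even a ~2% chance that the market is informed justifies
# paying over $0.60 for B, but it is a genuine quantitative threshold rather
# than a free pass, which is the point of assumption #2.
```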