https://www.lesswrong.com/tag/functional-decision-theory argues for choosing as if you’re choosing the policy you’d follow in some situation before you learnt any of the relevant information. In many games, having a policy of making certain choices (that others could perhaps predict, and adjust their own choices accordingly) gets you better outcomes than just always doing what seems like a good idea at the time. For example, if someone credibly threatens you, you might be better off paying them to go away, but before you got the threat you would’ve preferred to commit yourself to never paying up, so that people don’t threaten you in the first place.
A problem with arguments of the form “I expect that predictably not paying up will cause them not to threaten me” is that at the time you receive the threat, you now know that argument to be wrong. They’ve proven to be somebody who threatens you even though you follow FDT, at which point you can simultaneously prove that refusing the threat doesn’t work and so you should pay up (because you’ve already seen the threat), and that you shouldn’t pay up, by whatever FDT logic you were using before. The behaviour of agents who can prove a contradiction directly relevant to their decision function seems undefined. There needs to be some logical structure that lets you pick which information determines your choice, despite having enough in total to derive contradictions.
My alternative solution is that you shouldn’t be fully convinced by the evidence you see that they’ve actually already threatened you. It’s also possible you’re still inside their imagination as they decide whether to issue the threat. Whenever something is conditional on your actions in an epistemic state, without being conditional on that epistemic state actually being valid (such as when someone predicts how you’d respond to a hypothetical threat before issuing it, knowing you’ll know it’s too late to stop them once you get it), then there’s a ghost being lied to, and you should think you might be that ghost, to justify ignoring the threat, rather than trying to make decisions inside a logically impossible situation.
Valid; I’m still working on a proper writeup of the version with full math, which is much more complicated. Without that math, and without payment, it consists of people stating their beliefs and being mysteriously believed, because everyone knows everyone is incentivised to be honest and sincere, and the Agreement Theorem says that means they’ll agree once they all know everyone else’s reasoning.
A possible example, which I think is the minimal case for any kind of market information system like this:
weather.com wants accurate predictions 7 days in advance for a list of measurements that will happen at weather measurement stations around the world, to inform its customers.
It proposes a naive prior, something like every measurement being a random sample from the past history.
It offers to pay $1 million in reward per expected bit of information about the average sensor (measured relative to the naive prior, and assessed against the eventual outcomes), for predictions submitted before the weekly deadline. That means that if the rain sensors are all currently estimated at a 10% chance of rain, and you move half of them to 15% and the other half to 5%, you should expect a profit proportional to the expected bits you added (conditional on your update actually being legitimate).
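One way to make “expected bits of information” precise is the KL divergence between the trader’s prediction and the prior, under the assumption that the prediction is accurate. A minimal sketch of the rain example (the function name and the per-sensor averaging are my illustration, not part of the proposal):

```python
from math import log2

def bernoulli_kl_bits(q: float, p: float) -> float:
    """Expected bits of information from predicting probability q when the
    prior was p, assuming q is the true probability (KL divergence, base 2)."""
    return q * log2(q / p) + (1 - q) * log2((1 - q) / (1 - p))

# Prior: every rain sensor at 10%. A trader moves half to 15%, half to 5%.
prior = 0.10
gain = 0.5 * bernoulli_kl_bits(0.15, prior) + 0.5 * bernoulli_kl_bits(0.05, prior)
print(f"average information gain: {gain:.4f} bits per sensor")
```

Under this measure the example update is worth a small fraction of a bit per sensor, so the payout scales with both how far you move the estimates and how many sensors you move.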
The market consists of many meteorologists looking at past data and any other relevant information they can find, and sharing the beliefs they reach about the future measurements, in the form of a statistical model / distribution over possible sensor values. After making their own models, they can compare them and consider the ideas others thought of that they didn’t, until the Agreement Theorem says they should reach a common agreed prediction about the likelihood of combinations of outcomes.
How they reach agreement is up to them, but to prevent overconfidence you’ve got the threat that others can just bet against you and if you’re wrong you’ll lose, and to prevent underconfidence you’ve got the offer from the customer that they’ll pay out for higher information predictions.
That distribution becomes the output of the information market, and the customer pays for it according to how much information it contains over their naive prior, at the agreed rate.
How payment works is basically that everyone is kept honest by being paid in carefully shaped bets, designed to be profitable in expectation if their claims are true and losing in expectation if their claims are false or made up. If the market knows you’re making it up, they can call your bluff before the prediction goes out by betting strongly against you, but there doesn’t need to be another trader willing to do that: if the change in prediction you cause is not a step towards more accuracy, your bet will lose on average and you’d be better off not playing.
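One standard way to shape such bets is the logarithmic scoring rule: your payoff is the log-likelihood ratio of the outcome under your prediction versus the prior. This is my illustration of the kind of bet meant, not necessarily the author’s exact mechanism; the key property is that the expected payoff is positive exactly when your prediction is closer to the truth than the prior:

```python
from math import log2

def log_score_payoff(q: float, p: float, rained: bool) -> float:
    """Payoff (in bits) for moving a rain estimate from p to q, settled on
    the outcome. Positive iff the outcome was more likely under q than p."""
    return log2(q / p) if rained else log2((1 - q) / (1 - p))

def expected_payoff(truth: float, q: float, p: float) -> float:
    """Expected payoff when the true rain probability is `truth`."""
    return (truth * log_score_payoff(q, p, True)
            + (1 - truth) * log_score_payoff(q, p, False))

# If the true chance of rain is 15%: an honest move from 10% to 15% wins in
# expectation, while bluffing the market up to a made-up 30% loses.
print(expected_payoff(0.15, 0.15, 0.10))
print(expected_payoff(0.15, 0.30, 0.10))
```

So a trader who fabricates information is betting against reality, and loses on average even if nobody else in the market catches the bluff.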
This is insanely high-risk for something like a single boolean market, where your bet will often lose by simple luck, but with a huge array of mostly uncorrelated features to predict, anyone actually adding information can expect to win enough bets on average to collect their earned profit.
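A small Monte Carlo sketch of why the variance washes out across many sensors. It reuses the earlier 15%-vs-10% rain numbers; the sensor count, independence assumption, and log-score payoff are my assumptions for illustration:

```python
import random
from math import log2

random.seed(0)

PRIOR, TRUTH = 0.10, 0.15  # market prior vs. the informed trader's correct belief
N_SENSORS = 5000           # many mostly-uncorrelated measurements

total_bits = 0.0
wins = 0
for _ in range(N_SENSORS):
    rained = random.random() < TRUTH
    payoff = log2(TRUTH / PRIOR) if rained else log2((1 - TRUTH) / (1 - PRIOR))
    total_bits += payoff
    wins += payoff > 0

print(f"net payoff: {total_bits:.1f} bits over {N_SENSORS} sensors")
print(f"individual bets won: {wins}/{N_SENSORS}")
```

Notably, the informed trader loses most individual bets by a small amount (it usually doesn’t rain) but wins big on the rainy days, so the total comes out reliably positive — the single-boolean version of the same bet would be a coin-flip-like gamble.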