I’m doing mechanism design for eliciting information without money. Most people here are aware of scoring rules and prediction markets, which reward participants according to the accuracy of their predictions. Drazen Prelec’s Bayesian truth serum (BTS) is an alternate mechanism that rewards predictions relative to the answers of others instead of the actual event. Since verification is done internally, the mechanism works for questions that would be difficult or impossible to evaluate on a prediction market, e.g. “Will super-human AI be built in the next 100 years?” or “Which of these ten novels was the most innovative and ground-breaking?”.
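As a toy illustration of how a proper scoring rule rewards accuracy (not from the post; the function names and the 0.7 belief are my own), here is a quadratic (Brier-style) score for a binary event, with a quick numerical check that reporting your true belief maximizes your expected score:

```python
import numpy as np

def brier_score(forecast, outcome):
    """Quadratic (Brier-style) score for a binary event; higher is better."""
    return 1.0 - (outcome - forecast) ** 2

def expected_score(forecast, true_belief):
    """Expected score when the event occurs with probability true_belief."""
    return (true_belief * brier_score(forecast, 1)
            + (1 - true_belief) * brier_score(forecast, 0))

# A proper scoring rule is maximized by reporting your true belief:
grid = np.linspace(0, 1, 101)
best = grid[np.argmax(expected_score(grid, 0.7))]
print(best)  # the optimal report is ~0.7, the true belief
```

Differentiating the expected score gives 2p(1 − f) − 2(1 − p)f = 0, i.e. f = p, which is why the grid search lands on the true belief.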
All three types of mechanisms assume the participants want to maximize their score from the mechanism. In many circumstances, though, people care much more about influencing the outcome of the mechanism than about their score or payment. Consider a committee making a high-stakes decision, like whether to fire an executive officer. Paying committee members based on their predictions would be gauche, and scores could simply be ignored if that meant getting a favored outcome, so without money BTS is easily manipulated. The usual fallback of majority vote is non-manipulable, but it can fail to uncover the correct answer if participants are biased. BTS outputs the right answer with enough participants, even in the presence of bias. To ensure truth-telling in Nash equilibrium, BTS does depend on participants having a common prior, although the mechanism operator doesn’t have to know what it is.
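For readers who haven't seen BTS, here is a minimal sketch of the score from Prelec's 2004 paper (the variable names and the `eps` clipping are my own). Each respondent submits an answer plus a predicted distribution of everyone's answers, and is scored on how "surprisingly common" their answer is plus how well their prediction matches the empirical answer frequencies:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    """Bayesian truth serum scores, following Prelec (2004).

    answers     : (n,) ints in {0..m-1}, each respondent's own answer
    predictions : (n, m) rows, each respondent's predicted distribution
                  of answers across the m options
    Returns an (n,) array: information score + alpha * prediction score.
    """
    answers = np.asarray(answers)
    predictions = np.clip(np.asarray(predictions, dtype=float), eps, None)
    n, m = predictions.shape

    # Empirical answer frequencies, clipped so the log is defined.
    xbar = np.clip(np.bincount(answers, minlength=m) / n, eps, None)
    # Log geometric mean of the predictions across respondents.
    log_ybar = np.log(predictions).mean(axis=0)

    # Information score: rewards answers that are more common than
    # collectively predicted ("surprisingly common").
    info = np.log(xbar)[answers] - log_ybar[answers]
    # Prediction score: minus the KL divergence from the empirical
    # frequencies to each respondent's prediction.
    pred = (xbar * (np.log(predictions) - np.log(xbar))).sum(axis=1)
    return info + alpha * pred
```

If everyone answers honestly and predicts the empirical frequencies exactly, both score components vanish; with a common prior, truthful reporting is a Nash equilibrium of the game this score defines.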
So far, I have mechanisms that encourage honesty without money, don’t depend on a common prior or on specific belief-formation processes, and capture ~80% of the potential gains over majority vote in simulations. The operation of the mechanism is fairly straightforward, although why it works is another question. I’m still trying to grasp what makes one mechanism estimate the state better than another, what the optimal mechanism is, or whether an optimal mechanism even exists given my constraints.
My primary focus is writing this up. At some point, I want to deploy a web app for polls on LW. I suspect this would be trivial for someone with actual development experience. I’m open to collaboration on the econ/stats or development side, so PM me if interested.
When you have a draft, can you post it to the discussion section of LW? I am very interested in these things.
Will do.
Why? Isn’t it just more expensive to manipulate? More people to bribe and all?
Individual participants don’t want to manipulate their own vote between two candidates, absent external incentives, so “incentive compatible” is more accurate than “non-manipulable.” Votes are still manipulable through sybil attacks: voting many times under false identities.
I’m treating majority vote as the status quo to improve upon, with incentive compatibility the basic standard for new mechanisms. Also eliminating vulnerability to sybil attacks would be great, but it’s not a high priority.