Capturing Uncertainty in Prediction Markets

I’ve been trying to make sense of what a 50% prediction means. Does it convey different meanings depending on the question asked, and is it really as uninformative as it’s made out to be? Out of that discussion, I believe, comes a possible solution to two big problems prediction markets currently face:

  1. How to extract useful signal from 50% predictions

  2. How to incentivize those with low certainty to participate

Predictions tend to involve two steps:

  1. What do I believe is going to happen?

  2. How certain am I?

There are many cases where, even if one has a sense of what is going to happen, there is little to no certainty that it will. The options here are skipping the question (not placing a bet) or entering 50% if you’re forced to choose.

I think it’s generally understood that using 50% to represent “I don’t know” is problematic. Picking 50% also holds an entirely different meaning for the question “Will this coin toss result in heads?” vs. “Will China invade Taiwan this year?”.

Similarly, as an observer of the market, does seeing the market at 50% really represent 1:1 odds? Or is it an indication that the market is extremely uncertain? There seems to be a useful signal missing here.

Finally, given that some of the most interesting questions in prediction markets are also about the most uncertain events, many people avoid participating or betting altogether. This is the opposite of what we want if these markets are to be useful.

A Possible Solution

There are two types of uncertainty, or “I don’t know”:

  1. I don’t know enough about this, so I won’t bet

  2. I’ve studied and researched this subject, and I still don’t know, so I won’t bet

Incentivizing the first type to place bets would add noisy signal. But capturing the second type of “I don’t know” seems to be a pretty important signal about the studied uncertainty of the event.

There have been some ideas about how to encourage more participation, like providing interest-free loans as an incentive. But I think that approach suffers from a few problems. First, it doesn’t distinguish between noisy and studied signals. Second, it incentivizes picking the option with more upside. And third, it encourages exactly the worst kind of punditry: picking an opinion with certainty even though you have no idea.

So, is there a way to capture studied uncertainty as useful signal, AND incentivize the participation of those who are highly uncertain?

I think this could be accomplished with an Uncertainty Index attached to every question, on which one can place bets. The index moves up and down based on what percentage of the people/money interacting with the question bet on the price/outcome vs. on the Uncertainty Index itself.

If more people are placing bets on uncertainty than on making a prediction, I make money.
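To make the mechanics concrete, here is a minimal sketch. The class name, the simple parimutuel-style pricing, and the exact payout threshold are all my own illustrative assumptions, not a worked-out market design:

```python
from dataclasses import dataclass

@dataclass
class Market:
    """Toy market state: money staked on YES, NO, and UNCERTAIN."""
    yes_stake: float
    no_stake: float
    uncertain_stake: float

    def implied_probability(self) -> float:
        # Price implied by the YES/NO money alone (simple parimutuel ratio).
        return self.yes_stake / (self.yes_stake + self.no_stake)

    def uncertainty_index(self) -> float:
        # Share of all money in the market betting on uncertainty itself.
        total = self.yes_stake + self.no_stake + self.uncertain_stake
        return self.uncertain_stake / total

    def uncertainty_bets_win(self) -> bool:
        # The rule above: uncertainty bettors profit when more money sits
        # on UNCERTAIN than on making a prediction.
        return self.uncertain_stake > self.yes_stake + self.no_stake

# Two markets can both read "50%" while carrying very different signals:
coin_flip = Market(yes_stake=500, no_stake=500, uncertain_stake=10)
invasion = Market(yes_stake=100, no_stake=100, uncertain_stake=800)
# Both show implied_probability() == 0.5, but the uncertainty index
# separates them (~0.01 vs. 0.8), and only the second pays UNCERTAIN bettors.
```

Note how this separates the two meanings of 50% from earlier: the coin-toss market and the invasion market show the same price, but very different index readings.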

In some ways, it would represent a kind of volatility index, but not exactly. As someone only cursorily familiar with derivatives, it seems to me this would be only partially tied to the existing price and the direction the price takes. And to whatever extent it is tied, it would offer a way to hedge bets on the original question.

There is at least some evidence that this would work, based on a few of the meta questions on Manifold. Here is an example related to Russia-Ukraine:

To implement this, Manifold could make the options ‘YES’, ‘NO’, and ‘UNCERTAIN’, which would make it intuitive to place the different types of bets.

The ability to place bets on an Uncertainty Index, or something similar that captures the core concept behind it, has the potential to encourage a lot more participation while also capturing an important signal of predictions: doubt that a meaningful prediction is even possible.