Estimate Stability

I’ve been trying to get clear on something you might call “estimate stability.” Steven Kaas recently posted my question to StackExchange, but we might as well post it here too:

I’m trying to reason about something I call “estimate stability,” and I’m hoping you can tell me whether there’s some relevant technical language...
What do I mean by “estimate stability”? Consider these three different propositions:
  1. We’re 50% sure that a coin (known to be fair) will land on heads.

  2. We’re 50% sure that Matt will show up at the party.

  3. We’re 50% sure that Strong AI will be invented by 2080.

These estimates feel different. One reason they feel different is that the estimates have different degrees of “stability.” In case (1) we don’t expect to gain information that will change our probability estimate. But for cases (2) and (3), we may well come upon some information that causes us to adjust the estimate either up or down.
So estimate (1) is more “stable,” but I’m not sure how this should be quantified. Should I think of it in terms of running a Monte Carlo simulation of what future evidence might be, and looking at something like the variance of the distribution of the resulting estimates? What happens when it’s a whole probability distribution, e.g. over the time at which Strong AI is invented? (Do you calculate the stability of the probability density for every year, then average the results?)
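Here is a minimal sketch of the Monte Carlo idea for the binary cases, using a Beta prior as a stand-in for current beliefs (the function name `estimate_spread` and the particular priors are my own illustrative choices, not anything canonical): sample a plausible true frequency from the prior, simulate some future evidence, and look at how spread out the resulting posterior estimates are.

```python
import random
import statistics

def estimate_spread(a, b, n_future=10, n_sims=20000, seed=0):
    """Monte Carlo measure of 'estimate stability': sample a plausible
    true frequency from the Beta(a, b) prior, simulate n_future
    observations, and record the posterior mean we would then hold.
    The standard deviation of those posterior means across simulations
    is one candidate way to quantify (in)stability."""
    rng = random.Random(seed)
    posterior_means = []
    for _ in range(n_sims):
        p = rng.betavariate(a, b)  # a plausible true value under our prior
        k = sum(rng.random() < p for _ in range(n_future))  # future evidence
        posterior_means.append((a + k) / (a + b + n_future))  # conjugate update
    return statistics.stdev(posterior_means)

# A coin we're almost certain is fair: something like Beta(500, 500).
# Whether Matt shows up: something like Beta(1, 1) -- same 50% mean,
# but backed by far less (implicit) data.
coin_spread = estimate_spread(500, 500)
matt_spread = estimate_spread(1, 1)
print(coin_spread < matt_spread)  # True: the coin estimate barely moves
```

Both estimates start at 50%, but the coin's posterior means stay clustered tightly around 0.5 no matter what the simulated evidence says, while the Matt estimates scatter widely; the spread, not the point estimate, is what differs between the cases.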
Here are some other considerations that would be useful to relate more formally to considerations of estimate stability:
  • If we’re estimating some variable, having a narrow probability distribution (before the future evidence with respect to which we’re assessing stability) corresponds to having a lot of data. New data, in that case, would contribute less toward changing the mean and reducing the variance.

  • There are differences in model uncertainty between the three cases. I know what model to use when predicting a coin flip. My method of predicting whether Matt will show up at a party is shakier, but I have some idea of what I’m doing. With the Strong AI case, I don’t really have any good idea of what I’m doing. Presumably model uncertainty is related to estimate stability, because the more model uncertainty we have, the more we can change our estimate by reducing our model uncertainty.

  • Another difference between the three cases is the degree to which our actions allow us to improve our estimates, increasing their stability. For example, we can reduce the uncertainty and increase the stability of our estimate about Matt by calling him, but we don’t really have any good ways to get better estimates of Strong AI timelines (other than by waiting).

  • Value of information (VoI) affects how we should deal with delay. Estimates that are unstable in the face of evidence we expect to get in the future seem to imply higher VoI, which creates a reason to accept delays in our actions. Likewise, if we can easily gather information that will make our estimates more accurate and stable, we have more reason to pay the cost of gathering it. If we expect to forget information, or expect our future selves not to take information into account, dynamic inconsistency becomes important; this is another reason why estimates might be unstable. One possible strategy here is to precommit to have our estimates regress to the mean.

Thanks for any thoughts!