Estimate Stability

I’ve been trying to get clear on something you might call “estimate stability.” Steven Kaas recently posted my question to StackExchange, but we might as well post it here as well:

I’m trying to reason about something I call “estimate stability,” and I’m hoping you can tell me whether there’s some relevant technical language...

What do I mean by “estimate stability”? Consider these three different propositions:
  1. We’re 50% sure that a coin (known to be fair) will land on heads.

  2. We’re 50% sure that Matt will show up at the party.

  3. We’re 50% sure that Strong AI will be invented by 2080.

These estimates feel different. One reason they feel different is that the estimates have different degrees of “stability.” In case (1) we don’t expect to gain information that will change our probability estimate. But for cases (2) and (3), we may well come upon some information that causes us to adjust the estimate either up or down.

So estimate (1) is more “stable,” but I’m not sure how this should be quantified. Should I think of it in terms of running a Monte Carlo simulation of what future evidence might be, and looking at something like the variance of the distribution of the resulting estimates? What happens when it’s a whole probability distribution, e.g. for the time Strong AI is invented? (Do you calculate the stability of the probability density for every year, then average the result?)
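
One crude way to operationalize this, as a sketch: treat the current belief as a prior, simulate the evidence we might see, recompute the estimate for each simulated batch, and use the spread of those updated estimates as the instability measure. The Beta-Binomial model, the priors, and the number of future observations below are all stand-ins I’m making up for illustration, and I’m pretending the future evidence arrives as repeated yes/no signals about an underlying propensity, which is obviously a simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_instability(alpha, beta, n_future_obs, n_sims=10_000):
    """Crude instability measure: the standard deviation of the
    posterior-mean estimates we could end up with after seeing
    n_future_obs simulated Bernoulli observations.

    The current belief is a Beta(alpha, beta) prior over the unknown
    probability; the current point estimate is its mean.
    """
    current_estimate = alpha / (alpha + beta)
    updated = np.empty(n_sims)
    for i in range(n_sims):
        p = rng.beta(alpha, beta)          # a possible "true" probability
        k = rng.binomial(n_future_obs, p)  # evidence we might then observe
        updated[i] = (alpha + k) / (alpha + beta + n_future_obs)  # conjugate update
    return current_estimate, np.std(updated)

# Case (1): a coin known to be fair -- a very concentrated prior at 0.5.
print(estimate_instability(alpha=500, beta=500, n_future_obs=10))  # ~ (0.5, 0.002)
# Case (2): "Matt shows up" -- the same 50% estimate, but a diffuse prior.
print(estimate_instability(alpha=1, beta=1, n_future_obs=10))      # ~ (0.5, 0.26)
```

Both estimates start at 50%, but the diffuse prior’s estimate is far more likely to move, which matches the intuition that (2) is less stable than (1). For a whole distribution (e.g. over the year Strong AI is invented) rather than a single probability, the same recipe seems to apply with a different summary: recompute the whole posterior under each simulated batch of evidence and report something like the expected KL divergence between the current and updated distributions, rather than averaging a per-year stability.
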
Here are some other considerations that it would be useful to relate more formally to estimate stability:
  • If we’re estimating some variable, having a narrow probability distribution (prior to the future evidence with respect to which we’re trying to assess stability) corresponds to having a lot of data. New data, in that case, would make less of a contribution in terms of changing the mean and reducing the variance. (A small numeric sketch after this list illustrates this.)

  • There are differences in model uncertainty between the three cases. I know what model to use when predicting a coin flip. My method of predicting whether Matt will show up at a party is shakier, but I have some idea of what I’m doing. With the Strong AI case, I don’t really have any good idea of what I’m doing. Presumably model uncertainty is related to estimate stability, because the more model uncertainty we have, the more we can change our estimate by reducing our model uncertainty.

  • Another difference between the three cases is the degree to which our actions allow us to improve our estimates, increasing their stability. For example, we can reduce the uncertainty and increase the stability of our estimate about Matt by calling him, but we don’t really have any good ways to get better estimates of Strong AI timelines (other than by waiting).

  • Value of information affects how we should deal with delay. Estimates that are unstable in the face of evidence we expect to get in the future seem to imply higher VoI, which creates a reason to accept delays in our actions. Or, if we can easily gather information that will make our estimates more accurate and stable, that means we have more reason to pay the cost of gathering that information (a toy calculation below makes this concrete). If we expect to forget information, or expect our future selves not to take information into account, dynamic inconsistency becomes important; this is another reason why estimates might be unstable. One possible strategy here is to precommit to have our estimates regress to the mean.
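
To make the first point above about narrow priors concrete, here is the same kind of Beta-Binomial toy model used analytically rather than by simulation; the specific priors and evidence counts are invented for illustration. Two beliefs with the same 50% mean but different amounts of encoded pseudo-data respond very differently to identical new evidence.

```python
def beta_update(alpha, beta, successes, failures):
    """Conjugate update of a Beta(alpha, beta) belief about a probability.

    Returns the posterior mean and variance.
    """
    a, b = alpha + successes, beta + failures
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

new_evidence = (4, 1)  # say, four "he'll come" signals and one "he won't"

# Diffuse prior, Beta(1, 1): mean 0.5, essentially no prior data.
# The mean jumps to ~0.71 and the variance drops from ~0.083 to ~0.026.
print(beta_update(1, 1, *new_evidence))

# Concentrated prior, Beta(100, 100): mean 0.5, ~200 prior pseudo-observations.
# The same evidence barely moves the mean (~0.507) or the variance.
print(beta_update(100, 100, *new_evidence))
```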

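To make the value-of-information point concrete, here is a toy expected-value-of-perfect-information calculation for the Matt case; the payoff numbers are entirely made up.

```python
import numpy as np

# Made-up payoff matrix: rows are our actions (prepare for Matt / don't),
# columns are outcomes (he shows up / he doesn't).
payoffs = np.array([[10.0, -5.0],
                    [-10.0, 0.0]])
p_shows_up = 0.5
probs = np.array([p_shows_up, 1 - p_shows_up])

# Acting now on the 50% estimate: take the action with the best expected payoff.
eu_act_now = (payoffs @ probs).max()                # 2.5

# Calling Matt first (perfect information): learn the outcome,
# then take the best action for that outcome.
eu_with_info = (payoffs.max(axis=0) * probs).sum()  # 5.0

print("value of (perfect) information:", eu_with_info - eu_act_now)  # 2.5
```

On this toy model, calling Matt is worth up to 2.5 payoff units of delay or effort. For the fair coin, by contrast, we don’t expect any obtainable evidence to move the 50% estimate, so the corresponding value of information is zero: the more stable the estimate, the less there is for information gathering (or delay) to buy.
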
Thanks for any thoughts!