Here are three statements I believe with a probability of about 1/9:

- The two 6-sided dice on my desk, when rolled, will add up to 5 (see the quick check after this list).
- An AI system will kill at least 10% of humanity before the year 2100.
- Starvation was a big concern in ancient Rome’s prime (claim borrowed from Elizabeth’s Epistemic Spot Check post).
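The dice figure really is exact: of the 36 equally likely outcomes of two fair dice, exactly four sum to 5, and 4/36 = 1/9. A quick check:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely outcomes of two fair 6-sided dice.
outcomes = list(product(range(1, 7), repeat=2))
hits = [o for o in outcomes if sum(o) == 5]  # (1,4), (2,3), (3,2), (4,1)

print(Fraction(len(hits), len(outcomes)))  # 1/9
```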
Except I have some feeling that the “true probability” of the dice question is pretty much bang on exactly 1⁄9, while the “true probability” of the Rome and AI x-risk questions could be quite far from 1⁄9, and saying the probability is precisely 1⁄9 seems… overconfident?
From a straightforward Bayesian point of view, there is no true probability. It’s just my subjective degree of belief! I’d be willing to make a bet at 8⁄1 odds on any of these, but not at worse odds, and that’s all there really is to say on the matter. It’s the number I multiply by the utilities of the outcomes to make decisions.
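To spell out why 8⁄1 is exactly the break-even point: a bet at those odds wins 8 units with probability 1⁄9 and loses the 1-unit stake otherwise, for an expected value of zero (the unit stake here is just for illustration):

```python
from fractions import Fraction

p = Fraction(1, 9)   # my degree of belief
win, stake = 8, 1    # 8/1 odds: win 8 units, or lose the 1-unit stake

print(p * win - (1 - p) * stake)  # 0, so any worse odds are a losing bet
```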
One thing you could do is imagine a set of hypotheses I hold that involve randomness, together with a probability distribution over which of them is true. Mapping each hypothesis to the probability it assigns to the outcome then turns my distribution over hypotheses into a probability distribution over probabilities. This is sharply peaked around 1⁄9 for the dice roll and spread widely around 1⁄9 for AI x-risk, as expected, so I can report 50% confidence intervals just fine. Except sensible hypotheses about historical facts probably wouldn’t be random: either starvation was important or it wasn’t, and that’s just a true thing that happens to exist in my past, maybe.
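Here is a minimal sketch of that construction. The helper `interval_50` and every hypothesis and weight below are made-up illustrations, not anyone's actual numbers; the point is only that the same machinery yields a sharp interval for the dice and a wide one for AI x-risk:

```python
import numpy as np

def interval_50(probs, weights):
    """Central 50% interval of a discrete distribution over probabilities."""
    order = np.argsort(probs)
    p, w = np.asarray(probs)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return p[np.searchsorted(cdf, 0.25)], p[np.searchsorted(cdf, 0.75)]

# Dice: every hypothesis I take seriously (fair dice, tiny imbalances)
# assigns almost exactly 4/36 to "the sum is 5".
dice = interval_50([4/36, 0.110, 0.112], [0.96, 0.02, 0.02])

# AI x-risk: rival worldviews assign wildly different probabilities.
risk = interval_50([0.001, 0.02, 0.10, 0.30, 0.60],
                   [0.2, 0.35, 0.25, 0.1, 0.1])

print(dice)  # sharp: both endpoints are 4/36, about 0.111
print(risk)  # wide: (0.02, 0.10)
```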
I like jacobjacob’s interpretation of a probability distribution over probabilities as an estimate of what your subjective degree of belief would be if you thought about the problem for longer (e.g. 10 hours). The specific time horizon seems a bit artificial (extreme case: I’m going to chat with an expert historian in 10 hours and 1 minute), but it does work and gives me the kind of results that make sense. The advantage is that you can quite straightforwardly test your calibration (there really is a ground truth): write down your 50% confidence interval, then actually do the 10 hours of research, and see how often the degree of belief you end up with lies inside the interval.
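In code, the calibration test is just bookkeeping. Everything below (the questions and all the numbers) is invented for illustration; the real version would be your own forecasting log:

```python
# For each question: the 50% interval written down in advance, and the
# degree of belief held after the ~10 hours of research. Invented data.
log = [
    ((0.10, 0.13), 0.11),   # dice-style question
    ((0.02, 0.30), 0.45),   # AI-x-risk-style question
    ((0.05, 0.25), 0.20),   # Rome-style question
]

inside = sum(lo <= final <= hi for (lo, hi), final in log)
print(f"{inside}/{len(log)} landed inside; well-calibrated 50% intervals hit about half")
```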