Suppose I estimate the probability for event X at 50%. It’s possible that this is just my prior and if you give me any amount of evidence, I’ll update dramatically. Or it’s possible that this number is the result of a huge amount of investigation and very strong reasoning, such that even if you give me a bunch more evidence, I’ll barely shift the probability at all. In what way can I quantify the difference between these two things?
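One standard way to make this difference concrete (my illustration, not something proposed in the thread) is to model the belief as a distribution over probabilities rather than a point estimate, e.g. a Beta distribution. Both agents below report 50%, but the pseudo-count `a + b` encodes how much evidence that estimate rests on, and hence how much it moves:

```python
# Two agents both report Pr(X) = 0.5, but with different "resilience".
# Modeling each belief as a Beta(a, b) distribution over the underlying
# frequency: the point estimate is the mean a / (a + b), and the total
# pseudo-count a + b measures how much evidence it takes to move it.

def posterior_mean(a, b, successes, failures):
    """Mean of the Beta posterior after observing new evidence."""
    return (a + successes) / (a + b + successes + failures)

fragile   = (1, 1)      # uniform prior: mean 0.5, almost no resilience
resilient = (100, 100)  # heavily informed prior: mean 0.5, very resilient

evidence = (8, 2)  # 8 successes, 2 failures

print(posterior_mean(*fragile, *evidence))    # 0.75    -- big update
print(posterior_mean(*resilient, *evidence))  # ~0.514  -- barely moves
```

The second-order knowledge is then quantified by the spread (or pseudo-count) of the distribution, not by the point estimate itself.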
One option: specify the function you'd shift to if a randomly chosen domain expert told you that your estimate was a certain amount too high or low.
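A minimal sketch of that suggestion. The linear update rule and the `weight` parameter are my own illustrative assumptions; the point is just that the weight you'd put on the expert's correction is itself a measure of your second-order confidence:

```python
def shifted_estimate(current, expert_delta, weight):
    """Hypothetical update rule: move toward an expert's correction by a
    fixed weight in (0, 1). A low weight means a resilient estimate;
    a high weight means a fragile one. Clamped to [0, 1]."""
    return min(1.0, max(0.0, current + weight * expert_delta))

# An expert says your 50% is 20 points too low:
print(shifted_estimate(0.5, +0.2, 0.9))  # fragile: moves most of the way
print(shifted_estimate(0.5, +0.2, 0.1))  # resilient: barely moves
```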
My experience has been that in practice it almost always suffices to express second-order knowledge qualitatively rather than quantitatively. Granted, it requires some common context and social trust to be adequately calibrated on “50%, to make up a number” < “50%, just to say a number” < “let’s say 50%” < “something in the ballpark of 50%” < “plausibly 50%” < “probably 50%” < “roughly 50%” < “actually just 50%” < “precisely 50%” (to pick syntax that I’m used to using with people I work with), but you probably don’t actually have good (third-order!) calibration of your second-order knowledge, so why bother with the extra precision?
The only other thing I’ve seen work when you absolutely need to pin down levels of second-order knowledge is just talking about where your uncertainty is coming from, what the gears of your epistemic model are, or sometimes how much time of concerted effort it might take you to resolve X percentage points of uncertainty in expectation.
That makes sense to me, and it's what I'd do in practice too, but it still feels odd that there's no theoretical solution to this question.
What’s your question?
I have some answers (for some guesses about what your question is, based on your comments) below.
This sounds like Bayes' Theorem, but as for the actual question of how you generate numbers given a hypothesis... I don't know. There's material around here on proper scoring rules that I could dig up. Personally, I just make up numbers to give myself an idea.
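One well-known proper scoring rule is the Brier score (my example; not necessarily the one meant above). It rewards calibrated probability estimates, which is one way to audit the numbers you make up:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better. Because it's a proper scoring rule, reporting your
    true credence minimizes your expected score."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three forecasts against what actually happened:
print(brier_score([0.9, 0.8, 0.3], [1, 1, 0]))  # about 0.047 (quite good)
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25 (maximally uninformative)
```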
This sounds like Inadequate Equilibria.
I found this on higher-order probabilities. (It notes the rule that for any x, Pr(E | Pr(E) = x) = x.) Google also turned up some papers on the subject that I haven't read yet.
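One consequence of that rule, by the law of total probability, is that your overall credence collapses to the mean of your second-order distribution, which is arguably why the second-order layer is hard to cash out in bets. A toy check with a discrete second-order distribution (the particular numbers are mine):

```python
# Second-order distribution: Pr(Pr(E) = x) for a few candidate values of x.
second_order = {0.2: 0.25, 0.5: 0.5, 0.8: 0.25}

# Given the rule Pr(E | Pr(E) = x) = x, total probability gives:
#   Pr(E) = sum over x of Pr(Pr(E) = x) * x,
# i.e. the mean of the second-order distribution.
overall = sum(weight * x for x, weight in second_order.items())
print(overall)  # the mean, 0.5, regardless of how spread out the mixture is
```

A tight distribution concentrated at 0.5 and this spread-out one yield the same first-order credence; they differ only in how much new evidence would move them.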