I ran into the same problem a while back and became frustrated that there wasn’t an elegant answer. It should at least be possible to unambiguously spot under- and over-confidence, but even this is not clear.
I guess we need to define exactly what we’re trying to measure and then treat it as an estimation problem, where each response is a data point stochastically generated from some hidden “calibration” parameter. But this is rife with mind-projection-fallacy pitfalls, because the respondents’ reasoning processes and probability assignments have to be treated as objective parts of the world (which they are, but it’s hard to keep it all from becoming confused).
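To make the estimation framing concrete, here is a minimal sketch under one particular (hypothetical) model of my own choosing: assume each respondent has a single hidden calibration parameter c that scales their stated log-odds, so the true success probability is sigmoid(c · logit(p)) when they say p. Then c < 1 means overconfident, c > 1 underconfident, and c can be estimated by maximum likelihood. None of this model structure is from the discussion above; it is just one way to cash out "hidden calibration parameter."

```python
import math
import random

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def log_likelihood(c, responses):
    """Log-likelihood of calibration parameter c.

    responses: list of (stated_probability, outcome) pairs,
    under the assumed model true_p = sigmoid(c * logit(stated_p)).
    """
    ll = 0.0
    for p, y in responses:
        q = sigmoid(c * logit(p))
        ll += math.log(q if y else 1 - q)
    return ll

def estimate_calibration(responses):
    # Simple grid-search MLE over c in [0.10, 3.00].
    grid = [i / 100 for i in range(10, 301)]
    return max(grid, key=lambda c: log_likelihood(c, responses))

# Simulate an overconfident respondent: their true accuracy is
# pulled toward 0.5 relative to their stated probabilities (c_true = 0.5).
random.seed(0)
responses = []
for _ in range(2000):
    p = random.uniform(0.55, 0.95)    # stated probability
    true_p = sigmoid(0.5 * logit(p))  # actual chance of being right
    responses.append((p, random.random() < true_p))

c_hat = estimate_calibration(responses)
print(c_hat)  # c_hat < 1 here flags overconfidence under this model
```

With enough responses the estimate concentrates near the true c, which is the sense in which under- and over-confidence become identifiable; the hard part the comment points at is whether a single scaling parameter like this is a faithful description of a real respondent at all.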