It may not be a very good test in many cases. Perhaps modifying it to gauge confidence would be better?
One can imagine there are near-infinite sets of things that might be true about whatever the secret knowledge concerns. Only a subset actually is true. The most probable prior may be wildly divergent from what is actually true. If you judge purely on how well they conform to your secret set, that doesn't tell you how good they are at forecasting in general, just that they happen to be wrong on that set.
If you gauge confidence instead, that might be better. If they are very confident about something you know to be wrong, it is unlikely that their prior probability lined up with reality. If they are only moderately confident, or believe it best explains the evidence they have but are fully aware that evidence may be incomplete or may not cover evidence they lack, then it seems unreasonable to hold a strong view of them based on that.
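The asymmetry can be made concrete with a proper scoring rule such as the logarithmic score. This is a minimal sketch, not anything from the original comment; the function name and the example probabilities are illustrative:

```python
import math

def log_score(p_assigned, outcome):
    """Logarithmic score: the log of the probability the forecaster
    assigned to what actually happened (higher is better, max 0)."""
    p = p_assigned if outcome else 1.0 - p_assigned
    return math.log(p)

# A proposition from the "secret set" that is actually false (outcome=False).
# Compare a very confident forecaster with a moderately confident one:
confident = log_score(0.99, False)  # assigned 99% to a falsehood
moderate = log_score(0.60, False)   # assigned 60% to the same falsehood

# The confident forecaster takes a far larger penalty (~-4.6 vs ~-0.9),
# so a single wrong-but-hedged claim says little about overall skill.
print(confident, moderate)
```

Under a rule like this, being confidently wrong on your secret set is strong evidence of a miscalibrated prior, while being hedged and wrong barely moves the needle.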