Can someone who’s done calibration training comment on whether it really seems to represent the ability to “judge how much evidence you have on a given issue”, as opposed to the ability to accurately translate brain-based probability estimates into numerical probability estimates?
As I interpret it, the two are distinct but calibration training does both. That is, there’s both a “subjective feeling of certainty”->”probability number” model that’s being trained, and that model probably ought to be trained for every field independently (that is, determining how much subjective feeling of certainty you should have in different cases). There appears to be some transfer but I don’t think it’s as much as Yvain seems to be postulating.
Have you done calibration training? Do you recommend it? I think I remember someone from CFAR saying that it was kind of a niche skill, and to my knowledge it hasn’t been incorporated into their curriculum (although Andrew Critch created a calibration Android app?)
I’ve done a moderate amount of training. I think that the credence game is fun enough to put an hour or two into, but I think the claim that it’s worth putting serious effort into rests on the claim that it transfers (or that probabilistic judgments are common enough in your professional field that it makes sense to train those sorts of judgments).
I tried the game for a while… many of the questions are pretty hard IMO (especially the “which of these top-10 ranked things was ranked higher” ones), which makes it a bit difficult to learn to differentiate easy & hard questions.
I think that’s necessary? You want to have questions you can answer at every credence band between 99% and 50%, and that implies including questions you’re only 60% sure of the answer to.
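To make the band idea concrete, here’s a minimal sketch of how you could score calibration from a session of such a game: bucket your answers by stated credence and compare each bucket’s stated confidence against its observed accuracy. The function and data here are hypothetical, just for illustration.

```python
from collections import defaultdict

def calibration_report(answers):
    """Group (credence, was_correct) pairs into 10%-wide bands
    and report how many answers fell in each band and the
    observed accuracy there. Well-calibrated play means the
    accuracy in each band roughly matches the stated credence."""
    bands = defaultdict(list)
    for credence, correct in answers:
        # bucket into 50-59%, 60-69%, ..., with 90-99% as the top band
        band = min(int(credence * 10) * 10, 90)
        bands[band].append(correct)
    return {
        band: (len(results), sum(results) / len(results))
        for band, results in sorted(bands.items())
    }

# hypothetical session data: (stated credence, answered correctly?)
answers = [(0.6, True), (0.6, False), (0.9, True), (0.9, True), (0.9, False)]
print(calibration_report(answers))
# here the 60% band is exactly calibrated (50% observed vs 60% stated is
# close), while the 90% band at ~67% observed accuracy shows overconfidence
```

With only hard questions you’d never populate the high-credence bands, which is why a mix of difficulties matters.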