Take teaching the Bayes equation as if that’s what you’d actually use. Sure, some general ideas (consider the prior, always update on your observations) are useful, but the equation itself? No one at CFAR walks into a supermarket and continuously plugs actual numbers into Bayes’ equation in their head.
Agreed! I teach the Bayes class at workshops, and it’s not a math drill class. It’s about building the habit of paying attention to the components of Bayes’ theorem in everyday life. For example, people usually just ask “Would I be likely to see Y if X were true?” and skip the question “Would I be likely to see Y if X were not true?” So we practice ways to trigger that second thought, so you don’t get tricked by base rates or other pitfalls.
Concrete example: Someone you’re interviewing for a job flubs one question, and your first thought is that you shouldn’t hire them, because people who aren’t qualified flub questions. But pause and ask how often you’d expect a qualified person to miss one question in an hour-long interview. Your answer will vary with the kinds of questions you’re asking, but you may be treating the evidence as a stronger signal than it is.
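To make that update concrete, here’s a quick Bayes calculation. All the numbers are made up for illustration: suppose 60% of qualified candidates flub at least one question in an hour-long interview, 90% of unqualified candidates do, and your prior that the candidate is qualified is 50%.

```python
# Hypothetical numbers, chosen only to illustrate the update.
p_qualified = 0.5          # prior: candidate is qualified
p_flub_given_q = 0.6       # qualified people often miss a question too
p_flub_given_not_q = 0.9   # unqualified people miss questions more often

# Bayes' theorem: P(qualified | flubbed one question)
p_flub = (p_flub_given_q * p_qualified
          + p_flub_given_not_q * (1 - p_qualified))
posterior = p_flub_given_q * p_qualified / p_flub

print(round(posterior, 2))  # 0.4
```

With these numbers the flub moves you from 50% to 40% qualified, a much weaker signal than “don’t hire” suggests, precisely because qualified people flub questions often too.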
Or maybe you treated it as a weaker signal than it is. This is a strawman anyway: people who have never in their lives heard of Bayes do compare a candidate’s performance to their hypothetical idea of how a competent person would do, and soon thereafter to their actual knowledge of how competent people do, remedying all sorts of miscalibrations.
If anything, in practice the actual problem with interviews is generally that incompetents get through, because incompetents are being interviewed so much more than anyone else. The diligent ability to never flunk anything (conscientiousness) is, at least, something very useful in the workplace that you can’t fake by preparing specifically for interviews.
Then there’s also the enormous utility disparity between the minor disutility of perhaps running the interviews a little longer and ending up hiring the best candidate when no one passes, and the major disutility of hiring an incompetent.
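A toy expected-cost comparison of that asymmetry, with every number invented purely for illustration: if a bad hire costs far more than a few extra interview rounds, a stricter bar can win even though it rejects some good candidates.

```python
# All costs and probabilities below are made up to illustrate the asymmetry.
cost_extra_interviews = 5_000   # minor disutility: interviewing a bit longer
cost_bad_hire = 100_000         # major disutility: hiring an incompetent

def expected_cost(p_bad_hire, p_extra_rounds):
    """Expected cost of a hiring policy under the toy numbers above."""
    return p_bad_hire * cost_bad_hire + p_extra_rounds * cost_extra_interviews

# Lenient bar: forgives flubs, so more bad hires slip through.
lenient = expected_cost(p_bad_hire=0.20, p_extra_rounds=0.10)
# Strict bar: rejects on a flub, so more rounds but fewer bad hires.
strict = expected_cost(p_bad_hire=0.05, p_extra_rounds=0.50)

print(lenient, strict)  # 20500.0 7500.0
```

Under these (hypothetical) numbers the strict policy costs far less in expectation, even though each individual rejection over “maybe one question” can look irrational.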
It’s not really good advice, but I can see how it’s likeable—there are people who didn’t get hired because they flunked “maybe one question”, and these folks get a fix of endorphins when they rationalize it as the HR person being irrational.
A good test will include a few questions almost no one can answer. That avoids the problem of having more than one 10/10 score.
More generally, a very good test will result in a uniform distribution of scores (rather than a bell curve), maximizing the information content of the score.
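A quick sketch of the information-content claim, using Shannon entropy in bits; the two score distributions are invented for illustration. A uniform spread over possible scores carries more information per score than a bell-curve-like one, where most scores bunch in the middle.

```python
import math

def entropy_bits(dist):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Eleven possible scores, 0..10.
uniform = [1 / 11] * 11
# A crude bell-curve-like distribution peaked at middling scores.
peaked = [0.01, 0.02, 0.05, 0.10, 0.17, 0.30,
          0.17, 0.10, 0.05, 0.02, 0.01]

print(round(entropy_bits(uniform), 2))  # 3.46 bits, i.e. log2(11): the maximum
print(round(entropy_bits(peaked), 2))   # lower: the score is more predictable
```

The uniform distribution attains the maximum possible entropy for eleven outcomes, which is the sense in which a test that spreads scores evenly extracts the most information per score.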