if their estimates became more accurate, we might see more smokers, not fewer
...insofar as their current and future estimates of health costs are well calibrated with their actual smoking behavior, at least. Sure.
It’s possible that hiring departments are actually less biased against people with obviously black names than they should be.
Well, it’s odd to use “bias” to describe using observations as evidence in ways that reliably improve prediction accuracy. But leaving the language aside, yes, I agree that it’s possible that hiring departments are not weighting names as heavily as they should for maximum accuracy in isolation… in other words, that names are more reliable evidence than they are given credit for being.
That said, if I’m right that there is a significant overlap between the actual information provided by grades and by names, then evaluating each source of information in isolation without considering the overlap is nevertheless a significant error.
Now, it might be that the evidential weight of names is so great that the error from not granting it enough weight overshadows the error from double-counting, and it may be that the signs are such that double-counting actually yields more accurate results than not double-counting. Here again, I agree that this is possible.
But even if that’s true, continuing to erroneously double-count in the hopes that our errors keep cancelling each other out isn’t as reliable a long-term strategy as starting to correctly use all the evidence we have.
That said, if I’m right that there is a significant overlap between the actual information provided by grades and by names, then evaluating each source of information in isolation without considering the overlap is nevertheless a significant error.
Agreed. Any sort of decision process which uses multiple pieces of information should be calibrated on all of those pieces of information together whenever possible.
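To make the double-counting failure concrete, here is a toy simulation (the scenario, numbers, and variable names are invented for illustration, not drawn from the discussion): a trait and two observable signals of it, where the second signal overlaps completely with the first. Combining them as if they were independent inflates the posterior well past what a jointly calibrated estimate gives.

```python
import random

random.seed(0)
N = 100_000
ACC = 0.8  # each signal matches the underlying trait 80% of the time

# Simulate: trait q, signal a (a noisy copy of q), and signal b,
# which is an exact copy of a -- total overlap, zero new information.
both_pos = trait_given_both = 0
for _ in range(N):
    q = random.random() < 0.5
    a = q if random.random() < ACC else not q
    b = a  # fully redundant second "piece of evidence"
    if a and b:
        both_pos += 1
        trait_given_both += q

# Naive combination treats a and b as independent evidence:
# posterior odds = prior odds * LR_a * LR_b = 1 * 4 * 4 = 16
lr = ACC / (1 - ACC)
naive_posterior = (lr * lr) / (1 + lr * lr)   # 16/17, ~0.941

# Jointly calibrated estimate: empirical P(trait | both signals positive),
# which should sit near 0.8 because b adds nothing beyond a.
true_posterior = trait_given_both / both_pos

print(f"naive (double-counted): {naive_posterior:.3f}")
print(f"jointly calibrated:     {true_posterior:.3f}")
```

The gap between the two numbers is exactly the error the exchange above describes: each signal is individually well calibrated, but counting overlapping signals as independent yields overconfident predictions.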