So, this “only question” formulation is a little awkward and I’m not really sure what it means.
ChristianKI brought up the proposition “(name) > (grades)”, where “>” means “yields higher prediction accuracy”, but the truth or falsity of that proposition is irrelevant to whether it’s epistemically legitimate to include name in a decision. What determines that is “(name + grades) > (grades)”: whether adding name to grades predicts better than grades alone.
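To make that distinction concrete, here’s a minimal sketch in Python, with entirely synthetic data and hypothetical feature names; the point is only that a signal can lose to grades head-to-head and still add accuracy on top of grades.

```python
# Minimal sketch: all data synthetic, feature names hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000

# A latent factor drives both signals and the outcome we want to predict.
ability = rng.normal(size=n)
grades = ability + rng.normal(scale=0.5, size=n)        # cleaner signal
name_signal = ability + rng.normal(scale=1.0, size=n)   # noisier signal
outcome = (ability + rng.normal(scale=0.5, size=n) > 0).astype(int)

def cv_accuracy(X):
    """Mean 5-fold cross-validated accuracy of a logistic model on X."""
    return cross_val_score(LogisticRegression(), X, outcome, cv=5).mean()

print("grades alone: ", cv_accuracy(grades.reshape(-1, 1)))
print("name alone:   ", cv_accuracy(name_signal.reshape(-1, 1)))
print("grades + name:", cv_accuracy(np.column_stack([grades, name_signal])))
# Typically: name alone < grades alone, yet grades + name > grades alone.
# "(name) > (grades)" being false doesn't settle "(name + grades) > (grades)".
```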
I doubt that doing so is at all common when it comes to socially marked names
Doing things correctly is, in general, uncommon. But the shift implied by moving from ‘current’ to ‘correct’ is not always obvious. For example, both nonsmokers and smokers overestimate the health costs of smoking, which suggests that if their estimates became more accurate, we might see more smokers, not fewer. It’s possible that hiring departments are actually less biased against people with obviously black names than they should be.
if their estimates became more accurate, we might see more smokers, not fewer
...insofar as their current and future estimates of health costs are well calibrated with their actual smoking behavior, at least. Sure.
It’s possible that hiring departments are actually less biased against people with obviously black names than they should be.
Well, it’s odd to use “bias” to describe using observations as evidence in ways that reliably allow more accurate predictions, but leaving the language aside, yes, I agree that it’s possible that hiring departments are not weighting names as much as they should be for maximum accuracy in isolation… in other words, that names are more reliable evidence than they are given credit for being.
That said, if I’m right that there is a significant overlap between the actual information provided by grades and by names, then evaluating each source of information in isolation without considering the overlap is nevertheless a significant error.
Now, it might be that the evidential weight of names is so great that the error due to not granting them enough weight overshadows the error due to double-counting, and it may be that the signs are such that double-counting leads to more accurate results than not double-counting. Here again, I agree that this is possible.
But even if that’s true, continuing to erroneously double-count in the hopes that our errors keep cancelling each other out isn’t as reliable a long-term strategy as starting to correctly use all the evidence we have.
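For a toy illustration of the double-counting error, with made-up numbers: if two observations overlap completely, updating on each as though they were independent overstates the posterior.

```python
# Toy sketch, all numbers hypothetical. Suppose observations A and B each
# carry likelihood ratio 3:1 for hypothesis H, but B is fully redundant
# given A (total overlap).
prior_odds = 1.0          # 1:1 prior odds for H

lr_A = 3.0                # P(A|H) / P(A|not H)
lr_B_given_A = 1.0        # given A, B adds no further information (assumption)
lr_B_naive = 3.0          # treating B as if it were independent of A

correct_odds = prior_odds * lr_A * lr_B_given_A   # 3:1
naive_odds = prior_odds * lr_A * lr_B_naive       # 9:1, overconfident

def odds_to_prob(o):
    return o / (1 + o)

print(f"correct posterior:       {odds_to_prob(correct_odds):.2f}")  # 0.75
print(f"double-counted posterior: {odds_to_prob(naive_odds):.2f}")   # 0.90
```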
That said, if I’m right that there is a significant overlap between the actual information provided by grades and by names, then evaluating each source of information in isolation without considering the overlap is nevertheless a significant error.
Agreed. Any sort of decision process which uses multiple pieces of information should be calibrated on all of those pieces of information together whenever possible.
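A sketch of what joint calibration buys you, again on synthetic data: combining two separately calibrated predictors as if they were independent double-counts their shared information, while a single model fit on both features together learns the overlap.

```python
# Sketch on synthetic data (all names and numbers hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20000
ability = rng.normal(size=n)
grades = ability + rng.normal(scale=0.5, size=n)
name_signal = ability + rng.normal(scale=0.5, size=n)   # overlaps with grades
y = (ability + rng.normal(scale=0.5, size=n) > 0).astype(int)  # ~balanced

# Naive: calibrate each feature separately, then multiply odds as if the
# two signals were independent (with balanced classes, prior odds ~ 1).
m_grades = LogisticRegression().fit(grades.reshape(-1, 1), y)
p1 = m_grades.predict_proba(grades.reshape(-1, 1))[:, 1]
m_name = LogisticRegression().fit(name_signal.reshape(-1, 1), y)
p2 = m_name.predict_proba(name_signal.reshape(-1, 1))[:, 1]
odds_naive = (p1 / (1 - p1)) * (p2 / (1 - p2))  # double-counts the overlap
p_naive = odds_naive / (1 + odds_naive)

# Joint: one model sees both features and learns how much they overlap.
X = np.column_stack([grades, name_signal])
p_joint = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

# Calibration check: among cases assigned ~90% probability, what fraction
# actually turned out positive? The naive combination runs overconfident.
for label, p in [("naive", p_naive), ("joint", p_joint)]:
    mask = (p > 0.85) & (p < 0.95)
    print(label, round(y[mask].mean(), 3))
```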
It’s even possible that if the costs of smoking are overestimated, more people should be smoking. Part of the campaign against smoking is to encourage people to underestimate the pleasures and social benefits of smoking.
For example, both nonsmokers and smokers overestimate the health costs of smoking, which suggests that if their estimates became more accurate, we might see more smokers, not fewer.
That in no way implies that it would be a good choice for people to smoke more. People don’t make those decisions through rational analysis.