Consider a test followed by a re-test (which we are trying to predict). To calculate the expected score on the re-test you need to apply regression to the mean. For a population where you measured a lower mean or (in the high range) a smaller variance, you'll have to regress more.
Of course, that mathematical fact doesn't make such an adjustment non-racist or morally right. You could add a couple of simple extra questions to the test to obtain a similar improvement in accuracy. Or you could use some other side data instead: weight, height, and blood type, for example. There's a lot of other data you can use besides race; if race is used but nothing else is, that's because of a tradition of racism, not because of some awesome rationality. It's fairly amusing to see how race realists justify racism with increased accuracy, but start complaining when you adjust your evaluation of them in much the same manner, using racism/non-racism as evidence...
edit: An important correction: the test-to-test variance may also differ between the groups. E.g., if we have some robots that always test the same, then even if they have a low mean, they'll show smaller regression to the mean than humans.
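To make the adjustment concrete, here is a minimal sketch under the classical true-score model (observed score = true ability + test-to-test noise); the function name and all the numbers are my own illustration, not anything from the thread. The expected re-test score shrinks the observed score toward the group mean by the test's reliability, and a group with zero test-to-test noise (the robots above) shows no regression at all.

```python
# Classical true-score model: observed = true + noise, with
# true ~ N(group_mean, var_true) and noise ~ N(0, var_noise).
# The best prediction of a re-test shrinks the observed score toward
# the group mean by the reliability var_true / (var_true + var_noise).

def expected_retest(score, group_mean, var_true, var_noise):
    """Expected re-test score given one observed test score."""
    reliability = var_true / (var_true + var_noise)
    return group_mean + reliability * (score - group_mean)

# The same observed score of 130 regresses differently by group:
print(expected_retest(130, group_mean=100, var_true=180, var_noise=45))  # 124.0
print(expected_retest(130, group_mean=85, var_true=180, var_noise=45))   # 121.0
# "Robots" with no test-to-test noise do not regress at all:
print(expected_retest(130, group_mean=85, var_true=180, var_noise=0))    # 130.0
```

Note that both the group mean and the noise variance enter the prediction, which is the point of the correction above: a low-mean group with very reliable scores can regress less than a higher-mean group with noisy scores.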
Consider a test followed by a re-test (which we are trying to predict). To calculate the expected score on the re-test you need to apply regression to the mean.
This is only true if you assume there is some component of luck or guesswork in the score. I admit that this may be a good model for the kinds of tests you get in American high schools. However, it is not clear to me that “black people” is the correct population to use for the regression, because by construction you have an atypical member. Why not “high-scoring people” or “all students”?
Perhaps it would be helpful to construct an example using something other than race as the difference between populations, to avoid emotional entanglements?
If there is no component of luck or guesswork, or anything else that varies from test to test, then the re-test will be exactly the same as the original test. But that's not what we see in pretty much any test, or any measurement of anything.
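The point above can be checked with a small simulation (the model and all parameters here are my own illustration): give each person a fixed true ability, add independent noise to each sitting, and the high scorers come out lower on average at the re-test; set the noise to zero and the re-test reproduces the test exactly.

```python
import random

random.seed(0)

def simulate(n, mu, sd_true, sd_noise):
    """Simulate (test, retest) pairs sharing one true ability per person."""
    pairs = []
    for _ in range(n):
        true = random.gauss(mu, sd_true)
        pairs.append((true + random.gauss(0, sd_noise),
                      true + random.gauss(0, sd_noise)))
    return pairs

# With test-to-test noise, people who scored above 130 average lower
# on the re-test than they did on the test:
pairs = simulate(100_000, mu=100, sd_true=15, sd_noise=7)
high = [(t, r) for t, r in pairs if t > 130]
print(sum(t for t, r in high) / len(high)
      > sum(r for t, r in high) / len(high))  # True

# With no test-to-test noise, the re-test is exactly the original test:
pairs = simulate(1_000, mu=100, sd_true=15, sd_noise=0)
print(all(t == r for t, r in pairs))  # True
```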
Try neuroskeptic.