The fair approach is to have an entrance exam for the better math classes, blind to race.
Is it more important to be fair or accurate?
There are times when it's more important to be fair. For example, punishing a person because he's guilty discourages crime; punishing someone because he's black does not. So even if using the fact that he's black as evidence means more guilty people go to jail and fewer innocents do, more people will still commit crime, because punishment has become less contingent on the choice to offend.
Well, what do you think about losing points because your profile photo has atypical proportions, or atypical posture? Points adjustment for round face, or for relative finger lengths? For having too many or too few facebook friends, likes, and so on? Weight, height, and blood type?
I don’t think that really applies here, though.
Well, if you want to encourage education rather than encourage being white or having typical posture or other things like that, it does apply.
If you’re giving prizes to the best students to encourage them, then it applies. If you’re trying to match the teaching style to the student, I don’t think it does.
One might say that the sanity waterline one has to cross to handle test-score-based Bayesian predictions in a by-and-large rational way is much lower than the waterline for handling relative-finger-length-based predictions, which is itself lower than the waterline for skin-color-based predictions.
What sanity? Everyone is pushing for measures that would be advantageous for themselves and opposing disadvantageous ones, and there's nothing particularly insane about that; it's just instinctive selfishness. The white 'nerds', for instance, could be OK with an adjustment for race, but very much not OK with adjustments for various odd-looking metrics (which lump them together with the autistic). It's only "Bayesian" when it's someone else; when it's you losing points, that's you being lumped together with other people on the basis of some random trait that happens to be widely measured, which is of course bad and irrational and a bias (complete with examples of how it is inexact). Nothing insane about that either; it's just selfishness.
Meanwhile, I'd dare to guess you can get a considerably larger boost in accuracy from adding a couple more questions to a test, or from using data from some other standardized test.