the main point of these ideas is to be able to demonstrate that a certain algorithm—which may be just a complicated messy black box—is not biased
If you’re looking to satisfy a legal criterion you need to talk to a lawyer who’ll tell you how that works. Notably, the way the law works doesn’t have to look reasonable or commonsensical. For example, the EEOC likes to observe outcomes and cares little about the process that leads to what it considers biased outcomes.
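To make the outcome-based point concrete: the EEOC’s adverse-impact screen is commonly operationalized as the “four-fifths rule” — if one group’s selection rate is less than 80% of the highest group’s rate, adverse impact is suspected, regardless of what the selection process looks like internally. A minimal sketch, with hypothetical selection counts:

```python
# Sketch of the EEOC "four-fifths" (80%) adverse-impact test.
# The applicant/selection counts below are hypothetical, for illustration only.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_ratio(rates):
    """Ratio of the lowest group's selection rate to the highest's."""
    return min(rates) / max(rates)

# Hypothetical hiring outcomes for two groups.
rates = [
    selection_rate(48, 100),  # group A: 48% selected
    selection_rate(30, 100),  # group B: 30% selected
]

ratio = four_fifths_ratio(rates)
print(f"impact ratio: {ratio:.3f}")  # 0.30 / 0.48 = 0.625
print("adverse impact suspected:", ratio < 0.8)
```

Note that the test never asks *why* the rates differ — which is exactly the point about outcomes versus process.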
Because many people treat variables like race as special … social pressure … more relevant than it is economically efficient for them to do so …
Sure, but then you are leaving the realm of science (aka epistemic rationality). You can certainly build models to cater to fads and prejudices of today, but all you’re doing is building deliberately inaccurate maps.
I am also not sure what’s the deal with “economically efficient”. No one said this is the pinnacle of all values and everything must be subservient to economic efficiency.
From the legal perspective, it’s probably quite simple.
I am pretty sure you’re mistaken about this.
the perception of fairness is probably going to be what’s important here
LOL.
I think this is a fundamentally misguided exercise and, moreover, one which you cannot win—in part because shitstorms don’t care about details of classifiers.
I feel this all is a category error. You’re trying to introduce terms from morality (‘fairness’) into statistics. That, I’m pretty sure, is a bad idea. And the word ‘bias’ already has a well-defined meaning in stats.
If you want to introduce moral judgement into your results, first construct a good map, and then adjust it according to taste. At least then you have a better chance of seeing the trade-offs you’re making.
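To spell out the terminological point: in statistics, “bias” is a property of an estimator — the difference between its expected value and the true parameter — and has nothing to do with group outcomes. A minimal sketch contrasting an unbiased estimator with a biased one (all data simulated, numbers hypothetical):

```python
# Statistical bias = E[estimator] - true parameter, estimated by Monte Carlo.
# The sample mean is unbiased for the population mean; an estimator that
# drops the largest observation is biased downward.
import random

random.seed(0)

true_mean = 10.0
# 2000 simulated samples of 50 draws each from N(10, 2^2).
samples = [[random.gauss(true_mean, 2.0) for _ in range(50)]
           for _ in range(2000)]

def sample_mean(xs):
    return sum(xs) / len(xs)

def trimmed_top(xs):
    """Mean after discarding the largest observation (a biased estimator)."""
    xs = sorted(xs)[:-1]
    return sum(xs) / len(xs)

bias_mean = sum(sample_mean(s) for s in samples) / len(samples) - true_mean
bias_trim = sum(trimmed_top(s) for s in samples) / len(samples) - true_mean

print(f"sample mean bias:       {bias_mean:+.3f}")  # close to zero: unbiased
print(f"trimmed estimator bias: {bias_trim:+.3f}")  # negative: biased downward
```

Whether either estimator produces outcomes someone considers “fair” is a separate question entirely — which is the category distinction at issue.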
Do you not feel my definition of fairness is a better one than the one proposed in the original paper?