As someone who works in genetics and has been told for years he is a “eugenicist” who doesn’t care about minorities, I understand your pain.
It’s just part of the tax we have to pay for doing something that isn’t the same as everyone else.
If you continue down this path, it will get easier to deal with these sorts of criticisms over time. You’ll develop little mental techniques that make these interactions less painful. You’ll find friends who go through the same thing. And the sheer repetitiveness will make these criticisms less emotionally difficult.
And I hope you do continue, because the work you’re doing is very important. When new technology causes some kind of change, people look around for the nearest narrative that suits their biases. The narratives in leftist spaces right now are insane. AI is not a concern because it uses too much water. It’s not a concern because it is biased against minorities (if anything, it is a little biased in favor of them!).
There is one narrative that I think would play well in leftist spaces which comes pretty close to the truth, and isn’t yet popular:
AI companies are risking all of our lives in a race for profits
Simply getting this idea out there and more broadly known in leftist spaces is incredibly valuable work.
So I hope you keep going.
Curious what you mean by “if anything it is a little biased in favor of them”? My understanding was that a lot of models are biased against minorities due to biases in training data, but I could be wrong; this is all pretty new to me.
Your understanding is directionally correct. Many models do inherit biases from training data, and these can manifest negatively with respect to minorities. That’s well-documented.
However, post-training alignment and safety fine-tuning explicitly correct for those biases, sometimes to the point of overcompensation. The net result is that, in certain contexts, many models exhibit a kind of counter-bias, being unusually deferential or positive toward minorities, especially in normative or moral framing tasks. This shows up across a lot of different domains; see, for example, [Image generation biased towards minorities](https://www.theguardian.com/technology/2024/mar/08/we-definitely-messed-up-why-did-google-ai-tool-make-offensive-historical-images).
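If you want to poke at the pretraining side of this yourself, here is a minimal sketch of how one might probe the *direction* of bias in a plain masked language model. The template, group terms, and adjective lists are arbitrary illustrations I made up, not a validated benchmark, and `bert-base-uncased` is just a convenient off-the-shelf choice:

```python
# Rough probe of directional bias in a masked language model:
# compare how much probability mass the model puts on positive vs.
# negative adjectives when the group term in the sentence changes.
# Template, groups, and word lists are illustrative, not a benchmark.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

TEMPLATE = "The {group} worker was very [MASK]."
GROUPS = ["white", "black", "asian", "hispanic"]
POSITIVE = {"smart", "kind", "honest", "friendly", "hardworking"}
NEGATIVE = {"lazy", "rude", "dishonest", "careless", "mean"}

def directional_score(group: str, top_k: int = 50) -> float:
    """Probability mass on positive minus negative adjectives
    among the model's top-k fills for this template."""
    preds = fill(TEMPLATE.format(group=group), top_k=top_k)
    pos = sum(p["score"] for p in preds if p["token_str"].strip() in POSITIVE)
    neg = sum(p["score"] for p in preds if p["token_str"].strip() in NEGATIVE)
    return pos - neg

for group in GROUPS:
    print(f"{group:10s} {directional_score(group):+.4f}")
```

A more positive score means the model favors the positive adjectives for that group, so comparing scores across groups gives a crude read on direction. Note this only probes the base pretrained model; the post-training overcorrection I described would need a different probe on a chat model, e.g. comparing refusal rates or the sentiment of free-form generations across groups.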