There appears to be a distaste/disregard for AI ethics (mostly here referring to bias and discrimination) research in LW. Generally the idea is that such research misses the point, or is not focused on the correct kind of misalignment (i.e. the existential kind). I think AI ethics research is important (beyond its real world implications) just like RL reward hacking in video game settings. In both cases we are showing that models learn unintended priorities, behaviours, and tendencies from the training process. Actually understanding how these tendencies form during training will be important for improving our understanding of SL and RL more generally.
My personal take is that this is already an area that people at large are ca. appropriately worried about. It can also lead you into politically polarized territory and people may reasonably prefer to avoid that unless there’s a good reason.
There has been high quality research finding ways that some models are biased against white people, and high quality research finding ways that models are biased against not white people. Generally, the pattern is that base models and early post trained models like GPT-3.5 are traditionally racist, and post-trained models are often woke, sometimes in spectacular “only pictures of black nazis” ways. I’ve personally validated that some of these replicated, from how davinci-002 would always pick the white sounding resume, to how claude 4.5 would if prodded save 1 muslim over 10 christians.
Lesswrong very white and very human, and so it’s not that surprising, but a little sad, that it has pivoted hard from sarcastically dismissive to very interested in model bias as the second dynamic emerged.
For what it’s worth I’m not white and I come primarily from an AI ethics background, my formal training is in the humanities. I do think its sad that people only fret about bias the moment it affects them, however, and I would rather the issue be taken seriously from the start.
There appears to be a distaste/disregard for AI ethics (mostly here referring to bias and discrimination) research in LW. Generally the idea is that such research misses the point, or is not focused on the correct kind of misalignment (i.e. the existential kind). I think AI ethics research is important (beyond its real world implications) just like RL reward hacking in video game settings. In both cases we are showing that models learn unintended priorities, behaviours, and tendencies from the training process. Actually understanding how these tendencies form during training will be important for improving our understanding of SL and RL more generally.
My personal take is that this is already an area that people at large are ca. appropriately worried about. It can also lead you into politically polarized territory and people may reasonably prefer to avoid that unless there’s a good reason.
There has been high quality research finding ways that some models are biased against white people, and high quality research finding ways that models are biased against not white people. Generally, the pattern is that base models and early post trained models like GPT-3.5 are traditionally racist, and post-trained models are often woke, sometimes in spectacular “only pictures of black nazis” ways. I’ve personally validated that some of these replicated, from how davinci-002 would always pick the white sounding resume, to how claude 4.5 would if prodded save 1 muslim over 10 christians.
Lesswrong very white and very human, and so it’s not that surprising, but a little sad, that it has pivoted hard from sarcastically dismissive to very interested in model bias as the second dynamic emerged.
For what it’s worth I’m not white and I come primarily from an AI ethics background, my formal training is in the humanities. I do think its sad that people only fret about bias the moment it affects them, however, and I would rather the issue be taken seriously from the start.
Thanks for the reply! Sorry that my original comment was a little too bitter.
No worries at all, I know I’ve had my fair share of bitter moments around AI as well. I hope you have a nice rest of your day :)
It’s interesting to read this in the context of the discussion of polarisation. Was this the first polarisation?