It’s not always fashionable around these parts to worry more about ML bias than safety, but this seems like a case where there’s essentially no safety risk, but potentially there’s bias risk.
So some check that the impact is similar across different stakeholder groups, often demographic groups, might be in order.
I’m all over the bias issues. Because I can address them from my own practical experience, I’m happy working with what I know. The AI safety issues are way outside my practical experience, and I know it.