A possible reason to treat “this guy is racist in ways that both the broader culture and I agree are bad” more harshly than “this guy works on AI capabilities” is something like Be Nice Until You Can Coordinate Meanness—it makes sense to act differently when you’re enforcing an existing norm vs. trying to create a new one, or just judging someone without engaging with norms at all.
A possible issue with that is that at least some broader-society norms about racism are actually bad and shouldn’t be enforced. A possible crux here is whether any norms against racism are just and worth enforcing, or whether the whole complex of such norms is unjust.
(For myself, I take a meta-level stance approximately like yours, but I also don’t really object to people taking stances more like eukaryote’s.)
The “greater evil” may be worse, but the “more legible evil” is easier to coordinate against.