This is a significant effect in general, but I’m not sure how much epistemic cost it creates in this situation. Moderates working with AI companies mostly interact with safety researchers, who are not generally doing bad things. There may be a weaker second-order effect: safety researchers at labs may pick up some epistemic distortion from cooperating with capabilities efforts, which can then influence the external people collaborating with them.
Fair point, that does seem like a moderating (heh) factor.