[Question] How might we make better use of AI capabilities research for alignment purposes?

When I check arXiv for new AI alignment papers, I mostly see capabilities research, presumably because most researchers are working on capabilities. I wonder whether there's alignment-relevant value to be extracted from all that capabilities research, and how we might get at it. Is anyone working on this, or does anyone have good ideas?