Speaking from my experience, my sense is indeed that people who think the problem is important and interesting, and who are resilient to social change, have been able to make the leap to doing alignment research and have been incredibly impactful. But it should be a red flag when people think it's important without the understanding/curiosity or the social resilience; they can take up a lot of resources while falling into simple error modes.
Strongly agree. Awareness of this risk is, I think, the reason for some of CFAR's actions that most often confuse people: not teaching AI risk at intro workshops, not scaling massively, etc.