I’m a generalist and open sourcerer who does a bit of everything, but perhaps nothing particularly well. I’m also the Co-Director of Kairos, an AI safety fieldbuilding org.
I was previously the AI Safety Group Support Lead at CEA and a Software Engineer in the Worldview Investigations Team at Rethink Priorities.
AI safety needs much more than AI safety researchers, which is partly why capacity building helps. Right now, a lot of capacity-building effort is being redirected toward policy researchers, operations people, generalists, middle managers, recruiters, comms people, and so on.
That said, yeah, there are people for whom it indeed won’t make sense to work on AI safety or policy. But I also think people who are very talented and capable often hold back from going into AI safety out of impostor syndrome, so messaging around talent is a bit of a balancing act. For example, in the past, Kairos has tried to frame decision emails in a way that nudges people whose comparative advantage might lie elsewhere to consider other options, while also trying to encourage those who might overupdate from a rejection.
The example of public health experience is a pretty interesting one, because I know of two doctors who came from (strong) public health backgrounds and are now working on risks from advanced AI, and both do some pretty exceptional work in the field. My take is not ‘everyone should pivot to AI safety’, but it seems to me that if you’re smart, adaptable, and impact-oriented, this is something you should seriously consider, and you shouldn’t stop at the thought that your current work seems good enough, especially if there’s room for significantly more impact by pivoting. We’re in an emergency, and while not everyone should drop what they’re doing for this emergency, a lot of people should.