[Question] How to choose a PhD with AI Safety in mind

I’m about to begin doctoral studies in multi-agent RL applied to crowd simulation, but somewhere on the horizon I see myself working on AI Safety-related topics. (I find the Value Alignment problem particularly interesting.)

Now I’m asking myself: if my PhD is in a roughly related area of AI, but not one closely compatible with AI Safety, does that make anything more difficult further down the line? Or is it still perfectly fine?