[Question] How can I help research Friendly AI?

I’ve just started my journey toward a PhD in machine learning. My first steps are in computer vision, but my long-term goal is to help in the effort to solve the alignment problem and produce Friendly AI before any unfriendly alternatives emerge.

What sort of trajectory should I aim for with my research? What sort of post-PhD jobs should I be aiming for? Who should I be making contact with as I go?