On working in industrial software companies:
What is the transfer between working on near-to-medium-term Applied AI Alignment problems (e.g., “How do I make my chatbot not coercive?” or “How do I align my recommender system?”) and long-term AI safety / reduction of AI x-risk?
On working in academic labs:
How important is it that my research advisor sees eye to eye with me on AI Safety? Is it possible to do good AI Safety research even if my advisor doesn’t care about long-term risks?