I have a few questions for the subset of readers who:
Believe technical AI alignment research is both important and hard to make significant progress in
Have a personal connection with a person who doesn’t know much about AI alignment, but who you think would have a real chance of making valuable contributions to the field if they entered it (or perhaps you know someone who cares about AI risk and has such a personal connection, and you know enough to speak on their behalf). This might be your friend, colleague, supervisor, etc.
I would love to hear your thoughts on some of the following questions:
What reasons prevent you from introducing them to AI alignment, e.g. by scheduling time with them and talking about some of the motivations and open problems in the field?
If you’ve tried something like this, how did it go?
What factors do you think would increase your willingness to bring AI alignment to their attention, and/or the potential value of doing so? Bonus points for reasonably low-hanging fruit here.