AI Safety Info’s answer to “I want to help out AI Safety without making major life changes. What should I do?” is currently:
It’s great that you want to help! Here are some ways you can learn more about AI safety and start contributing:
Learn More:
Learning more about AI alignment will give you a good foundation for helping. You could start by absorbing content and thinking about challenges or possible solutions.
Consider these options:
Keep exploring our website.
Complete an online course. AI Safety Fundamentals is a popular option that offers courses on both alignment and governance. There is also Intro to ML Safety, which follows a more empirical curriculum. Getting into these courses can be competitive, but all the material is also available online for self-study; see the follow-up question for more.
Learn more by reading books (we recommend The Alignment Problem), watching videos, or listening to podcasts.
Join the Community:
Joining the community is a great way to find friends who are interested and will help you stay motivated.
Join a local AI safety, Effective Altruism,[1] or LessWrong group. You can also organize your own!
Join online communities such as Rob Miles’s Discord or the AI Alignment Slack.
Write thoughtful comments on platforms where people discuss AI safety, such as LessWrong.
Attend an EAGx conference for networking opportunities.
Here’s a list of existing AI safety communities.
Donate, Volunteer, and Reach Out:
Donating to organizations or individuals working on AI safety can be a great way to support their work.
Donate to AI safety projects.
Help us write and edit the articles on this website so that other people can learn about AI alignment more easily. You can always ask on Discord for feedback on things you write.
Write to local politicians about policies to reduce existential risk from AI.
If you don’t know where to start, consider signing up for a navigation call with AI Safety Quest to learn what resources are out there and to find social support.
If you’re overwhelmed, you could look at our other article that offers more bite-sized suggestions.
[1] Not all EA groups focus on AI safety; contact your local group to find out if it’s a good match.