AI Safety Discord community (requesting help!)

This week I started the Safe AI Community.

It’s an online community where individuals and groups can connect and support each other in their AI safety work.

Twitter: https://twitter.com/SafeAICommunity

Discord: discord.gg/FeZbFnAvve

Why?

  1. A multidisciplinary and open approach to AI safety will generate novel ideas and collaborations, and support the creation of AI that is informed by representative perspectives and moral views.

  2. Forums, private research institutes, and student groups do create a lot of value, but I believe there’s a gap to be filled: a hub that anyone can access to meet others and have live discussions. Learning about and contributing to AI safety should be open to all, and we can always do more to grow the community in numbers and diversity.

Primary goals:

  1. To create an open space that welcomes anyone, regardless of background or skill level.

  2. To grow and promote AI safety from the bottom up, increasing both the diversity of thought within the field and its accessibility.

What does the community provide?

We’re just getting started, and have already populated our Discord with a number of features, including:

  1. A resource database, with content in multiple formats (videos, podcasts, blogs, papers, etc.), including a beginners’ section that introduces high-level concepts.

  2. A job/funding/collaboration section, which we will keep up to date (and soon automate). We’re only a few days old, yet we already have a professor offering to supervise applications for Future of Life Institute research!

  3. A place for discussion. We’re starting with only a couple of channels, but as we grow we’ll create specific topic sections and weekly discussion threads.

I’ll soon be adding new features, such as:

  1. Automatically re-posting LessWrong and AI Alignment Forum posts to the community to facilitate and encourage discussion around them (see the sketch after this list). This can also be done for news, research papers, etc.

  2. A Notion workspace that mirrors the Discord resources and acts as an open AI safety wiki.
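To make the re-posting idea concrete, here’s a minimal sketch of how it might work, assuming the forums expose standard RSS feeds and that we post via a Discord webhook. The feed URL and webhook URL below are placeholders, not the community’s actual configuration:

```python
# Sketch of the planned re-posting automation: poll an RSS feed and forward
# new posts to a Discord channel via a webhook. The URLs below are assumed
# placeholders; a real bot would also need persistence and rate-limit handling.

import time
import feedparser  # pip install feedparser
import requests    # pip install requests

FEED_URL = "https://www.lesswrong.com/feed.xml"                # assumed feed location
WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # hypothetical webhook
POLL_SECONDS = 600

seen_links = set()  # in-memory only; note the first poll forwards the feed's current entries


def post_to_discord(title: str, link: str) -> None:
    """Send one post as a plain message through the Discord webhook."""
    payload = {"content": f"New post: **{title}**\n{link}"}
    requests.post(WEBHOOK_URL, json=payload, timeout=10)


def poll_once() -> None:
    """Fetch the feed and forward any entries we haven't seen yet."""
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        if entry.link not in seen_links:
            seen_links.add(entry.link)
            post_to_discord(entry.title, entry.link)


if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(POLL_SECONDS)
```

A webhook keeps the sketch simple; a fuller version could run as a proper Discord bot (e.g. with discord.py) so posts land in the right topic channels and the “seen” state is stored persistently.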

Please contribute!

This project is intended to be entirely community-driven, and needs your help to grow!

I’m looking for:

  • Anyone and everyone to join the community; we’re only going to be as valuable as our members!

  • Suggestions/feedback. Obviously, I want to make this as valuable as possible. The project is inspired by the idea of improving AI through diverse perspectives and bottom-up contributions, and I believe the same applies to the community itself.

  • Connections. I want to connect with individuals and groups and find ways we can all help each other. Connect and engage with us on Twitter, or make direct introductions, to help us achieve our goals.

White paper: Once we have some momentum and more contributors, I’d also love to put together a short white paper on our vision and purpose.

Funding: I’ll also be seeking funding to grow and add value to the community. If you’d like to help us get that funding, or just be paid to help once the funding is acquired, please join us!

Who am I?

I’m a software developer and AI safety researcher, currently investigating decentralized AI and its potential to mitigate risks from narrow and general AI. I’ll be posting more here in the months to come!