Stampy’s AI Safety Info soft launch

Stampy’s AI Safety Info is a project, started by Rob Miles, to create an interactive FAQ about existential risk from AI. Our goal is to build a single resource that informs all audiences, whether that means giving them a basic introduction to the concepts, addressing their objections, or onboarding them into research or other useful projects. We currently have 280 answers live on the site, and hundreds more in draft.

After running two ‘Distillation Fellowships’, in which a small team of paid editors spent three months improving and expanding the material, we think the site is ready for a soft launch. We’re posting here to invite the collective attention of LessWrong and the EA Forum, hoping that your feedback will help us prepare for a full launch that uses Rob’s YouTube channel to reach a large audience.

What we’d like to know

In roughly descending order of priority:

  • Where are our answers factually or logically wrong, especially in non-obvious ways?

  • Where are we leaving out key information from the answers?

  • What parts are hard to understand?

  • Where can we make the content more engaging?

  • What else have we overlooked?

  • What questions should we add?

We’re particularly interested in suggestions from experts on the questions and answers related to their specialization – please let us know[1] if you’d be interested in a call where you advise us on our coverage of your domain.

How to leave feedback

  • Click the edit button in the corner of any answer on aisafety.info to open the corresponding Google doc.

  • Leave comments and suggestions on the doc.[2] We’ll process these to improve the answers.

  • To leave general feedback about the site as a whole, use this form or comment on this post.

To discuss answers in more depth, or get involved with further volunteer writing and editing, you can join Rob Miles’s Discord or look at the ‘Get Involved’ guide on Coda.

Front end

When exploring the site, you may notice that the front end has room for improvement. We welcome feedback on our planned redesign. AIsafety.info is built by volunteer developers – we hope to get a prototype of the redesign working, but if someone reading this is willing to step up and take the lead on that project, we’ll get there faster. A more in-depth user experience overhaul is also coming, with a more prominent place for a chatbot that specializes in AI alignment.

Our plans

Depending on available funding and volunteer time, we plan to:

  • Use your feedback to further improve our answers, then make a full launch to the wider public when we’re confident it’s ready.

  • Run more Distillation Fellowships (watch for an announcement about the third fellowship soon).

  • Run more write-a-thon events, including the third, from October 6th through 9th, where participants can add to the content and potentially go on to join as Distillation Fellows.

  • Improve the front end, as detailed above.

  • Get the chatbot (currently a prototype) ready to integrate into the main interface.

Thanks for helping us turn aisafety.info into the go-to reference for clear, reliable information about AI safety!

  1. E.g. in comments or direct messages here, or by posting on Discord or contacting stevenk3458 there.

  2. It’s not necessary, but using a Google account will make this a bit easier – that way, your comments will show up under your name.