[Question] Feedback request: Is the time right for an AI Safety stack exchange?

Epistemic status: I think this is a really good idea, and most of the ~20 people I’ve asked in the community agree. I am uncertain what exactly the best product would look like, and whether the community is big enough yet to make this a success—but I am confident that if we started it now, it would become ‘widely regarded as a valuable resource’ by the community within the next two years[1].

I am a PhD student conducting AI Safety research from the Meridian office in Cambridge, UK. Meridian has just spun out a new research org, Geodesic Research. This summer I’ve been mentoring a MARS project with Geodesic and working as an external collaborator on a MATS stream.

These collaborations have inspired me to think about knowledge-sharing systems that might scale well as the AI Safety community grows. Currently, each org has its own system for getting questions answered, mostly 1-1 communication or technical_questions Slack channels. These don’t scale well: the same technical questions get asked in multiple Slack workspaces, leading to duplicated effort and making it hard to share information between orgs.

I think it would be great to have a StackExchange site where people can pose technical questions and receive answers that everyone in the community can access and benefit from, and that this would complement the existing infrastructure on Slack and LessWrong.

So I set up a proposal here.

Geodesic has already agreed to incubate the proposal and commit to its private beta phase.

Why might the AI Safety community want a new Q&A platform?

LessWrong’s ‘question’ feature is really great (thanks LightCone!). But I think an official StackExchange site would complement this and add further value to the community.

My core motivation is that there are many questions I personally would have liked answered in my research over the last few months that have not felt appropriate for LW. Typically these are more technical (less conceptual/philosophical) and shorter than the typical LW question. Things like:

  • I’d like to add a canary string to my new research project. What do I need to know to set up the right canary string?

  • What is the difference between scalable oversight and (online training for) low stakes control?

  • I’m getting Policy Violation errors on [API] for this safety-relevant task—why?

  • Can anyone recommend existing coding benchmarks that are meaningful, easy to run, and not yet saturated by frontier models (as of Sep 2025)?

These don’t need a full LW post, but benefit from expert answers that persist and can be referenced.

Additional benefits:

  • LW has strong ‘rationalist vibes’ that might be off-putting to newcomers to the field. By contrast, a StackExchange site would give an impression of authority, solidity, and impartiality.

  • LW discussion might feel more intimidating to junior and mid-level researchers, whereas StackExchange is familiar and trusted, and comes with an understanding that there will be lots of dumb questions posted.

  • StackExchange provides clear incentives and attribution for good answers, a record that is permanent yet updatable, and a venue for cross-org collaboration.

From speaking to folks around Meridian, I think there is broader demand for this sort of resource and culture. Since the tech stack is already built, adopting such a resource is largely a cultural investment, and it could have a really big impact as the field scales further.

StackExchange’s site creation process

StackExchange has a formal process for creating new sites through their Area51 platform (see their FAQ). This process has four stages:

  • Definition: questions are proposed and upvoted or downvoted; comments are not supported. Discussion occurs in the discussion zone.

  • Commitment: users ‘sign’ to commit themselves to being active in the early days of the proposal.

  • Private Beta: the site is live but not searchable or listed on StackExchange. The site needs sufficient engagement in its first 5 weeks to progress to Public Beta.

  • Public Beta: the site is live, listed, and fully functional. After 6 months of continued involvement, Area51’s Community Team will consider removing the beta label.

We are currently in the Definition stage and need 60 followers and 40 questions with at least 10 upvotes each to pass this phase. Since each user can propose at most 5 questions and cast at most 5 upvotes, gathering the required 400 upvotes takes at least 80 engaged users. You can help with this now! (see below)

The thresholds for the subsequent stages aren’t specified in the FAQs; presumably they are at the discretion of the Area51 admins. I think Geodesic’s support should get us most of the way through the Commitment phase, enabling Private Beta to launch, at which point we can properly test for product-market fit.

Beta results are available for current and previous proposals (e.g. for Artificial Intelligence). From these, I think our main difficulty will be getting 150 users with 200+ reputation. We’ll need to reach a significant fraction of the AI Safety community to hit these numbers.

What do I need from you?

Share the proposal

Please mention this and send a link to colleagues and friends who might be interested!

Thoughts and feedback (2-10 minutes)

Would this be a useful piece of infrastructure for the AI Safety community? How could we make it as helpful as possible for you? What sorts of technical questions are you currently struggling to get answered? Is 150 engaged users feasible? How can we best reach a large audience?

Questions and upvotes (3-15 minutes)

If you think this is worthwhile, please

  1. Spend 3 minutes to log in to Area51, follow the proposal, and then quickly allocate all 5 of your upvotes.

  2. Spend a further 12 minutes brainstorming questions and proposing your favourites.
    Aim for 3 questions: we would only need 10 more people to do this to hit the milestone of 40 questions.
    This could easily be the most impactful 15 minutes you spend today.

If you’re looking for inspiration, think about your most recent Google search, LLM query, or question you asked in a Slack channel. I’ve also written up some further prompts and even provided some ‘spare’ question stubs in this Google Doc.

Longer term commitment (hours to days)

If you are excited about the idea, please sign up for the Commitment phase and help us through Private Beta. I’d be particularly keen to hear from orgs that might be interested in piloting the site alongside Geodesic, and to get more engagement from people and orgs with policy and governance backgrounds, to ensure the site caters to their needs as well.

Operational support (hours to weeks)

We would benefit from high-quality, large-scale comms to advertise the proposal and its progress, in order to get sufficient engagement from the broader community. If you have connections to AI Safety groups, programs, or organisations that might benefit from this resource—particularly those with participants or graduates who could contribute to the Definition phase—I’d be grateful if you could share this proposal with them.

I think my comparative advantage is in research rather than ops/execution of this kind, so if anyone is interested in taking a more active role in shepherding this through the StackExchange process, please reach out! I’d be keen to support you and hand over the project. This could be a really impactful use of a couple of person-weeks for someone with the right ops background.


What do you think? How can we best develop this infrastructure for the AI Safety community?


  1. ^

    E.g. I am 85% confident that if someone with my level of ops/sales skill (or better) put in 4 weeks FTE over the next 6 months, the site would become “widely regarded as a valuable resource” by the community after two years, and would clearly have been worth the initial time investment.