AGI safety field building projects I’d like to see

This list of field building ideas is inspired by Akash Wasil’s and Ryan Kidd’s similar lists. Like the projects on those lists, these rely on people with specific skills and field knowledge to be executed well.

None of these ideas was developed by me alone; they are a result of the CanAIries Winter Getaway, a two-week, unconference-style AGI safety retreat I organized in December 2022.

Events

Organize a global AGI safety conference

This should be self-explanatory: It is odd that we still don’t have an AGI safety conference that allows for networking and lends the field credibility.

There are a number of versions of this that might make sense:

  • an EAG-style conference for people already in the community to network

  • an academic-style conference engaging CS and adjacent academia

  • an industry-heavy conference (maybe sponsored by AI orgs?)

  • a virtual next-steps conference, e.g. for AGISF participants

Some people have tried this out at a local level: https://aisic2022.net.technion.ac.il

(If you decide to work on this: www.aisafety.global is available via EA domains; contact hello@alignment.dev)

Organize retreats for AGI safety professionals

As far as I can see, most current AGI safety retreats are optimized for junior researchers: networking and learning opportunities for students and young professionals. Conferences, with their focus on talks and 1-on-1s, are useful for transferring knowledge, but don’t offer the extensive ideation that a retreat focused on workshops and discussion rounds could.

Organizing a focused retreat for up to 60-80 senior researchers to debate the latest state of alignment research might be very valuable for memetic cross-pollination between approaches, organizations, and continents. It might also make sense to do this during work days, so that people’s employers can send them. I suspect that the optimal mix of participants would be around 80% researchers, with the rest funders, decision-makers, and the most influential field builders.

Information infrastructure

Start an umbrella AGI safety non-profit organization in a country where there is none

This would make it easier for people to join AGI safety research, and could offer a central exchange hub. Some functions of such an org could include:

  • Serving as an employer of record for independent AGI safety researchers.

  • Providing a central point for discussions, coworking, and publications. You probably want a virtual space for discussion, like a Discord or Slack, named after your country/area; list it on https://coda.io/@alignmentdev/alignmentecosystemdevelopment, then promote it and make sure it is discoverable by people interested in the field. The Discord/Slack can then be used to host local-language online or in-person meetups.

A candidate for doing this mostly needs ops/finance skills, not a comprehensive overview of the AGI safety field.

Mind that form follows function: try to do this with as little administrative and infrastructure overhead as possible. Find out whether other orgs already offer the relevant services (for example, AI Safety Support offers ops infrastructure to other alignment projects, and national EA orgs like EA Germany offer employer-of-record services). Build MVPs before going big and ambitious.

In general, the cheap minimum version of this would be becoming an AGI Safety Coordinator.

Become an AGI Safety Coordinator

It would be useful to have a recognized Coordinator role, and people filling it. These people would not necessarily have decision power or direct impact; their job would be to know what everyone in AGI safety is doing, to collect, organize, and publish resources, and to help people figure out whom to work and collaborate with. Ideally, they would also serve as a bridge between the so far under-connected areas of AGI safety and AI policy.

Some of the members of AI Safety Support have been doing similar things, but they are mostly recognized by newer members of the community and might not be utilized by the established organizations and people. A Coordinator would also be known to the established organizations and people.

Create a virtual map of the world where Coordinators can add themselves

This would make it way easier for people to find each other. https://eahub.org/ attempted to gather *all* members of the EA community in one place, but failed because buy-in was too costly for individuals.

Instead, we might want to have a database of key coordinators and information nodes in the community. A handful of people would be enough to maintain it, and it probably would never list more than ~200 people, grouped by location, as the go-to addresses for local knowledge.

Create and maintain a living document of AGI safety field building ideas

The minimum version of this would be a maintained list of ideas like the ones in Akash’s, Ryan’s, and this post.

Useful functions:

  • Anyone can add new ideas

  • People can tag themselves as interested in working on/funding a certain idea

  • A way to filter by expected quality of ideas. A tremendous and underexplored model for doing this is the EigenKarma system a handful of people are currently developing (a toy sketch of the underlying trust-propagation idea follows this list). See here for a draft.

  • A function for commenting on ideas in order to improve them, or to flag ineffective/high-downside-risk ones
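
To give a flavor of what quality-filtering by trust could look like under the hood, here is a minimal, hypothetical sketch of eigenvector-based trust propagation in the spirit of EigenTrust/PageRank. It is not the actual EigenKarma design; the endorsement graph, damping factor, and the idea of weighting idea ratings by user trust are illustrative assumptions only.

```python
# Toy sketch of eigenvector-style trust propagation (EigenTrust/PageRank flavor).
# NOT the actual EigenKarma design; graph, damping factor, and scoring idea are
# illustrative assumptions only.
import numpy as np

# endorsements[i][j]: how strongly user i endorses user j (e.g. via upvotes)
endorsements = np.array([
    [0.0, 3.0, 1.0],
    [2.0, 0.0, 4.0],
    [0.0, 1.0, 0.0],
])

# Row-normalize so each user distributes one unit of trust across others.
row_sums = endorsements.sum(axis=1, keepdims=True)
trust = np.divide(endorsements, row_sums,
                  out=np.zeros_like(endorsements), where=row_sums > 0)

# PageRank-style power iteration: trust flows repeatedly along endorsements,
# with a damping factor so the scores converge on any graph.
n = len(trust)
damping = 0.85
scores = np.full(n, 1.0 / n)
for _ in range(100):
    scores = (1 - damping) / n + damping * (scores @ trust)

print(scores)  # higher score = more trusted; ratings by trusted users count more
```

The point is only that a small trust graph plus a few lines of linear algebra is enough to get quality-weighted filtering off the ground; a real system would additionally need identity handling, seeding of initial trust, and sybil resistance.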

A sensible existing project to build this into would be Apart Research’s https://aisafetyideas.com/. While their interface is optimized for research proposals, the list under https://aisafetyideas.com/?categories=Field-Building might be a good minimum viable product for a field building version.

Other examples of living documents that might serve as inspiration for this: https://aisafety.world, https://aisafety.community

Funding

Make it easier for AGI safety endeavors to get funding from non-EA sources

Our primary funding sources suffered last year, and there are numerous foundations and investors out there happy to invest in potentially world-saving and/or profitable projects. Especially now, it might be high-leverage to collect knowledge and build infrastructure for tapping into these funds. I lack the local knowledge to give recommendations for how to tap into funding sources within academia. However, here are four potential routes for tapping into non-academic funding sources:

1. Offer a service to proofread grant applications and give feedback. That can be extremely valuable for relatively little effort. Many people don’t want to send their application to a random stranger, but maybe people know you from the EA Forum? Or you can simply offer to give feedback to people who already know you.

2. Identify more relevant funding sources and spread knowledge about them. https://www.futurefundinglist.com/ is a great example: It’s a list of dozens of longtermist-adjacent funds, both in and outside the community. (Though apparently, it is not kept up to date: the FTX Future Fund is still listed as of Jan 19, 2023.)

Governments, political parties, and philanthropists often have nation-specific funds happy to subsidize projects. Expanding the Future Funding List further and finding/building similar national lists might be extremely valuable. For example, there is a whole book (in German) listing funding sources for charity work.

3. Become a professional grant writer. A version of this that is affordable for new/small orgs and creates decent incentives and direct feedback for grant writers might be a prize-based arrangement: application writers get paid if and only if a grant gets through.

If you are interested in this and already have exceptional written communication skills, a reasonable starting point may be grant writing courses like https://philanthropyma.org/events/introduction-grant-writing-7.

4. Teach EAs the skills to communicate their ideas to grantmakers. Different grantmakers have different values and lingo. If you want to convince them to give you money, you have to make the case on their terms. This is something many AGI safety field builders haven’t had to learn so far. Accordingly, a useful second step after becoming a grant writer yourself might be figuring out how to teach grant writing as effectively as possible to the relevant people. (A LessWrong/EA Forum sequence? Short trainings in pitching and grant writing?)

Write a guide for how to live more frugally, optimized for the needs of members of the AGI safety community

The more frugally people live, the less dependent they are on a day job. In addition, the same amount of grantmaker money could support a larger number of individuals. Accordingly, pushing the idea of frugality by writing an engaging guide to efficient altruism might help us do more research per dollar earned and donated by community members.

Some resources that such a guide should contain:

Potential downside risk: This route may be particularly attractive to relatively junior people with few connections to the established orgs. Being well-connected in the community is crucial, both for developing good ideas and for developing the necessary network to get employed later. Accordingly, a good version of this guide would discourage people from compromising too strongly on closeness to other community members for the sake of frugality.

Outreach and onboarding

Run the Ops for more iterations of ML4Good

French AGI safety field building org EffiSciences has run several iterations of the machine learning bootcamp ML4Good, which teaches technical knowledge as well as AGI safety fundamentals, so as to produce more AGI safety researchers or research engineers. It has a proven track record of getting people involved and motivated to do more AGI safety work (see the writeup for details), and EffiSciences can dispatch instructors to teach these bootcamps. Thus, the constraint on scaling is finding organizers to run the operations work (promoting the event, handling registrations, securing an event location, …) for new iterations in various countries.

If interested, contact jonathan.claybrough[at]gmail.com

Set up a guest appearance by an AI safety researcher with exceptional outreach skills on a major Street Epistemology YouTube channel

For preventing a negative singularity, AGI safety research must move faster than capabilities research. Two attack routes for that are a) to speed up AGI safety research, and b) to slow down capabilities research. One way to do b) would be to get more capabilities researchers to be concerned about AGI safety. The general community consensus seems to be that successful outreach to capabilities researchers would be extremely valuable, and that unsuccessful outreach would be extremely dangerous. Accordingly, hardly anyone is working on this.

Street Epistemology is the atheist response to Christian street preachers. Street epistemologists use the Socratic method to help people question why they believe what they believe, often leading to updates in confidence. More info at https://streetepistemology.com/

Bringing more SE skills into the AGI safety community, or more capable Street Epistemologists into AGI safety, might help make it sufficiently safe to do outreach to capabilities researchers. Bonus: Street Epistemologists need just enough object-level knowledge of the topic at hand to be able to follow their conversation partner, not to argue against them. Accordingly, a solid understanding of SE and a basic background in machine learning might be enough to have useful and low-risk SE-style conversations with capabilities researchers.

A core route of memetic exchange within the SE community is the set of YouTube channels where street epistemologists film themselves nudging strangers to examine their core beliefs. If an AGI safety researcher with great teaching skills were to appear as a conversation partner on one of these channels, that might get more Street Epistemologists concerned enough to join the AGI safety community and spread their memeplex.

Do workshops/outreach at good universities in EA-neglected and low/middle-income countries

(e.g. India, China, Japan, Eastern Europe, South America, Africa, …)

Talent is spread way more evenly across the globe than our outreach and recruitment strategies are. Expanding those strategies to other countries might be a high-leverage opportunity to increase the talent inflow into AGI safety. For example, Morocco, Tunisia, and Algeria have good math universities.

One low-hanging fruit here might be to pay talented graduates for a fellowship at leading AGI Safety labs.

Improve the landscape of AGI safety curricula

Get an overview of the existing AGI safety curricula. Find out what’s missing, e.g. for particular learning styles or levels of seniority. Make it exist.

Publishing mediocre curricula is probably net negative at this point, because it draws attention away from the good ones that already exist. What the alignment curricula landscape needs at this point is careful vetting, identifying gaps, and filling them with new, well-written, well-maintained curricula. In particular, we might need more curricula on AI governance, or on foundational concepts for field building, with curated resources on topics like MVP-building, project management, the existing infrastructure, etc.

For hints on what specifically is missing, this LW post on the 3-books-technique for learning a new skill might be a useful framework. Also, mind that different people have different learning styles: Some learn best through videos, others through text, audio, or practical exercises.

Some great examples of curricula:

Other

Support AGI safety researchers with your skills

There are countless ways to use skills unrelated to AGI safety to support AGI safety researchers, so that they have more time and energy for their work. Make yourself easily findable and approachable.

Bonus points for creating infrastructure to enable this. One version would be a Google form/sheet where people can add their respective skills.

Existing services include:

Other skills that might be valuable:

  • Software support, e.g. Python support, pair programming

  • Personal assistance

  • Productivity coaching

  • Tutoring (e.g. math topics, coding, neuroscience, …)

  • Visa support

  • Tax support

Do this now:

  • (1-5 min brainstorming) What skills do you have? Could they be used to support researchers?

Find new AGI safety community building bottlenecks

Survey people about what they need and what their biggest bottlenecks are, e.g. people coming out of SERI MATS and similar programs, as well as working researchers.

General Tips

  • Trust your abilities! You might feel like there are other people who would do a better job than you at organizing the project. But if the project isn’t being done, it looks like whoever could do it is busy doing even more important things.

  • Get feedback! If people don’t coordinate, they might try the same thing twice or more. In addition, outreach-related projects in particular can have a negative impact. Things you might want to do if you consider working on outreach-related projects:

    • Ask on the AI Alignment Slack.

    • Write me a message here, and I’ll connect you to relevant people.

  • Cooperate! Launching projects that aim for global optima sometimes works differently than the intuitions we built in competitive settings would suggest.

    • Make use of the existing infrastructure: Building background infrastructure is costly. Instead of going freelance or founding a new org, consider asking existing orgs whether it makes sense for them to incorporate your project. Examples include AI Safety Support, Apart Research, and Alignment Ecosystem Development, the team behind aisafety.info and other projects.

    • Make it easy for people to propose improvements and collaborations to your project: Have an “about” page, a “suggest” button, an Admonymous account, …

  • Delegate! As much as possible, as little as necessary.

    • If you develop more ideas than you can execute on, write up lists like this one. You could also ask junior researchers/​community builders whether they’d be up for picking up your dropped projects.

    • If you have the necessary funds, consider hiring a PA via https://pineappleoperations.org/ to do ops work you don’t have the slack for.

  • Test your hypotheses! The Lean Startup approach offers a valuable framework for this. Consider reading some of the relevant literature. The 80/20 version is grokking this article by Henrik Kniberg: Making sense of MVP (Minimum Viable Product).

  • “Ideas have no value; only execution and people do!” Mind the explore-exploit tradeoff and actually do the best option you currently have available. Collating this list was fun, but if all of us just make lists all day...

Thanks to the following people for their contributions and comments: Jonathan Claybrough, Swante Scholz, Nico Hillbrand, Magdalena Wache, Jordan Pieters, Silvio Martin.
