Our plan for 2019-2020: consulting for AI Safety education

UPDATE: this plan received sizable criticism. We are reflecting on it, and working on a revision.

TL;DR: a conversation with a grantmaker made us drop our long-held assumption that outputs needed to be concrete to be recognized. We decided to take a step back and approach improving the AI Safety pipeline on a more abstract level, doing consulting and research to develop expertise in the area. This will be our focus for the next year.

Trial results

We tested our course in April and did not get a positive result. It looks like this was due to bad test design: high variance and a low number of participants obscured any pattern that could have emerged. In hindsight, we clearly should have tested knowledge before the intervention as well as after it, though arguably this would have been nearly impossible given the one-month deadline our funder imposed.

What we did learn is that we have little insight into the extent to which our course is being used. This is mostly because the software we use is not yet mature enough to provide this kind of data. If we want to continue building the course, we feel our first priority ought to be setting up a feedback mechanism that gives us precise insight into how students progress through it.

However, other developments have turned our attention away from developing the course, and towards developing the question that the course is an answer to.

If funding weren’t a problem

During the existence of RAISE, its runway has never been longer than about two months. This crippled our ability to make long-term decisions, in favor of dishing out quick results to show value. Seen from a “quick feedback loops” paradigm, this may have been a healthy dynamic, but it also led to sacrifices that we didn’t actually want to make.

Had we been tasked with our particular niche without any funding constraints, our first move would have been an extensive study of what the field needs. We feel that EA is missing a management layer. There is a lot that a community-focused management consultant could do, simply by connecting the dots and coordinating the many projects and initiatives that exist in the LTF space. We have identified 30 (!) small and large organisations that are involved in AI Safety. Not all of them are talking to each other, or even aware of each other.

Our niche being AI Safety education, we would have spent a good six months developing expertise and a network in this area. We would have studied the scientific frontiers of relevant domains like education and the metasciences. We would have interviewed AIS organisations and asked them what they look for in employees. We would have studied existing alignment researchers and looked for patterns, talked to grantmakers, and considered their models.

Funding might not be a problem

After getting turned down by the LTF Fund (which was especially meaningful because they didn’t seem to be constrained by funding), we had a conversation with one of their grantmakers. The premise of the conversation was something like “what version of RAISE would you be willing to fund?” The answer was pretty much what we just described. They thought pipeline improvement was important but hard, and that just going with the first idea that sounded good (an online course) would be a lucky shot if it worked. Instead, someone should be thinking about the bigger picture first.

The mistake we had been making from the beginning was to assume we needed concrete results to be taken seriously.

Our new direction

EA really does seem to be missing a management layer. People are thinking about their careers, starting organisations, and doing direct work and research, but not many are drawing up plans for coordination at a higher level and telling people what to do. Someone ought to be dividing up the big picture into roles for people to fill. You can see the demand for this in how seriously we take 80k; they’re the only ones doing this beyond the organisational level.

The same holds in the cause area we call AI Safety Education. Most AIS organisations necessarily think about hiring and training, but no one is specializing in it. In the coming year, our aim is to fill this niche: building expertise, doing management consulting, and improving coordination in the area. Concrete outputs might be:

  • Advice for grantmakers that want to invest in the AI Safety researcher pipeline

  • Advice for students that want to get up to speed and test themselves quickly

  • Suggesting interventions for entrepreneurs that want to fill up gaps in the ecosystem

  • Publishing thinkpieces that advance the community’s discussion, like this one

  • Creating and maintaining wiki pages about subjects that are relevant to us

  • Helping AIS research orgs with their recruitment process

We’re hiring

Do you think this is important? Would you like to fast-track your involvement with the x-risk community? Do you have good google-fu, or would you like to conduct in-depth interviews with admirable people? Most importantly, are you not afraid to blaze your own trail?

We think we could use one or two more people to join us in this effort. You’d be living for free in the EA Hotel. We can’t promise any salary in addition to that. Do ask us for more info!

Let’s talk

A large part of our work will involve talking to people in AI Safety. If you work in this field and are interested in the pipeline, we would like to talk to you.

If you have important information to share, have been plotting to do something in this area for a while, and want to compare perspectives, then we would like to talk to you.

And even if you would just like to have an open-ended chat about any of this, we would like to talk to you!

You can reach us at raise@aisafety.info