Current AI Safety Roles for Software Engineers

[Note: Please make sure to see the comments for other, newer information]

I’ve had several conversations over the last few months with engineers who were trying to enter the field of AI safety. It became evident that I was giving pretty much the same advice to all of them, so I finally decided to write it up.

Some more context: late last year it became evident to me that funding was becoming much less of a bottleneck for AI safety, and engineering expertise much more of one. I decided to leave my job to work in the area. I spent some time consulting for Ought, then eventually reached a point where it seemed more useful to do some self-study for a while. During that period I spoke to several people in the Bay Area about engineering needs at AI safety organizations.

The hiring situation still seems a bit confusing to me. There are a lot of EA engineers who want to do direct EA work but are not sure what jobs they could get. Most AI safety organizations seem eager to find more good employees (and the AI-oriented ones, engineers), but are still fairly selective in their choices. I think these organizations have typically been able to be selective, would prefer to remain so when possible, and also have particular demands that come from being small, new, and theoretical/EA.

If you are an engineer who wants to work at an EA organization soon or in the future, I suggest either getting really good at a few skills particularly useful to EA organizations (reinforcement learning, functional programming, ML), getting really good at startup engineering skills, or getting good at non-engineering skills those organizations need. From what I’ve seen, spending marginal years on “generic medium-large company backend skills” is often not that useful for EA positions now, and I don’t expect it to become more useful in the future.

The following list covers the main organizations I’ve considered for work around AI safety, starting as an engineer without particular ML experience. If you are interested in all engineering positions in EA, I recommend 80k’s job list. Also, 80,000 Hours recently released an interview with two EA-aligned ML engineers, which I recommend if you are interested in more detail.

OpenAI Safety Team

I think the OpenAI safety initiatives may be some of the most visible AI-safety work at the moment. I believe the team has around 4-7 researchers and 2-5 research engineers. They are looking for more of both, but the research engineering position is likely more obtainable for people without much existing research expertise. From what I understand, they believe they have many “shovel-ready” ideas that can be handed to research engineers, and could absorb more research engineers for this purpose. They seem to intend to grow considerably in the next few years.

Their team is pretty focused on reinforcement learning, and this is the main unique requirement for new recruits. It is very learnable on your own; in fact, team members were quite friendly to me in recommending specific ways to self-study (mainly by replicating many of the main papers in reinforcement learning). They just released a project to help people self-educate in Deep RL.

This educational effort seems like it would take around 2-8 months of full-time work for an experienced programmer. If you don’t have the money to take time off to do this, I personally recommend reaching out to EA community members/grant organizations to ask for it. If you are a good programmer without the time to study RL, you may want to get in contact with them anyway; I imagine they may be willing to take some non-RL people with enough general software experience. Link.
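To give a rough flavor of what “replicating the main RL papers” involves at the smallest possible scale, here is a minimal, self-contained policy-gradient (REINFORCE-style) sketch on a toy multi-armed bandit, in plain numpy. This is only an illustration I put together; the environment, reward values, and learning rate are all made up for the example, and it is not part of any organization’s curriculum.

```python
# Minimal REINFORCE-style policy gradient on a toy 3-armed bandit (numpy only).
# Purely illustrative: the arms, rewards, and learning rate are invented here.
import numpy as np

rng = np.random.default_rng(0)

true_means = np.array([0.2, 0.5, 0.8])  # hidden mean reward of each arm
logits = np.zeros(3)                     # policy parameters, one per arm
lr = 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(logits)
    action = rng.choice(3, p=probs)
    reward = rng.normal(true_means[action], 0.1)

    # Gradient of log pi(action) with respect to the logits is
    # one_hot(action) - probs.
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    logits += lr * reward * grad_log_pi  # REINFORCE update, no baseline

print("learned policy:", softmax(logits))  # should concentrate on the best arm
```

Actual paper replications (DQN, PPO, and so on) are essentially this same loop with neural networks, real environments, and far more engineering care, which is exactly the skill such study is meant to build.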

Also note that OpenAI is actively hiring a frontend engineer, who would work partly with the safety team.

Ought

Ought is currently looking for one full-stack web developer with strong fundamentals in computer science and functional programming. One of their main upcoming projects involves building a new system, likely from scratch, that looks quite demanding in terms of sophistication. They are also looking for a COO, with a preference for people with programming experience, so if that sounds interesting to you I suggest reaching out. I’d personally be happy to chat about the organization; it’s the one I have the most experience with.

If you are interested in reading about Ought, I recommend starting with their blog and then going through much of the rest of their website. While they are pretty new, they do have buy-in from OpenPhil, FHI, and Paul Christiano, and are respected within the main EA safety community. Right now they are relatively small; this could be good for someone who likes getting involved early on, but bad for people who like structure. Link.

MIRI

MIRI seems to be looking for software engineers who are generally very capable. Machine learning experience is nice but definitely not necessary. Similar to Ought, they seem to be looking for people with strong fundamentals and functional programming experience/interest, though with less focus on architecture experience. The work is very secretive, so be prepared to accept that aspect of it. Also, note that the culture is quite specific; the interview process selects heavily for culture fit, and I recommend considering whether it would be a long-term fit for you (for the right people it seems fantastic). I believe they would like to hire several engineers in the next few years. Their bar is high, in my opinion, in part because there are some strong candidates. Of course, that also means that if you do join, you would be working with some pretty smart people. Link.

CHAI

I’ve had the privilege of spending some time this past summer attending a few CHAI events and similar, and have found the crowd fairly diverse and friendly. The organization is basically made up of several PhD candidates working on a variety of different projects around AI safety. This seemed like the youngest of the AI safety groups (in terms of the age of its personnel, not the age of its interest in the subject). They are hiring Research Engineers to support their work (I think at the start they’re really looking for one good one to try out); I believe in this role you would basically be assisting a few of them on work that particularly needs engineering support. The work may be pretty varied for this reason (a few months on one project, then a few on another), which comes with costs and benefits.

I think this position is probably the most overlooked on this list, and as such, it may be the most available to engineers without much specialized experience. The position requires some ML experience, but not as much as I initially feared; I think that with 1-3 online courses, your ML skills may be enough to be useful there for introductory work. They also seem willing to help train candidates who would dedicate enough time to the cause afterward.

DeepMind Safety Team

I don’t know very much about DeepMind’s safety team, but I have heard that it too is trying to grow. One main differentiator is that it’s based in London.

General AI work at OpenAI / DeepMind / Google Brain

My general impression is that the direct AI-safety approaches above are considered the most valuable, but there are lots of other ML/AI safety positions that could be good for career building or corporate influence. I have not done much research into these positions.