Our plan for 2019-2020: consulting for AI Safety education

UPDATE: this plan received sizable criticism. We are reflecting on it, and working on a revision.

Tl;dr: a conversation with a grantmaker made us drop our long-held assumption that outputs needed to be concrete to be recognized. We decided to take a step back and approach the improvement of the AI Safety pipeline on a more abstract level, doing consulting and research to develop expertise in the area. This will be our focus in the next year.

Trial results

We tested our course in April. We didn't get a positive result. It looks like this was due to bad test design: high variance and a low number of participants clouded any pattern that could have emerged. In hindsight, we should clearly have tested knowledge before the intervention as well as after it, though arguably this would have been nearly impossible given the one-month deadline that our funder imposed.
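To make the variance point concrete, here is a minimal simulation (illustrative numbers only, not our trial data) of why a paired pre/post design can detect a learning effect that a post-only comparison with this few participants would likely miss:

```python
# Illustrative only: simulated data, not RAISE's trial results.
# Shows why a pre/post (paired) design can detect an effect that a
# post-only comparison misses when baseline knowledge varies a lot
# and the sample is small.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 12                               # small participant count
baseline = rng.normal(50, 15, n)     # widely varying prior knowledge
true_gain = 5                        # modest effect of the course
post = baseline + true_gain + rng.normal(0, 3, n)

# Post-only analysis: compare post scores against a control group
# drawn from the same noisy baseline distribution.
control = rng.normal(50, 15, n)
t_post, p_post = stats.ttest_ind(post, control)

# Pre/post analysis: each participant is their own control, so the
# between-person variance drops out of the comparison.
t_paired, p_paired = stats.ttest_rel(post, baseline)

print(f"post-only: p = {p_post:.3f}")    # typically not significant
print(f"pre/post:  p = {p_paired:.3f}")  # typically clearly significant
```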

What we did learn is that we are largely unaware of the extent to which our course is being used. This is mostly due to using software that is not yet mature enough to give us this kind of data. If we want to continue building the course, we feel that our first priority ought to be to set up a feedback mechanism that gives us precise insight into how students are journeying through it.

However, other developments have pointed our attention away from developing the course, and towards developing the question that the course is an answer to.

If funding weren't a problem

For as long as RAISE has existed, its runway has never been longer than about two months. This crippled our ability to make long-term decisions, in favor of dishing out quick results to show value. Seen from a "quick feedback loops" paradigm, this may have been a healthy dynamic. It also led to sacrifices that we didn't actually want to make.

Had we been tasked with our particular niche without any funding constraints, our first move would have been to do an extensive study of what the field needs. We feel that EA is missing a management layer. There is a lot that a community-focused management consultant could do, simply by connecting the dots and coordinating the many projects and initiatives that exist in the LTF space. We have identified 30 (!) small and large organisations that are involved in AI Safety. Not all of them are talking to each other, or even aware of each other.

Our niche being AI Safety education, we would have spent a good six months developing expertise and a network in this area. We would have studied the scientific frontiers of relevant domains like education and the metasciences. We would have interviewed AIS organisations and asked them what they look for in employees. We would have studied existing alignment researchers and looked for patterns. We would have talked to grantmakers and considered their models.

Funding might not be a problem

After getting turned down by the LTF fund (which was especially meaningful because they didn't seem to be constrained by funding), we had a conversation with one of their grantmakers. The premise of the conversation was something like "what version of RAISE would you be willing to fund?" The answer was pretty much what we just described. They thought pipeline improvement was important, but hard, and that just going with the first idea that sounds good (an online course) would be a lucky shot if it worked. Instead, someone should be thinking about the bigger picture first.

The mistake we had been making from the beginning was to assume we needed concrete results to be taken seriously.

Our new direction

EA really does seem to be missing a management layer. People are thinking about their careers, starting organisations, doing direct work and research. Not many people are drawing up plans for coordination on a higher level and telling people what to do. Someone ought to be dividing up the big picture into roles for people to fill. You can see the demand for this in how seriously we take 80k. They're the only ones doing this beyond the organisational level.

It is much the same in the cause area we call AI Safety Education. Most AIS organisations are necessarily thinking about hiring and training, but no one is specializing in it. In the coming year, our aim is to fill this niche, building expertise and doing management consulting. We will aim to smarten up the coordination there. Concrete outputs might be:

  • Advice for grantmakers who want to invest in the AI Safety researcher pipeline

  • Advice for students who want to get up to speed and test themselves quickly

  • Suggesting interventions for entrepreneurs who want to fill gaps in the ecosystem

  • Publishing thinkpieces that advance the community's discussion, like this one

  • Creating and maintaining wiki pages about subjects that are relevant to us

  • Helping AIS research orgs with their recruitment process

We’re hiring

Do you think this is important? Would you like to fast-track your involvement with the x-risk community? Do you have good google-fu, or would you like to conduct in-depth interviews with admirable people? Most importantly, are you not afraid to blaze your own trail?

We think we could use one or two more people to join us in this effort. You'd be living for free in the EA Hotel. We can't promise any salary in addition to that. Do ask us for more info!

Let’s talk

A large part of our work will involve talking to those involved in AI Safety. If you are working in this field and are interested in working on the pipeline, we would like to talk to you.

If you have important information to share, or have been plotting to do something in this area for a while and want to compare perspectives, we would like to talk to you.

And even if you would just like to have an open-ended chat about any of this, we would like to talk to you!

You can reach us at raise@aisafety.info