TAISU 2019 Field Report

Last summer I delivered a “field report” after attending the Human Level AI multi-conference. In mid-August of this year I attended the Learning-by-doing AI Safety Workshop (LBDAISW? I’ll just call it “the workshop” hereafter) and the Technical AI Safety Unconference (TAISU) at the EA Hotel in Blackpool. So in a similar spirit to last year, I offer you a field report of some highlights and what I took away from the experience.

I’ll break it down into 3 parts: the workshop, TAISU, and the EA Hotel.

The workshop

The learning-by-doing workshop was organized by Linda Linsefors and led by Linda and Davide Zagami. The zeroth day (so labeled because it was optional) consisted of talks by Linda and Davide explaining machine learning concepts. Although this day was optional, I found it very informative because machine learning “snuck up” on me by becoming relevant after I earned my Master’s in Computer Science, so a number of gaps have remained in my knowledge of how modern ML works. Having a full day covering the basics, with lots of time for questions and answers, was very beneficial to me, as I think it was for many of the other participants. Most of us had lumpy ML knowledge, so it was worthwhile to get us all on the same footing so we could at least talk coherently in the common language of machine learning. As I said, though, it was optional, and I think it could easily have been skipped by someone happy with their level of familiarity with ML.

The next three days were all about solving AI safety. The approach Linda took was to avoid loading people up with existing ideas, which was relevant because some of the participants had not previously thought much about AI safety; instead, she asked us to try to solve AI safety afresh. The first day we did an exercise of imagining different scenarios and how we would address AI safety under each of them. Linda called this “sketching” solutions to AI safety, the goal being to develop one or more sketches of how AI safety might be solved by going directly at the problem. For example, you might start by working through your basic assumptions about how AI would be dangerous, see where that pointed to a need for solutions, then do it again with different assumptions and see where they led you. Once we had done that for a couple of hours, we presented our ideas about how to address AI safety. The ideas ranged from me talking about developing an adequate theory of human values as a necessary subproblem, to others considering multi-agent, value learning, and decision theory subproblems, to more nebulous ideas about “compassionate” AI.

The second day was for filling knowledge gaps. At first it was a little unclear what this would look like (independent study, group study, talks, something else?), but we quickly settled on a series of talks. We identified several topics people felt they needed to know more about to address AI safety, and then the person who felt they understood each topic best gave a voluntary, impromptu talk on it for 30 to 60 minutes. This filled up the day as we talked about decision theory, value learning, mathematical modeling, AI forecasting as it relates to x-risks, and machine learning.

The third and final day was a repeat of the first: we did the sketching exercise again and then presented our solutions in the afternoon. Other participants may later want to share what they came up with, but I was surprised to find myself drawn to the idea of “compassionate” AI, an idea put forward by two of the least experienced participants. I found it compelling for personal reasons, but as I thought about what it would mean for an AI to be compassionate, I realized that meant it had to act compassionately, and before I knew it I had rederived much of the original reasoning around Friendly AI and found myself reconvinced of the value of doing MIRI-style decision theory research to build safe AI. Neat!

Overall I found the workshop valuable even though I had the most years of experience thinking about AI safety of anyone there (by my count nearly 20). It was a fun and engaging way to get me to look at problems I’ve been thinking about for a long time with fresh eyes, helped especially by the inclusion of participants with minimal AI safety experience. I think the workshop would be a worthwhile use of three days for anyone actively working in AI safety, even those who consider themselves “senior” in the field: it offered a valuable space for reconsidering basic assumptions and rediscovering the reasons why we’re doing what we’re doing.


TAISU

TAISU was a four-day unconference. Linda organized it as two two-day unconferences held back-to-back, and I think this was a good choice because it forced us to schedule events with greater urgency and allowed us to easily make the second two days responsive to what we learned from the first two. At the start of each two-day segment, we met to plan out the schedule on a shared calendar where we could pin up events on pieces of paper. There were multiple rooms so that several events could happen at once, and sessions were a mix of talks, discussions, idea workshops, one-on-ones, and social events. All content was created by and for the participants, with very little of it planned extensively in advance; mostly we just got together, bounced ideas around, and talked about AI safety for four days.

Overall TAISU was a lot of fun, and it was mercifully less dense than a typical unconference, meaning there were plenty of breaks, unstructured periods, and times when the conference single-tracked. Personally, I got a lot out of using it as a space to workshop ideas. I’d hold a discussion period on a topic, people would show up, I’d talk for maybe 15 minutes laying out my idea, and then they’d ask questions and discuss. I found it a great way to make rapid progress on ideas: airing out the details, learning about objections and mistakes, and picking up new things I could take back to evolve my ideas into something better.

One of the ideas I workshopped I think I’m going to drop: AI safety via dialectic, an extension of AI safety via debate. Working out the details helped me see why I’m not excited about it: I don’t think AI safety via debate will work, for very general reasons, and the specific things I thought I could do to improve it by replacing debate with dialectic would not be enough to overcome the weaknesses I see. Another was further working out compassionate AI, which reaffirmed my sense that it was a rederivation of Friendly AI. A third I just posted about: a predictive coding theory of human values.

The EA Hotel

It’s a bit hard to decide how much detail to give about the EA Hotel. On the one hand, it was awesome, full stop. On the other, it was awesome for lots of little reasons I could never hope to fully recount. I feel like their website fails to do them justice. It’s an awesome place filled with cool people trying their best to save the world. Most of the folks at the Hotel are doing work that is difficult to measure, but spending time with them I can tell they all have a powerful intention to make the world a better place, and to do so in ways that are effective and impactful.

Blackpool is nice in the summer (I hear the weather gets worse at other times of year). The Hotel itself is old and small, yet bigger than you would expect from the outside. Greg and the staff have done a great job renovating and improving the space to make it nice to stay in. Jacob, whom I’ll call “the cook” here though he does a lot more, and Deni, the community manager, do a great job of making the EA Hotel feel like a home and bringing the folks in it together. When I was there it was easy to imagine myself staying for a few months to work on projects without the distraction of a day job.

I hope to be able to visit again, maybe next year for TAISU 2!

Disclosure: I showed a draft of this to Linda to verify facts. All mistakes, opinions, and conclusions are my own.