Revisiting SI’s 2011 strategic plan: How are we doing?

Progress updates are nice, but without a previously defined metric for success it’s hard to know whether an organization’s achievements are noteworthy or not. Is SI making good progress, or underwhelming progress?

Luckily, in August 2011 we published a strategic plan that outlined lots of specific goals. It’s now almost August 2012, so we can check our progress against the standard set nearly one year ago. The plan doesn’t specify a timeline for its goals, but I remember hoping we could complete most of them by the end of 2012, while understanding that the plan deliberately listed more goals than we could accomplish with our current resources.

Let’s walk through the goals in that strategic plan, one by one. (Or, you can skip to the “summary and path forward” section at the end.)

1.1. Clarify the open problems relevant to our core mission.

This was accomplished to some degree with So You Want to Save the World, and is on track to be accomplished to a greater degree with Eliezer’s sequence “Open Problems in Friendly AI,” which you should begin seeing late in August.

1.2. Identify and recruit researcher candidates who can solve research problems.

Several strategies for doing this were listed, but the only ones worth pursuing at our current level of funding were to recruit more research associates and hire more researchers. Since August 2011 we have done both, adding half a dozen research associates and hiring nearly a dozen remote researchers, including a few who are working full-time on papers and other projects (e.g. Kaj Sotala).

1.3. Use researchers and research associates to solve open problems related to Friendly AI theory.

I never planned to be doing this by the end of 2012; it’s more of a long-term goal. A first step in this direction is to have Eliezer transition back to FAI work, e.g. with his “Open Problems in Friendly AI” Summit 2011 talk and forthcoming blog sequence. And actually, SI research associate Vladimir Slepnev has been making interesting progress in LW-style decision theory, and is working on a paper explicating one of his results. (Some credit is due to Vladimir Nesov and others.)

1.4. Estimate current AI risk levels.

Alas, we haven’t done much of this. There’s some analysis in Intelligence Explosion: Evidence and Import, Reply to Holden on Tool AI, and Reply to Holden on the Singularity Institute. Also, Anna is working on a simple model of AI risk in MATLAB (or some similar program). But I would have liked to have had the cash to hire a researcher to continue things like AI Risk and Opportunity: A Strategic Analysis.
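To give a flavor of what a “simple model of AI risk” might look like, here is a purely illustrative sketch (my own toy example, not Anna’s actual model), written in Python rather than MATLAB; every probability range below is a placeholder, not anyone’s real estimate:

```python
import random

def toy_ai_risk_estimate(n_samples=100_000, seed=0):
    """Toy Monte Carlo estimate of P(AI catastrophe this century).

    Multiplies three placeholder parameters, each drawn from a wide
    uniform distribution to represent uncertainty. Purely illustrative.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        p_agi = rng.uniform(0.1, 0.9)     # AGI is built this century
        p_unsafe = rng.uniform(0.1, 0.8)  # first AGI lacks safety work
        p_doom = rng.uniform(0.2, 0.9)    # unsafe AGI ends in catastrophe
        if rng.random() < p_agi * p_unsafe * p_doom:
            hits += 1
    return hits / n_samples

print(f"toy estimate: {toy_ai_risk_estimate():.1%}")
```

A real model would of course need to justify its parameter choices and check sensitivity to them; the point of even a crude model is that it forces one’s assumptions into the open.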

2.1. Continue operation of the Singularity Summit, which is beginning to yield a profit while also reaching more people with our message.

We did run Singularity Summit 2011, and Singularity Summit 2012 is on track to be noticeably more fun and professional than all past Summits. (So, register now!)

The strategic plan listed subgoals of gaining corporate sponsors and possibly expanding the Summit outside the USA. We gained corporate sponsors for Summit 2011, and are on track to gain even more of them for Summit 2012. Early in 2012 we also pursued an opportunity to host the first Singularity Summit in Europe, but the financing didn’t quite come through.

2.2. Cultivate LessWrong.com and the greater rationality community as a resource for Singularity Institute.

The strategic plan lists 5 subgoals, and we made progress on all of them. SI (a) used LessWrong to recruit additional supporters, (b) made use of LessWrong for collaborative problem solving (e.g. this and this), (c) published lots of top-level posts, and (d) published How to Run a Successful Less Wrong Meetup Group. The early efforts of CFAR, and our presence at (e.g.) Skepticon IV, made headway on 2.2.e: “Encourage improvements in critical thinking in the wider world. We need a larger community of critical thinkers for use in recruiting, project implementation, and fundraising.”

2.3. Spread our message and clarify our arguments with public-facing academic deliverables.

We did exceptionally well on this, though much more is needed. In addition to detailed posts like Reply to Holden on Tool AI and Reply to Holden on the Singularity Institute, SI has more peer-reviewed publications in 2012 than in all past years combined.

2.4. Build more relationships with the optimal philanthropy, humanist, and critical thinking communities, which share many of our values.

Though this work has been mostly invisible, Carl Shulman has spent dozens of hours building relationships with the optimal philanthropy community. We’ve also built relationships with the humanist and critical thinking communities, through our presence at Skepticon IV but especially through the early activities of CFAR.

2.5. Cultivate and expand Singularity Institute’s Volunteer Program.

SI’s volunteer program got a new website (though we’d like to launch another redesign soon), and we estimate that SI volunteers have done 2x-5x more work per month this year than in the past few years.

2.6. Improve Singularity Institute’s web presence.

Done. We got a new domain, Singularity.org, and put up a new website there. We produced additional introductory materials, like Friendly-AI.com and IntelligenceExplosion.com. We produced lots of “landing pages,” for example our tech summaries. We did not, however, complete subgoals (d) and (e) — “Continue to produce articles on targeted websites and other venues” and “Produce high-quality videos to explain Singularity Institute’s mission” — because their ROI isn’t high enough at our current funding level.

2.7. Apply for grants, especially ones that are given to other organizations and researchers concerned with the safety of future technologies (e.g. synthetic biology and nanotechnology).

This one was always meant as a longer-range goal. SI still needs to be “fixed up” in certain ways before this is worth trying.

2.8. Continue targeted interactions with the public.

We didn’t do much of this, either. In particular, Eliezer’s rationality books are on hold for now; we have the author of a best-selling science book on retainer to take a crack at them this fall, after he completes his current project.

2.9. Improve interactions with current and past donors.

Success. We created and cleaned up our donor database, communicated more regularly with our support base (first via monthly updates, and now via our shiny new newsletter, which you can sign up for here), and updated our top donors list.

3.1. Encourage a new organization to begin rationality instruction similar to what Singularity Institute did in 2011 with Rationality Minicamp and Rationality Boot Camp.

This is perhaps the single most impressive thing we did this year, in the sense that it required dozens of smaller pieces to all work, and work together. The organization is now called the Center for Applied Rationality (CFAR), and it was recently approved for 501c3 status. It has its own website, has been running extremely well-reviewed rationality retreats, and has lots more exciting stuff going on that hasn’t been described online yet. Sign up for CFAR’s newsletter to get these juicy details when they are written up.

3.2. Use Charity Navigator’s guidelines to improve financial and organizational transparency and efficiency.

There are 9 subgoals listed here. We’ve since decided we don’t want to grow to five independent board members (subgoal b) at this time, because a smaller board runs more efficiently. (I’ve now heard too many nightmare stories about trying to get things done with a large board.) We did achieve (a), (d), (e), (g), (h), and (i). Subgoal (c) is a longer-term goal that we are working toward (we need a professional bookkeeper to clean up our internal processes before we can have a CPA audit our books, and we’re interviewing bookkeepers now). Subgoal (f) — a records retention policy — is in the works.

3.3. Ensure a proper orientation for new Singularity Institute staff and visiting fellows.

This is in progress; we’re creating orientation materials.

3.4. Secure lines of credit to increase liquidity and smooth out the recurring cash-flow pinches that result from having to do things like make payroll and rent event spaces.

We’ve done this.

3.5. Improve safe return on financial reserves.

For starters, we put a large chunk of our resources in an ING Direct high-interest savings account.

3.6. Ensure high standards for staff effectiveness.

There are two subgoals here. Subgoal (b) was to have staff maintain work logs, which we’ve been doing for many months now. Subgoal (a) is more ambiguous. We haven’t given people formal job descriptions because roles change quickly at an organization this small. But I do provide stronger management of SI staff and projects than ever before, and this clarifies expectations for our staff, often down to task and project deadlines.

3.7. When hiring, advertise for applications to find the best candidates.

We’ve been doing this for several months now, e.g. here and here.

Summary

That’s it for the main list! Now let’s check in on what we said our top priorities for 2011-2012 were:

  1. Public-facing research on creating a positive singularity. Check. SI has more peer-reviewed publications in 2012 than in all past years combined.

  2. Outreach / education / fundraising. Check. Especially through CFAR.

  3. Improved organizational effectiveness. Check. Lots of good progress on this.

  4. Singularity Summit. Check.

In summary, I think SI is a bit behind where I hoped we’d be by now, though this is largely because we poured so much into launching CFAR. As a result, CFAR has turned out to be significantly more cool at launch than I had anticipated.

Fundraising has been a challenge. One donor failed to actually give their $46,000 pledge despite repeated reminders and requests, and our support base is (understandably) anxious to see a shift from movement-building work to FAI research, a shift I have been fighting for since I was made Executive Director. (Note that spinning off rationality work to CFAR is a substantial part of trimming SI down into being primarily an FAI research institute.)

Reforming SI into a more efficient, effective organization has been my greatest challenge. Frankly, SI was in pretty bad shape when Louie and I arrived as interns in April 2011, and there have been an incredible number of holes to dig SI out of — and several more remain. (In contrast, it has been a joy to help set up CFAR properly from the very beginning, with all the right organizational tools and processes in place.) Reforming SI also presents a fundraising problem: the work is time-consuming and sometimes costly, but generally unexciting to donors. I can see the light at the end of the tunnel, though: it’s close enough to see, but we won’t reach it unless we improve our fundraising success in the next 3-6 months.

SI’s path forward, from my point of view, looks like this:

  1. We finish launching CFAR, which takes over the rationality work SI was doing. (Before January 2013.)

  2. We change how the Singularity Summit is planned and run so that it pulls our core staff away from core mission work to a lesser degree. (Before January 2013.)

  3. Eliezer writes the “Open Problems in Friendly AI” sequence. (Before January 2013.)

  4. We hire 1-2 researchers to produce technical write-ups from Eliezer’s TDT article and from his “Open Problems in Friendly AI” sequence. (Beginning September 2012, except that right now we don’t have the cash to hire the 1-2 people I know who could do this and who want to do it as soon as we have the money to hire them.)

  5. With the “Open Problems in Friendly AI” sequence and the technical write-ups in hand, we greatly expand our efforts to show math/compsci researchers that there is a tractable, technical research program in FAI theory. As a result, some researchers work on the sexiest of these problems from their departments, and other math researchers take more seriously the prospect of being hired by SI to do technical research in FAI theory. (Beginning, roughly, in April 2013.) Also: there won’t be classes on x-risk at SPARC (a rationality camp for young elite math talent), but some SPARC students might end up being interested in FAI stuff by osmosis.

  6. With a more tightly honed SI, improved fundraising practices, and visible mission-central research happening, SI is able to attract more funding and hire even more FAI researchers. (Beginning, roughly, in September 2013.)

If you want to help us make this happen, please donate during our July matching drive!