Singularity Institute Executive Director Q&A #2

Previously: Interview as a researcher, Q&A #1

This is my second Q&A as Executive Director of the Singularity Institute. I’ll skip the video this time.

Singularity Institute Activities

Bugmaster asks:

...what does the SIAI actually do? You don’t submit your work to rigorous scrutiny by your peers in the field… you either aren’t doing any AGI research, or are keeping it so secret that no one knows about it… and you aren’t developing any practical applications of AI, either… So, what is it that you are actually working on, other than growing the SIAI itself?

It’s a good question, and my own biggest concern right now. Donors would like to know: Where is the visible return on investment? How can I see that I’m buying existential risk reduction when I donate to the Singularity Institute?

SI has a problem here, because it has done so much invisible work lately. Our researchers have done a ton of work that hasn’t been written up and published yet; Eliezer has been writing his rationality books, which aren’t yet published; Anna and Eliezer have been developing a new rationality curriculum for the future “Rationality Org” that will be spun off from the Singularity Institute; Carl has been doing a lot of mostly invisible work in the optimal philanthropy community; and so on. I believe this is all valuable x-risk-reducing work, but of course not all of our supporters are willing to just take our word for it that we’re doing valuable work. Our supporters want to see tangible results, and all they see is the Singularity Summit, a few papers a year, some web pages and Less Wrong posts, and a couple of rationality training camps. That’s good, but not good enough!

I agree with this concern, which is why I’m focused on doing things that happen to be both x-risk-reducing and visible.

First, we’ve been working on visible “meta” work that makes the Singularity Institute more transparent and effective in general: a strategic plan, a donor database (“visible” to donors in the form of thank-yous), a new website (forthcoming), and an annual report (forthcoming).

Second, we’re pushing to publish more research results this year. We have three chapters forthcoming in The Singularity Hypothesis, one chapter forthcoming in The Cambridge Handbook of Artificial Intelligence, one forthcoming article on the difficulty of AI, and several other articles and working papers we’re planning to publish in 2012. I’ve also begun writing the first comprehensive outline of open problems in Singularity research, so that interested researchers from around the world can participate in solving the world’s most important problems.

Third, there is visible rationality work forthcoming. One of Eliezer’s books is now being shopped to agents and publishers, and we’re field-testing different versions of rationality curriculum material for use in Less Wrong meetups and classes.

Fourth, we’re expanding the Singularity Summit brand, an important platform for spreading the memes of x-risk reduction and AI safety.

So my answer to the question is: “Yes, visible return on investment has been a problem lately due to our choice of projects. Even before I was made Executive Director, helping to correct that situation was one of my top concerns, and this is still the case today.”

What if?

XiXiDu asks:

What would SI do if it became apparent that AGI is at most 10 years away?

This would be a serious problem because by default, AGI will be extremely destructive, and we don’t yet know how to make AGI not be destructive.

What would we do if we thought AGI was at most 10 years away?

This depends on whether it’s apparent to the wider public that AGI is at most 10 years away, or whether that conclusion rests only on a nonpublic analysis.

If it becomes apparent to a wide variety of folks that AGI is close, then it should be much easier to get people and support for Friendly AI work, so a big intensification of effort would be a good move. But if the analysis that AGI is 10 years away also leads to hundreds of well-staffed and well-funded AGI research programs and a rich public literature, then trying to outrace the rest with a Friendly AI project becomes much harder. In that case, an intensified Friendly AI effort could build up knowledge in Friendly AI theory and practice that could be applied (somewhat less effectively) to systems not designed from the ground up for Friendliness. This knowledge could then be distributed widely to increase the odds of a project pulling through, for example by calling in real Friendliness experts. But in general, a widespread belief that AGI is only 10 years away would be a much hairier situation than the one we’re in now.

But if the case that AGI is at most 10 years away were nonpublic (but nonetheless persuasive to supporters who have lots of resources), then it could be used to differentially attract support to a Friendly AI project, hopefully without provoking dozens of AGI teams to intensify their efforts. So if we had a convincing case that AGI was only 10 years away, we might not publicize it, but would instead make the case to individual supporters that we needed to immediately intensify our efforts toward a theory of Friendly AI in a way that only much greater funding would allow.

Budget

MileyCyrus asks:

What kind of budget would be required to solve the friendly AI problem?

Large research projects always come with large uncertainties about how difficult they will be, especially projects that, like Friendly AI, require fundamental breakthroughs in mathematics and philosophy.

Even a small, 10-person team of top-level Friendly AI researchers taking academic-level salaries for a decade would require tens of millions of dollars. And even getting to the point where you can raise that kind of money requires a slow “ramping up” of researcher recruitment and output. We need enough money to attract the kinds of mathematicians who are also being recruited by hedge funds, Google, and the NSA, and to fund a “chair” for each of them so that they can dedicate their careers to the problem. That part alone requires tens of millions of dollars for just a few researchers.
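
To make the “tens of millions” arithmetic concrete, here is a minimal back-of-the-envelope sketch; the salary and overhead figures are illustrative assumptions of mine, not official Singularity Institute budget numbers:

```python
# Rough cost of a small, decade-long Friendly AI research team.
# All figures are illustrative assumptions, not official Singularity Institute numbers.

researchers = 10            # a small team of top-level researchers
years = 10                  # a decade of sustained work
salary_per_year = 150_000   # assumed competitive, academic-level salary (USD)
overhead_factor = 1.5       # assumed benefits, office space, admin, travel, etc.

total_cost = researchers * years * salary_per_year * overhead_factor
print(f"Estimated total: ${total_cost:,.0f}")  # Estimated total: $22,500,000
```

Varying the assumed salary or overhead within a plausible range keeps the total in the tens of millions, which is the point of the estimate above.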

Other efforts like the Summit, Less Wrong, outreach work, and early publications cost money, and they work toward building the community and infrastructure required to start funding chairs for top-level mathematicians to become career Friendly AI researchers. This kind of work costs between $500,000 and $3 million per year, with more money per year of course producing more progress.

Predictions

Wix asks:

How much do members’ predictions of when the singularity will happen differ within the Singularity Institute?

I asked some Singularity Institute staff members to answer a slightly different question, one pulled from the Future of Humanity Institute’s 2011 machine intelligence survey:

Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer ‘never’ if you believe such a milestone will never be reached.

In short, the survey participants’ median estimates (excepting 5 outliers) for 10%/50%/90% were:

2028 / 2050 / 2150

Here are the responses of five Singularity Institute staff members, names unattached, for the years by which they would assign a 10%/50%/90% chance of HLAI creation, conditioning on no global catastrophe halting scientific progress (a short sketch for summarizing them follows the list):

  • 2025 / 2073 / 2168

  • 2030 / 2060 / 2200

  • 2027 / 2055 / 2160

  • 2025 / 2045 / 2100

  • 2040 / 2080 / 2200
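
As a reference point for comparing these answers to the survey medians above, here is a minimal sketch that aggregates the five responses by taking medians; the choice of aggregation method is mine, not something the Singularity Institute reported:

```python
from statistics import median

# The five staff responses listed above, as (10%, 50%, 90%) years.
responses = [
    (2025, 2073, 2168),
    (2030, 2060, 2200),
    (2027, 2055, 2160),
    (2025, 2045, 2100),
    (2040, 2080, 2200),
]

# Median year for each confidence level across the five respondents.
staff_medians = tuple(median(years) for years in zip(*responses))
print(staff_medians)  # (2027, 2060, 2168)
```

By this rough aggregation, the staff medians come out slightly later than the survey medians at the 50% and 90% levels.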

Those are all the answers I had time to prepare in this round; I hope they are helpful!