Video Q&A with Singularity Institute Executive Director

HD Video link.

MP3 version.

Transcript below.

Intro

Hi everyone. I’m Luke Muehlhauser, the new Executive Director of Singularity Institute.

Literally hours after being appointed Executive Director, I posted a call for questions about the organization on the LessWrong.com community website, saying I would answer many of them on video — and this is that video.

I’m doing this because I think transparency and communication are important.

In fact, when I began as an intern with Singularity Institute, one of my first projects was to spend over a hundred hours working with everyone in the organization to write its first strategic plan, which the board ratified and which you can now read on our website.

When I was hired as a researcher, I gave a long text-only interview with Michael Anissimov in which I answered 30 questions about my personal background, the mission of Singularity Institute, our technical research program, the unsolved problems we work on, and the value of rationality training.

After becoming Executive Director, I immediately posted that call for questions — a few of which I will now answer.

Staff Changes

First question. Less Wrong user ‘wedrifid’ asks:

The staff and leadership at [Singularity Institute] seem to be undergoing a lot of changes recently. Is instability in the organisation something to be concerned about?

To answer this, I should address the specific staff changes that wedrifid is talking about. At the end of summer 2011, Jasen Murray — who was running the Visiting Fellows program — resigned in order to pursue a business opportunity related to his passion for improving people’s effectiveness. At the same time, I was hired as a researcher after working as an intern for a few months, and Louie Helm was hired as Director of Development after having done significant volunteer work for Singularity Institute for even longer than that. Carl Shulman was also hired as a researcher at this time; he, too, had done lots of volunteer work before that, including publishing papers like “Arms Control and Intelligence Explosions,” “Implications of a Software-Limited Singularity,” and “Basic AI Drives and Catastrophic Risks,” and perhaps some others.

Another change is that our President, Michael Vassar, is launching a personalized medicine company that we’re all pretty excited about. It has a lot of promise, and we’re glad to see him pursue it. He’ll retain the title of President because he will continue to do quite a lot of good work for us — networking and spreading our mission wherever he goes. But he will no longer take a salary from Singularity Institute, and that was his own idea, proposed several months ago.

But we needed somebody to run the organization, and I was the favorite choice for the job.

So, should you be worried about instability? Well… I’m excited about the way the organization is taking shape, but I will say that we need more people. In particular, our research team took a hit when I moved from Researcher to Executive Director. So if you care about our mission and you can work with us to write working papers and other documents, you should contact me! My email is luke@intelligence.org.

And I’ll say one other thing. Do not fall prey to the sin of underconfidence. When I was living in Los Angeles I assumed I wasn’t special enough to apply even as an unpaid visiting fellow, and Louie Helm had to call me on Skype and talk me into it. So I thought “What the hell, it can’t hurt to contact Singularity Institute,” and within 9 months of that first contact I went from intern to researcher to Executive Director. So don’t underestimate your potential — contact us, and let us be the ones who say “No.”

And I suppose now would be a good time to answer another question, this one asked by ‘JoshuaZ’, who asks:

Are you concerned about potential negative signaling/status issues that will occur if [Singularity Institute] has as an executive director someone who was previously just an intern?

Not really. And if there is a concern here, it isn’t that I used to be an unpaid Visiting Fellow; it’s that I went from Visiting Fellow to Executive Director so quickly. But that’s… one of the beauties of Singularity Institute. Singularity Institute is not a place where you need to “pay your dues.” If you’re hard-working and competent, you get along with people, and you’re clearly committed to rationality and to reducing existential risk, then the leadership of the organization will put you where you can do the most good and be the most effective, regardless of irrelevant factors like duration of employment.

Rigorous Research

Next question. Less Wrong user ‘quartz’ asks:

How are you going to address the perceived and actual lack of rigor associated with [Singularity Institute]?

Now, what I initially thought quartz was talking about was Singularity Institute’s relative lack of publications in academic journals like Risk Analysis or Minds and Machines, so let me respond to that interpretation of the question first.

Luckily, I am probably the perfect person to answer this question, because when I first became involved with Singularity Institute this was precisely my own largest concern, but I changed my mind when I learned the reasons why Singularity Institute does not push harder than it does to publish in academic journals.

So. Here’s the story. In March 2011, before I was even an intern, I wrote a discussion post on Less Wrong called ‘How [Singularity Institute] could publish in mainstream cognitive science journals.’ I explained in detail not only what the right style is for mainstream journals, but also why Singularity Institute should publish in mainstream journals. My four reasons were:

  1. Some donors will take Singularity Institute more seriously if it publishes in mainstream journals.

  2. Singularity Institute would look a lot more credible in general.

  3. Singularity Institute would spend less time answering the same questions again and again if it publishes short, well-referenced responses to such questions.

  4. Writing about these problems in the common style… will help other smart researchers to understand the relevant problems and perhaps contribute to solving them.

Then, in April 2011, I moved to the Bay Area and began to realize why exerting a lot of effort to publish in mainstream journals probably isn’t the right way to go for Singularity Institute, and I wrote a discussion post called ‘Reasons for [Singularity Institute] to not publish in mainstream journals.’

What are those reasons?

The first one is that more people read, for example, Yudkowsky’s thoughtful blog posts or Nick Bostrom’s pre-prints from his website… than the actual journals.

The second reason is that in many cases, most of a writer’s time is invested after the article is accepted by a journal, which means that most of the work comes after you’ve done the most important part and written up all the core ideas. Most of the work is tweaking. Those are dozens and dozens of hours not spent on finding new safety strategies, writing new working papers, etc.

A third reason is that publishing in mainstream journals requires you to jump through lots of hoops and to contend with reviewer bias and the usual aversion to stuff that sounds weird.

A fourth reason to not publish so much in mainstream journals is that doing so involves a pretty large delay in publication, somewhere between 4 months and 2 years.

So: If you’re a mainstream academic seeking tenure, publishing in mainstream journals is what you need to do, because that’s how the system is set up. If you’re trying to solve hard problems very quickly, publishing in mainstream journals can sometimes be something of a lost purpose.

If you’re trying to solve hard problems in mathematics and philosophy, why would you spend most of your limited resources tweaking sentences rather than getting the important ideas out there for yourself or others to improve and build on? Why would you accept delays of 4 months to 2 years?

At Singularity Institute, we’re not trying to get tenure. We don’t need you to have a Ph.D. We don’t care if you work at Princeton or at Brown Community College. We need you to help us solve the most important problems in mathematics, computer science, and philosophy, and we need to do that quickly.

That said, it will sometimes be worth it to develop a working paper into something that can be published in a mainstream journal, if the effort required and the time delay are not too great.

But just to drive my point home, let me read from the opening chapter of the new book Reinventing Discovery, by Michael Nielsen, the co-author of the leading textbook on quantum computation. It’s a really great passage:

Tim Gowers is not your typical blogger. A mathematician at Cambridge University, Gowers is a recipient of the highest honor in mathematics, the Fields Medal, often called the Nobel Prize of mathematics. His blog radiates mathematical ideas and insight.

In January 2009, Gowers decided to use his blog to run a very unusual social experiment. He picked out an important and difficult unsolved mathematical problem, a problem he said he’d “love to solve.” But instead of attacking the problem on his own, or with a few close colleagues, he decided to attack the problem completely in the open, using his blog to post ideas and partial progress. What’s more, he issued an open invitation asking other people to help out. Anyone could follow along and, if they had an idea, explain it in the comments section of the blog. Gowers hoped that many minds would be more powerful than one, that they would stimulate each other with different expertise and perspectives, and collectively make easy work of his hard mathematical problem. He dubbed the experiment the Polymath Project.

The Polymath Project got off to a slow start. Seven hours after Gowers opened up his blog for mathematical discussion, not a single person had commented. Then a mathematician named Jozsef Solymosi from the University of British Columbia posted a comment suggesting a variation on Gowers’s problem, a variation which was easier, but which Solymosi thought might throw light on the original problem. Fifteen minutes later, an Arizona high-school teacher named Jason Dyer chimed in with a thought of his own. And just three minutes after that, UCLA mathematician Terence Tao—like Gowers, a Fields medalist—added a comment. The comments erupted: over the next 37 days, 27 people wrote 800 mathematical comments, containing more than 170,000 words. Reading through the comments you see ideas proposed, refined, and discarded, all with incredible speed. You see top mathematicians making mistakes, going down wrong paths, getting their hands dirty following up the most mundane of details, relentlessly pursuing a solution. And through all the false starts and wrong turns, you see a gradual dawning of insight. Gowers described the Polymath process as being “to normal research as driving is to pushing a car.” Just 37 days after the project began Gowers announced that he was confident the polymaths had solved not just his original problem, but a harder problem that included the original as a special case. He described it as “one of the most exciting six weeks of my mathematical life.” Months’ more cleanup work remained to be done, but the core mathematical problem had been solved.

That is what working for rapid progress on problems rather than for tenure looks like.

And here’s the kicker. We’ve already done this at Singularity Institute! This is what happened, though not quite as fast, when Eliezer Yudkowsky made a few blog posts about open problems in decision theory, and the community rose to the challenge, proposed solutions, and iterated and iterated. That work continued with a decision theory workshop and a mailing list that is still active, where original progress in decision theory is being made quite rapidly, and with none of it going through the hoops and delays of publishing in mainstream journals.

Now, I do think that Singularity Institute needs to publish more research, both in and out of mainstream journals. But most of what we publish should be blog posts and working papers, because our goal is to solve problems quickly, not to wait 4 months to 2 years to go through a mainstream publisher and garner tenure and prestige and so on.

That said, I’m quite happy when people do publish on these subjects in mainstream journals, because prestige is useful for bringing attention to overlooked topics, and because, hopefully, those journal publications happen in cases where they aren’t a huge waste of time and effort. For example, I love the work being done by our frequent collaborators at the Future of Humanity Institute at Oxford, and I always look forward to what they’re doing next.

Now, back to quartz’s original question about rigorous research. I asked for clarification on what quartz meant, and here’s what he said:

In 15 years, I want to see a textbook on the mathematics of FAI that I can put on my bookshelf next to Pearl’s Causality, Sipser’s Introduction to the Theory of Computation and MacKay’s Information Theory, Inference, and Learning Algorithms. This is not going to happen if research of sufficient quality doesn’t start soon.

Now, that sounds wonderful, and I agree that the community of researchers working to reduce existential risks, including Singularity Institute, will need to ramp up their research efforts to achieve that kind of goal.

I will offer just one qualification that I don’t think will be very controversial. I think most people would agree that if a scientist happened to create a synthetic virus that was airborne and could kill hundreds of millions of people if released into the wild, we wouldn’t want the instructions for creating that synthetic virus to be published in the open for terrorist groups or hawkish governments to use. And for the same reasons, we wouldn’t want a Friendly AI textbook to explain how to build highly dangerous AI systems. That exception aside, I would love to see a rigorously technical textbook on friendliness theory, and I agree that friendliness research will need to increase for that textbook to be written within 15 years. Luckily, the Future of Humanity Institute is putting a special emphasis on AI risks for the next little while, and Singularity Institute is ramping up its own research efforts.

But the most important thing I want to say is this. If you can take ideas and arguments that already exist in blog posts, emails, and human brains (for example at Singularity Institute) and turn them into working papers or maybe even journal articles, and you care about navigating the Singularity successfully, please contact me. My email address is luke@intelligence.org. If you’re the kind of person who can do that kind of work, I really want to talk to you.

I’d estimate we have something like 30–40 papers just waiting to be written. The conceptual work has been done; we just need more researchers who can write this stuff up. So if you can do that, you should contact me: luke@intelligence.org.

Friendly AI Sub-Problems

Next question. Less Wrong user ‘XiXiDu’ asks:

If someone as capable as Terence Tao approached [Singularity Institute], asking if they could work full-time and for free on friendly AI, what would you tell them to do? In other words, are there any known FAI sub-problems that demand some sort of expertise that [Singularity Institute] is currently lacking?

Terence Tao is a mathematician at UCLA who was a child prodigy and is considered by some people to be one of the smartest people on the planet. He is exactly the kind of person we need to successfully navigate the Singularity, and in particular to solve open problems in Friendly AI theory.

I explained in my text-only interview with Michael Anissimov in September 2011 that the problem of Friendly AI breaks down into a large number of smaller and better-defined technical sub-problems. Some of the open problems I listed in that interview are the ones I’d love somebody like Terence Tao to work on. For example:

How can an agent make optimal decisions when it is capable of directly editing its own source code, including the source code of the decision mechanism? How can we get an AI to maintain a consistent utility function throughout updates to its ontology? How do we make an AI with preferences about the external world instead of about a reward signal? How can we generalize the theory of machine induction — called Solomonoff induction — so that it can use higher-order logics and reason correctly about observation selection effects? How can we approximate such ideal processes such that they are computable?

(That was a quote from the text-only interview.)
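(For readers who haven’t seen the term before: Solomonoff induction is built on Solomonoff’s universal prior, which is standard textbook material rather than anything specific to Singularity Institute’s research. A minimal statement of it, with U a fixed universal monotone Turing machine and the sum taken over the minimal programs p on which U outputs a string beginning with the observed sequence x, is

M(x) = \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-|p|},

and prediction then proceeds by conditioning: M(x_{n+1} \mid x_1 \ldots x_n) = M(x_1 \ldots x_n x_{n+1}) / M(x_1 \ldots x_n). The open problems quoted above ask, roughly, how to move beyond this incomputable, first-order formalism: to higher-order logics, to observation selection effects, and to computable approximations.)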

But even before that, we’d really like to write up explanations of these problems in all their technical detail; again, though, that takes researchers and funding, and we’re short on both. For now, I’ll point you to Eliezer’s talk at Singularity Summit 2011, which you can Google for.

But yeah, we have a lot of technical problems whose nature we’d like to clarify so that we can have researchers working on them. So we do need potential researchers to contact us.

I loved watching Batman and Superman cartoons when I was a kid, but as it turns out, the heroes who can save the world are not those who have incredible strength or the power of flight. They are mathematicians and computer scientists.

Singularity Institute needs heroes. If you are a brilliant mathematician or computer scientist and you want a shot at saving the world, contact me: luke@intelligence.org.

I know it sounds corny, but I mean it. The world needs heroes.

Improved Funding

Next, Less Wrong user ‘XiXiDu’ asks:

What would [Singularity Institute] do given various amounts of money? Would it make a difference if you had 10 or 100 million dollars at your disposal...?

Yes it would. Absolutely. If Bill Gates decided tomorrow that he wanted to save not just a billion people but the entire human race, and he gave us 100 million dollars, we would hire more researchers and figure out the best way to spend that money. That’s a pretty big project in itself.

But right now, my bet on how we’d end up spending that money is that we would personally argue for our mission to each of the world’s top mathematicians, AI researchers, physicists, and formal philosophers. The Terence Taos and Judea Pearls of the world. And for any of them who could be convinced, we’d be able to offer them enough money to work for us. We’d also hire several successful Oppenheimer-type research administrators who could help us bring these brilliant minds together to work on these problems.

As nice as it is to have people from all over the world solving problems in mathematics, decision theory, agent architectures, and other fields collaboratively over the internet, there are a lot of things you can make move faster when you bring the smartest people in the world into one building and allow them to do nothing else but solve the world’s most important problems.

Rationality

Next. Less Wrong user ‘JoshuaZ’ asks:

A lot of Eliezer’s work has been not at all related strongly to FAI but has been to popularizing rational thinking. In your view, should [Singularity Institute] focus exclusively on AI issues or should it also care about rational issues? In that context, how does Eliezer’s ongoing work relate to [Singularity Institute]?

Yes, it’s a great question. Let me begin with the rationality work.

I was already very interested in rationality before I found Less Wrong and Singularity Institute, but when I first encountered the arguments about intelligence explosion, one of my first thoughts was, “Uh-oh. Rationality is much more important than I had originally thought.”

Why? Intelligence explosion is a mind-warping, emotionally dangerous, intellectually difficult, and very uncertain field in which we don’t get to do a dozen experiments so that reality can beat us over the head with the correct answer. Instead, when it comes to intelligence explosion scenarios, in order to get this right we have to transcend the normal biases, emotions, and confusions of the human mind, and make the right predictions before we can run any experiments. We can’t try an intelligence explosion and see how it turns out.

Moreover, to even understand what the problem is, you’ve got to get past a lot of the usual biases and false-but-common beliefs. So we need a saner world in order to solve these problems, and we need a saner world in order to have a larger community of support for addressing these issues.

And, Eliezer’s choice to work on rationality has paid off. The Sequences, and the Less Wrong community that grew out of them, have been successful. We now have a large and active community of people growing in rationality and spreading it to others, and a subset of that community contributes to progress on problems related to AI. Even Eliezer’s choice to write a rationality fanfiction, Harry Potter and the Methods of Rationality, has — contrary to my expectations — had quite an impact. It is now the most popular Harry Potter fan fiction, I think, and it was responsible for perhaps ¼ or ⅕ of the money raised during the 2011 summer matching challenge, and has brought several valuable new people into our community. Eliezer’s forthcoming rationality books might have a similar type of effect.

But we understand that many people don’t see the connection between rationality and navigating the Singularity successfully the way that we do, so in our strategic plan we explained that we’re working to spin off most of the rationality work to a separate organization. It doesn’t have a name yet, but internally we just call it ‘Rationality Org.’ That way, Singularity Institute can focus on Singularity issues, and the Rationality Org (whatever it comes to be called) can focus on rationality, and people can support them independently. That’s something else Eliezer has been working on, along with a couple of others.

Of course, Eliezer does spend some of his time on AI issues, and he plans to return full-time to AI once Rationality Org is launched. But we need more talented researchers, and other contributions, in order to succeed on AI. Rationality has been helpful in attracting and enhancing a community that helps with those things.

Changing Course

Next. Less Wrong user ‘JoshuaZ’ asks:

...are there specific sets of events (other than the advent of a Singularity) which you think will make [Singularity Institute] need to essentially reevaluate its goals and purpose at a fundamental level?

Yes, and I can give a few examples that I wrote down.

Right now we’re focused on what happens when smarter-than-human intelligence arrives, because the evidence available suggests to us that AI will be more important than other crucial considerations. But suppose we made a series of discoveries that made it unlikely that AI would arrive anytime soon, but very likely that catastrophic biological terrorism was only a decade or two away, for example. In that situation, Singularity Institute would shift its efforts quite considerably.

Another example: if other organizations were doing our work, including Friendly AI research, with better efficiency and at greater scale, then it would make sense to fold Singularity Institute and transfer resources, donors, and staff to those other, more efficient and effective organizations.

If it could be shown that some other process was much better at mobilizing efforts to address the core issues, for example if Giving What We Can (an organization focused on optimal philanthropy) continued doubling each year and spinning off large numbers of skilled people to work on existential risk reduction (as one of the targets of optimal philanthropy), then focusing there for a while could make sense — or at least it might make sense to strip away outreach functions from Singularity Institute, perhaps leaving a core FAI team, and leave outreach to the optimal philanthropy community or something like that.

So, those are just three examples of how things could change, or of discoveries we could make, that would radically shift Singularity Institute’s strategy.

Experimental Research

Next. User ‘XiXiDu’ asks:

Is [Singularity Institute] willing to pursue experimental AI research or does it solely focus on hypothetical aspects?

Experimental research would, at this point, be a diversion from work on the most important problems related to our mission, which are technical problems in mathematics, computer science, and philosophy. If experimental research became more important than those problems, and if we had the funding available to do experiments, then we would do experimental research at that point, or fund somebody else to do it. But those aren’t the most important or most urgent problems that we need to solve right now.

Winning Without Friendly AI

Next. Less Wrong user ‘Wei_Dai’ asks:

Much of [Singularity Institute’s] research [is] focused not directly on [Friendly AI] but more generally on better understanding the dynamics of various scenarios that could lead to a Singularity. Such research could help us realize a positive Singularity through means other than directly building a [Friendly AI].

Does [Singularity Institute] have any plans to expand such research activities, either in house, or by academia or independent researchers?

The answer to that question is ‘Yes’.

Singularity Institute does not put all its eggs in the ‘Friendly AI’ basket. Intelligence explosion scenarios are complicated, the future is uncertain, and the feasibility of many possible strategies is unknown. Both Singularity Institute and our friends at the Future of Humanity Institute at Oxford have done quite a lot of work on these kinds of strategic considerations, things like differential technological development. It’s important work, so we plan to do more of it.

Most of this work, however, hasn’t been published. So if you want to see it published, put us in contact with people who are good at rapidly taking ideas and arguments out of different people’s heads and putting them on paper. Or maybe you are that person! Right now we just don’t have enough researchers to write these things up as much as we’d like. So contact me: luke@intelligence.org.

Conclusion

Well, that’s it! I’m sorry I can’t answer all the questions. Doing this takes a lot more work than you might think, but if it is appreciated, and especially if it helps grow and encourage the community of people who are trying to make the world a better place and reduce existential risk, then I may try to do something like this — maybe without the video, maybe with the video — with some regularity.

Keep in mind that I do have a personal feedback form at tinyurl.com/luke-feedback, where you can send me feedback on myself and Singularity Institute. You can also check the Less Wrong page that will be dedicated to this Q&A and leave some comments there.

Thanks for listening and watching. This is Luke Muehlhauser, signing off.