2012 Winter Fundraiser for the Singularity Institute

Cross-posted here.

(The Singularity Institute maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried SI staff members.)

Thanks to the generosity of several major donors, every donation to the Singularity Institute made from now until January 20th (deadline extended from the 5th) will be matched dollar-for-dollar, up to a total of $115,000! So please, donate now!

Now is your chance to double your impact while helping us raise up to $230,000 to help fund our research program.

(If you’re unfamiliar with our mission, please see our press kit and read our short research summary: Reducing Long-Term Catastrophic Risks from Artificial Intelligence.)

Now that Singularity University has acquired the Singularity Summit, and SI’s interests in rationality training are being developed by the now-separate CFAR, the Singularity Institute is making a major transition. Most of the money from the Summit acquisition is being placed in a separate fund for a Friendly AI team, and therefore does not support our daily operations or other programs.

For 12 years we’ve largely focused on movement-building — through the Singularity Summit, Less Wrong, and other programs. This work was needed to build up a community of support for our mission and a pool of potential researchers for our unique interdisciplinary work.

Now, the time has come to say “Mission Accomplished Well Enough to Pivot to Research.” Our community of supporters is now large enough that qualified researchers are available for us to hire — if we can afford to do so. Having published 30+ research papers and dozens more original research articles on Less Wrong, we certainly haven’t neglected research. But in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research.

Accomplishments in 2012

Future Plans You Can Help Support

In the coming months, we plan to do the following:

  • As part of Singularity University’s acquisition of the Singularity Summit, we will be changing our name and launching a new website.

  • Eliezer will publish his sequence Open Problems in Friendly AI.

  • We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for many of our core materials, to make them more accessible: The Sequences, 2006–2009, Facing the Singularity, and The Hanson-Yudkowsky AI Foom Debate.

  • We will publish several more research papers, including “Responses to Catastrophic AGI Risk: A Survey” and a short, technical introduction to timeless decision theory.

  • We will set up the infrastructure required to host a productive Friendly AI team and try hard to recruit enough top-level math talent to launch it.

(Other projects are still being surveyed for likely cost and strategic impact.)

We appreciate your support for our high-impact work! Donate now, and seize a better-than-usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.

$115,000 of total matching funds has been provided by Edwin Evans, Mihaly Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer.

I will mostly be traveling (for AGI-12) for the next 25 hours, but I will try to answer questions after that.