SIAI—An Examination

12/13/2011 - A 2011 update with data from the 2010 fiscal year is in progress. It should be done by the end of the week or sooner.

Disclaimer

Notes

  • Images are now hosted on LessWrong.com.

  • The 2010 Form 990 data will be available later this month.

  • It is not my intent to propagate misinformation. Errors will be corrected as soon as they are identified.

Introduction

Acting on gwern's suggestion in his Girl Scout Cookie analysis, I decided to look at SIAI funding. After reading about the Visiting Fellows Program and, more recently, the Rationality Boot Camp, I decided that the SIAI might be something I would want to support. I am concerned with existential risk and grapple with the utility implications. I feel that I should do more.

On the rationality mini-boot camp page I pledged to donate enough to send someone to the mini-boot camp. This seemed to me a small cost for the potential benefit. The SIAI might get better at building rationalists. It might build a rationalist who goes on to solve a problem. Should I donate more? I wasn't sure. I read gwern's article and realized that I could easily get more information to clarify my thinking.

So I downloaded the SIAI's Form 990 annual IRS filings and started to write down notes in a spreadsheet. As I gathered data and compared it to my expectations and my goals, my beliefs changed. I now believe that donating to the SIAI is valuable. I cannot hide this belief in my writing. I simply have it.

My goal is not to convince you to donate to the SIAI. My goal is to provide you with the information necessary for you to determine for yourself whether or not you should donate to the SIAI. Or, if not that, to provide you with some direction so that you can continue your investigation.

The SIAI's Form 990s are available at GuideStar and Foundation Center. You must register in order to access the files at GuideStar.

  1. 2002 (Form 990-EZ)

  2. 2003 (Form 990-EZ)

  3. 2004 (Form 990-EZ)

  4. 2005 (Form 990)

  5. 2006 (Form 990)

  6. 2007 (Form 990)

  7. 2008 (Form 990-EZ)

  8. 2009 (Form 990)

SIAI Financial Overview

The Singularity Institute for Artificial Intelligence (SIAI) is a public organization working to reduce existential risk from future technologies, in particular artificial intelligence. "The Singularity Institute brings rational analysis and rational strategy to the challenges facing humanity as we develop cognitive technologies that will exceed the current upper bounds on human intelligence." The SIAI is also the founder of Less Wrong.

The graphs above offer an accurate summary of the SIAI's financial state since 2002. Sometimes the end-of-year balances listed in the Form 990s don't match what you'd get if you did the math by hand. These are noted below as discrepancies between the filed year-end balance and the expected year-end balance, or between the filed year-start balance and the expected year-start balance.

  1. Filing Error 1 - There appears to be a minor typo of about $4.65 in the end-of-year balance in the 2004 document. Part I, Line 18 appears to have been summed incorrectly: $32,445.76 is listed, but the expected result is $32,450.41. The Part II balance sheet calculations agree with the erroneous figure, so the source of the error is unclear. The 2005 start-of-year balance reflects the expected value, so this was probably just a typo in 2004. (The reconciliation arithmetic is sketched after this list.)

  2. Filing Error 2 - The 2006 document reports a year-start balance of $95,105.00 when the expected year-start balance is $165,284.00, a discrepancy of $70,179.00. This amount is close to the estimated Program Service Accomplishments of $72,000.00 reported in the 2005 Form 990, Part III, Line F. It looks like the service expenses were not completely included in Part II. The money is not missing: future forms show expected values moving forward.

  3. Theft - The organization reported $118,803.00 in theft in 2009, resulting in a year-end asset balance lower than expected. The SIAI is currently pursuing legal restitution.
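
These reconciliation checks are simple arithmetic against the figures quoted above. A minimal sketch (the helper function and labels are my own, not anything from the filings):

```python
def check_balance(label, filed, expected):
    """Compare a filed Form 990 balance against the value expected from summing by hand."""
    print(f"{label}: filed ${filed:,.2f}, expected ${expected:,.2f}, "
          f"difference ${expected - filed:,.2f}")

# Filing Error 1: 2004 end-of-year balance (Part I, Line 18).
check_balance("2004 year end", filed=32_445.76, expected=32_450.41)

# Filing Error 2: 2006 start-of-year balance vs. the expected carry-over from 2005.
# The roughly $70,179 gap is close to the ~$72,000 of Program Service Accomplishments
# reported in the 2005 Form 990, Part III, Line F.
check_balance("2006 year start", filed=95_105.00, expected=165_284.00)
```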

The SIAI has generated a revenue surplus every year except 2008. The 2008 deficit appears to be a cashing out of excess surplus from 2007. Asset growth indicates that the SIAI is good at utilizing the funds it has available without overspending. The organization is expanding its menu of services, but not so fast that it risks going broke.

Nonetheless, the current asset balance is insufficient to sustain a year of operation at the existing rate of expenditure. A significant loss of revenue from donations would result in a shrinkage of services. Such a loss of revenue may be unlikely, but a reasonable goal for the organization would be to build up a year's reserves.
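
For illustration, a reserve target is easy to express as months of runway. The figures below are placeholders of mine, not numbers from the filings:

```python
def months_of_runway(asset_balance, annual_expenses):
    """How many months of operation current assets would cover at the current spending rate."""
    return 12 * asset_balance / annual_expenses

# Placeholder figures for illustration only; substitute the latest Form 990 numbers.
print(months_of_runway(asset_balance=250_000, annual_expenses=500_000))  # 6.0 months
```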

Revenue

Revenue is composed of public support, program service revenue (events, conferences, etc.), and investment interest. The "Other" category tends to include Amazon.com affiliate income and similar items.

Income from public support has grown steadily, with a notable and sustained increase starting in 2006. This increase is a result of new contributions from big donors. As an example, public support in 2007 is largely composed of significant contributions from Peter Thiel ($125k), Brian Cartmell ($75k), and Robert F. Zahra Jr ($123k), for a total of $323k in large-scale individual contributions (breakdown below).
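
The 2007 large-donor total quoted above is just the sum of the three named contributions. A quick check, using a placeholder for total public support since the full 2007 figure is not reproduced here:

```python
large_donations_2007 = {
    "Peter Thiel": 125_000,
    "Brian Cartmell": 75_000,
    "Robert F. Zahra Jr": 123_000,
}
large_total = sum(large_donations_2007.values())
print(f"Large individual contributions: ${large_total:,}")  # $323,000

total_public_support_2007 = 400_000  # placeholder only, not a figure from the Form 990
print(f"Large-donor share: {large_total / total_public_support_2007:.0%}")
```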

In 2007 the SIAI started receiving income from program services. Currently all "Program Service" revenue is from operation of the Singularity Summit. In 2010 the Summit generated surplus revenue for the SIAI. This is a significant achievement, as it means the organization has created a sustainable service that could fund further services moving forward.

A specific analysis of the Summit appears below.

Expenses

Expenses are composed of grants paid to recipients, benefits paid to members, officer compensation, contracts, travel, program services, and an "other" category.

The contracts column in the chart below includes legal and accounting fees. The other column includes administrative fees and other operational costs. I didn't see reason to break the columns down further. In many cases the Form 990s provide more detailed itemization. If you care about how much officers spent on gas or when they bought new computers, you might find the answers in the source.

I don't have data for 2000 or 2001, but I left the rows in the spreadsheet in case they can be filled in later.

Program expenses have grown over the years, but not unreasonably. Indeed, officer compensation has declined steadily for several years. The grants in 2002, 2003, and 2004 were paid to Eliezer Yudkowsky for work relevant to Artificial Intelligence.

The program expenses category includes operating the Singularity Summit, the Visiting Fellows Program, etc. Some of the cost of these programs is also included in the other category. For example, the 2007 Singularity Summit is reported as costing $101,577.00, but this total is accounted for across multiple sections.

It appears that 2009 was both more productive and less expensive than 2008. 2009 saw a larger Singularity Summit than 2008 as well as the creation of the Visiting Fellows Program.

Big Donors

This is not an exhaustive list of contributions. The SIAI's 2009 filing details major support donations for several previous years. Contributions in the 2010 column are derived from http://intelligence.org/donors. Known contributions of less than $5,000 are excluded for the sake of brevity. The 2006 donation from Peter Thiel is sourced from a discussion with the SIAI.
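
A minimal sketch of how a table like this can be assembled from a donation list. The file name and column names are assumptions of mine; only the $5,000 cutoff comes from the text above:

```python
import csv
from collections import defaultdict

THRESHOLD = 5_000  # known contributions below this are excluded for brevity

totals = defaultdict(float)  # (donor, year) -> total contributed
with open("siai_donations.csv", newline="") as f:  # hypothetical export of the donor data
    for row in csv.DictReader(f):                  # assumed columns: donor, year, amount
        totals[(row["donor"], int(row["year"]))] += float(row["amount"])

# Print big donors grouped by year, largest contributions first.
for (donor, year), amount in sorted(totals.items(), key=lambda kv: (kv[0][1], -kv[1])):
    if amount >= THRESHOLD:
        print(f"{year}  {donor:<25} ${amount:,.0f}")
```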

Peter Thiel and several other big donors compose the bulk of the organization's revenue. It would be good to see a broader base of donations moving forward. Note, however, that the base of donations has been improving. I don't have the 2010 Form 990 yet, but 2010 appears to be the best year yet in terms of both the quantity of donations and the number of individual donors (based on conversations with SIAI members).

Officer Compensation

From 2002 to 2005 Eliezer Yudkowsky received compensation in the form of grants from the SIAI for AI research. It is noted in the Form 990s that no public funds were used for Eliezer's research grants, as he is also an officer. Starting in 2006, all compensation for key officers is reported as salary instead of as grants.

Compensation spiked in 2006, the same year public support greatly increased. Since then, officer compensation has decreased steadily despite continued increases in public support. It appears that the SIAI has been managing its resources carefully in recent years, putting more money into programs than into officer compensation.

Eliezer's base compensation as salary increased 20% in 2008. It seems reasonable to compare Eliezer's salary with that of professional software developers. Eliezer would be able to make a fair amount more working in private industry as a software developer.

Mr. Yudkowsky clarifies: "The reason my salary shows as $95K in 2009 is that Paychex screwed up and paid my first month of salary for 2010 in the 2009 tax year. My actual salary was, I believe, constant or roughly so through 2008-2010." In that case we would expect the 2010 Form 990 to show a salary reduced by one month.
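
Taking that explanation at face value, a back-of-the-envelope check (my own arithmetic, under the assumption that thirteen months of a roughly constant salary landed in the 2009 tax year):

```python
reported_2009 = 95_000   # salary shown for 2009
months_in_2009 = 13      # assumption: one month of 2010 pay was issued in 2009
monthly = reported_2009 / months_in_2009

print(f"Implied monthly salary: ${monthly:,.0f}")                 # about $7,300
print(f"Expected 2010 figure (11 months): ${11 * monthly:,.0f}")  # about $80,400
```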

Moving forward, the SIAI will have to grapple with the high cost of recruiting top-tier programmers and academics to do real work. I believe this is an argument for the SIAI improving its asset sheet. More money in the bank means more of an ability to take advantage of recruitment opportunities if they present themselves.

Singularity Summit

Founded in 2006 by the SIAI in cooperation with Ray Kurzweil and Peter Thiel, the Singularity Summit focuses on a broad range of topics related to the Singularity and emerging technologies. (1)

The Singularity Summit was free until 2008, when the SIAI began charging registration fees and accepting sponsorships. (2)

Attendee counts are estimates drawn from SIAI Form 990 filings. 2010 is purported to be the largest conference so far. Beyond the core conference attendees, hundreds of thousands of online viewers are reached through recordings of the Summit sessions. (A)

The cost of running the Summit has increased annually, but revenue from sponsorships and registration has kept pace. The conference has logistical and administrative costs, but it has little net impact on the SIAI budget. This makes the conference a valuable blend of outreach and education. If the conference convinces someone to donate or in some way directly support work against existential risks, the benefits are effectively free (or at the very least come at no cost to other programs).

Is the Singularity Summit successful?

It is difficult to evaluate the success of conferences, since so many of the benefits are realized downstream of the actual event. Nonetheless, the attendee counts and widening exposure seem to bring immense value for the cost. Several factors contribute to a sense that the conference is a success:

  • In 2010 the Summit became a positive revenue-generating exercise in its own right. With careful stewardship, the Singularity Summit could grow to generate reliable annual revenue for the SIAI.

  • The ability to run an efficient conference is itself valuable. Should it choose to, the SIAI could run other types of conferences or special interest events in the future with a good expectation of success.

  • The high visibility of the Summit plants seeds for future fundraising. Conference attendees likely benefit as much or more from networking as they do from the content of the sessions. Networking builds relationships between people able to coordinate to solve problems or fund solutions to problems.

  • The Singularity Summit has generated ongoing public interest and media coverage. Notable articles can be found in Popular Science (3), Popular Mechanics (4), the Guardian (5), and TIME Magazine (6). Quality media coverage raises public awareness of Singularity-related topics. There is a strong argument that a person interested in futurist or existential risk consciousness-raising reaches a wide audience by supporting the Singularity Summit.

When discussing "future shock levels"—gaps in exposure to and understanding of futurist concepts—Eliezer Yudkowsky wrote, "In general, one shock level gets you enthusiasm, two gets you a strong reaction—wild enthusiasm or disbelief, three gets you frightened—not necessarily hostile, but frightened, and four can get you burned at the stake." (7) Most futurists are familiar with this sentiment. Increased public exposure to unfamiliar concepts, through the positive media coverage brought about by the Singularity Summit, works to improve the legitimacy of those concepts and reduce future shock.

The result is that hard problems get easier to solve. Experts interested in helping, but afraid of social condemnation, will be more likely to do core research. The curious will be further motivated to break problems down. Vague far-mode thinking about future technologies will, for a few, shift into near-mode thinking about solutions. Public reaction to what would otherwise be shocking concepts will shift away from the extreme. The future becomes more conditioned to accept the real work and real costs of battling existential risk.

SIAI Milestones

This is not a complete list of SIAI milestones, but it covers quite a few of the materials and events that the SIAI has produced over the years.

2005

2006

  • Fundraising efforts expand significantly.

  • SIAI hosts the first Singularity Summit at Stanford.

2007

2008

  • SIAI hosts the Singularity Summit in San Jose.

  • SIAI Interview Series is expanded.

  • SIAI begins its summer intern program.

2009

Significant detail on 2009 achievements is available here. More publications are available here.

  • RF Eliezer Yudkowsky completes the rationality sequences.

  • Less Wrong is founded.

  • SIAI hosts the Singularity Summit in New York.

  • RF Anna Salamon speaks on technological forecasting at the Santa Fe Institute.

  • SIAI establishes the Visiting Fellows Program. Graduate and undergraduate students within AI-related disciplines develop related talks and papers.

Papers and talks from SIAI fellows produced in 2009:

  1. "Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions", by Steve Rayhawk, Anna Salamon, Tom McCabe, Rolf Nelson, and Michael Anissimov. (Presented at the European Conference of Computing and Philosophy in July '09 (ECAP))

  2. "Arms Control and Intelligence Explosions", by Carl Shulman (Also presented at ECAP)

  3. "Machine Ethics and Superintelligence", by Carl Shulman and Henrik Jonsson (Presented at the Asia-Pacific Conference of Computing and Philosophy in October '09 (APCAP))

  4. "Which Consequentialism? Machine Ethics and Moral Divergence", by Carl Shulman and Nick Tarleton (Also presented at APCAP)

  5. "Long-term AI forecasting: Building methodologies that work", an invited presentation by Anna Salamon at the Santa Fe Institute conference on forecasting

  6. "Shaping the Intelligence Explosion" and "How much it matters to know what matters: A back of the envelope calculation", presentations by Anna Salamon at the Singularity Summit 2009 in October

  7. "Pathways to Beneficial Artificial General Intelligence: Virtual Pets, Robot Children, Artificial Bioscientists, and Beyond", a presentation by SIAI Director of Research Ben Goertzel at Singularity Summit 2009

  8. "Cognitive Biases and Giant Risks", a presentation by SIAI Research Fellow Eliezer Yudkowsky at Singularity Summit 2009

  9. "Convergence of Expected Utility for Universal Artificial Intelligence", a paper by Peter de Blanc, an SIAI Visiting Fellow.

* Text for this list of papers reproduced from here.

A list of achievements, papers, and talks from 2010 is pending. See also the Singularity Summit content links above.

Further Editorial Thoughts...

Prior to doing this investigation I had some expectation that the SIAI was a money-losing operation. I didn't expect the Singularity Summit to be making money. I had an expectation that Eliezer probably made around $70k (programmer money, discounted for being paid by a non-profit). I figured the SIAI had a broad donor base of small donations. I was off base on all counts.

I had some expectation that the SIAI was a money-losing operation.

I had weak confidence in this belief, as I don't know a lot about the finances of public organizations. The SIAI appears to be managing its cash reserves well. It would be good to see the SIAI build up some asset reserves so that it could operate comfortably in years where public support dips, or so that it could take advantage of unexpected opportunities.

Overall, the allocation of funds strikes me as highly efficient.

I didn't expect the Singularity Summit to be making money.

This was a surprising finding, although I had incorrectly conditioned my expectation on experiences working with game industry conferences. I don't know exactly how much the SIAI is spending on food and fancy tablecloths at the Singularity Summit, but I don't think I care: the Summit is growing and showing better results on the revenue chart each year. If you attend the conference and contribute to the event, you add pure value. As discussed above, the benefits of the conference in the "reducing existential risk" category are largely realized far downstream of the event itself. Losing the Summit would be a blow to ensuring a safe future.

I know that the Summit will not itself do the hard work of dissolving and solving problems, or of synthesizing new theories, or of testing those theories, or of implementing solutions. The value of the Summit lies in its ability to raise awareness of the work that needs to be done, to create networks of people to do that work, to lower public shock at the implications of that work, and to generate funding for those doing that work.

I had an expectation that Eliezer probably made around $70k.

Eliezer's compensation is slightly more than I thought. I'm not sure what upper bound I would have balked at or would balk at. I do have some concern about the cost of recruiting additional Research Fellows. The cost of additional RFs has to be weighed against new programs like the Visiting Fellows Program.

At the same time, the organization has been able to expand services without draining the coffers. A donor can hold a strong expectation that the bulk of their donation will go toward actual work, in the form of salaries for working personnel or programs like the Visiting Fellows Program.

I figured the SIAI had a broad donor base of small donations.

I must have been out to lunch when making this prediction. I figured the SIAI was mostly supported by futurism enthusiasts and rationalists making small donations.

The organization has a heavy reliance on major donor support. I would expect the 2010 filing to reveal a broadening of revenue, but I do not expect the organization to have become independent of big donor support. Big donor support is a good thing to have, but more long-term stability would be provided by a broader base of supporters.

My suggestions to the SIAI:

  • Consider relocating to a cheaper part of the planet. Research Fellows will likely have to accept lower-than-market-average compensation for their work, or no compensation at all. Better to live in an area where that compensation goes further.

  • Consider increasing savings to allow for a larger safety net and the ability to take advantage of opportunities.

  • Consider itemizing program service expenses in more detail. It isn't required, but the transparency makes for better decision making on the part of donors.

  • Consider providing more information on what Eliezer and other Research Fellows are working on from time to time. You are building two communities: a community of polymaths who will solve hard problems, and a community of supporters who believe in the efforts of the polymaths. The latter are more likely to continue their support if they have insight into the activities of the former.

Moving forward:

John Salvatier provided me with good insight into next steps for gaining further clarity into the SIAI's operational goals, methodology, and financial standing.

  • Contact GiveWell for expert advice on organizational analysis to help clarify good next steps.

  • Get more information on current and forthcoming SIAI research projects. Is there active work in the research areas the SIAI has identified? Is there a game plan for attacking particular problems in the research space?

  • Spend some time gathering information from SIAI members on how they would utilize new funds. Are there specific opportunities the SIAI has identified? Where is the organization "only limited by a lack of cash"? If they had more funds, what would they immediately pursue?

  • Formulate methods of validating the SIAI's execution of its goals. The Summit appears to be an example of efficient execution of the existential risk reduction goal, legitimizing the existential risk and AGI problem space and building networks among interested individuals. How will donors verify the value of the SIAI's core research work in coming years?

Conclusion

At present, the financial position of the SIAI seems sound. The Singularity Summit stands as a particular success that should be acknowledged. The organization's ability to reduce officer compensation at the same time it expands programs is also notable.

Tax documents can only tell us so much. A deeper picture of the SIAI would reveal more of the moving parts within the organization. It would provide a better account of monthly activities and a means to measure future success or failure. The question for many supporters will not be "should I donate?" but "should I continue to donate?", a question best answered by increased and ongoing transparency.

It is important that those who are concerned with existential risks, AGI, and the safety of future technologies, and who choose to donate to the SIAI, take a role in shaping a positive future for the organization. Donating in support of AI research is valuable, but donating and also telling others about the donation is far more valuable.

Consider the Sequence post 'Why Our Kind Can't Cooperate.' If the SIAI is an organization worth supporting, and given that they are working in a problem space that currently only has strong traction with "our kind," then there is a risk of the SIAI failing to reach its maximum potential because donors do not coordinate successfully. If you are a donor, stand up and be counted. Post on Less Wrong and describe why you donated. Let the SIAI post your name. Help other donors see that they aren't acting alone.

Similarly, if you are critical of the SIAI, think about why and write it up. Create a discussion and dig into the details. The path most likely to increase existential risk is the one where rational thinkers stay silent.

The SIAI's current operating budget and donor revenue are very small. It is well within our community's ability to effect change.

My research has led me to the conclusion that I should donate to the SIAI (above my previous pledge in support of rationality boot camp). I already donate to Alcor and am an Alcor member. I have to determine an amount for the SIAI that won't cause wife aggro. Unilateral household financial decisions increase my personal existential risk. :P I will update this document or make a comment post when I know more.

References:

My working spreadsheet is here.

(1) http://www.singularitysummit.com/

(2) http://lesswrong.com/lw/ts/singularity_summit_2008/

(3) http://www.popsci.com/scitech/article/2009-10/singularity-summit-2009-singularity-near

(4) http://www.popularmechanics.com/technology/engineering/robots/4332783

(5) http://www.guardian.co.uk/technology/2008/nov/06/artificialintelligenceai-engineering

(6) http://www.time.com/time/health/article/0,8599,2048138-1,00.html

(7) http://www.sl4.org/shocklevels.html

(A) Summit Content