12/13/2011 - A 2011 update with data from the 2010 fiscal year is in progress. Should be done by the end of the week or sooner.
I am not affiliated with the Singularity Institute for Artificial Intelligence.
I have not donated to the SIAI prior to writing this.
I made this pledge prior to writing this document.
Images are now hosted on LessWrong.com.
The 2010 Form 990 data will be available later this month.
It is not my intent to propagate misinformation. Errors will be corrected as soon as they are identified.
Acting on gwern’s suggestion in his Girl Scout Cookie analysis, I decided to look at SIAI funding. After reading about the Visiting Fellows Program and more recently the Rationality Boot Camp, I decided that the SIAI might be something I would want to support. I am concerned with existential risk and grapple with the utility implications. I feel that I should do more.
I wrote on the mini-boot camp page a pledge that I would donate enough to send someone to rationality mini-boot camp. This seemed to me a small cost for the potential benefit. The SIAI might get better at building rationalists. It might build a rationalist who goes on to solve a problem. Should I donate more? I wasn’t sure. I read gwern’s article and realized that I could easily get more information to clarify my thinking.
So I downloaded the SIAI’s Form 990 annual IRS filings and started to write down notes in a spreadsheet. As I gathered data and compared it to my expectations and my goals, my beliefs changed. I now believe that donating to the SIAI is valuable. I cannot hide this belief in my writing. I simply have it.
My goal is not to convince you to donate to the SIAI. My goal is to provide you with information necessary for you to determine for yourself whether or not you should donate to the SIAI. Or, if not that, to provide you with some direction so that you can continue your investigation.
2002 (Form 990-EZ)
2003 (Form 990-EZ)
2004 (Form 990-EZ)
2005 (Form 990)
2006 (Form 990)
2007 (Form 990)
2008 (Form 990-EZ)
2009 (Form 990)
SIAI Financial Overview
The Singularity Institute for Artificial Intelligence (SIAI) is a public organization working to reduce existential risk from future technologies, in particular artificial intelligence. “The Singularity Institute brings rational analysis and rational strategy to the challenges facing humanity as we develop cognitive technologies that will exceed the current upper bounds on human intelligence.” The SIAI is also the founder of Less Wrong.
The graphs above offer an accurate summary of the SIAI's financial state since 2002. Sometimes the end of year balances listed in the Form 990s don't match what you'd get if you did the math by hand. These are noted as discrepancies between the filed year end balance and the expected year end balance, or between the filed year start balance and the expected year start balance.
Filing Error 1 - There appears to be a minor typo of $4.65 in the end of year balance of the 2004 document. Part I, Line 18 appears to have been summed incorrectly: $32,445.76 is listed, but the expected result is $32,450.41. The Part II balance sheet calculations agree with the erroneous figure, so the source of the error is unclear. The 2005 filing reports the expected start of year balance, so this was probably just a typo in 2004.
Filing Error 2 - The 2006 document reports a year start balance of $95,105.00 when the expected year start balance is $165,284.00, a discrepancy of $70,179.00. This amount is close to the estimated Program Service Accomplishments on the 2005 Form 990 Part III Line F of $72,000.00. It looks like the service expenses were not fully included in Part II. The money is not missing: subsequent forms show the expected values moving forward.
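The balance checks behind both filing errors follow a simple identity: expected year-end balance = year-start balance + revenue − expenses, compared against the filed figure. A minimal sketch of that check (the function names are my own; the only figures used are the 2006 year-start discrepancy discussed above):

```python
def expected_year_end(start_balance: float, revenue: float, expenses: float) -> float:
    """Expected end-of-year balance from the basic accounting identity."""
    return start_balance + revenue - expenses

def discrepancy(filed: float, expected: float) -> float:
    """Difference between the computed expectation and the filed figure."""
    return round(expected - filed, 2)

# The 2006 filing's year-start balance vs. the value expected from the 2005 filing.
print(discrepancy(filed=95_105.00, expected=165_284.00))  # → 70179.0
```

Running the same check against each year's filing is how the discrepancies above were flagged.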
Theft—The organization reported $118,803.00 in theft in 2009 resulting in a year end asset balance lower than expected. The SIAI is currently pursuing legal restitution.
The SIAI has generated a revenue surplus every year except 2008. The 2008 deficit appears to be a cashing out of excess surplus from 2007. Asset growth indicates that the SIAI is good at utilizing the funds it has available without overspending. The organization is expanding its menu of services, but not so fast that it risks going broke.
Nonetheless, the current asset balance is insufficient to sustain a year of operation at the existing rate of expenditure. A significant loss of donation revenue would force a reduction in services. Such a loss may be unlikely, but a reasonable goal for the organization would be to build up a year's reserves.
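One way to make "a year's reserves" concrete is to express the asset balance as months of runway against the annual expense rate. A minimal sketch, using hypothetical placeholder figures rather than numbers from the filings:

```python
def months_of_runway(assets: float, annual_expenses: float) -> float:
    """How many months the organization could operate from reserves alone."""
    return assets / (annual_expenses / 12)

# Hypothetical example: $200k in assets against $400k/year in expenses
# gives six months of runway; a full year's reserve requires assets >= annual expenses.
print(round(months_of_runway(200_000, 400_000), 1))  # → 6.0
```

By this measure, the goal stated above amounts to growing assets until the runway reaches twelve months.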
Revenue is composed of public support, program service (events/conferences held, etc), and investment interest. The “Other” category tends to include Amazon.com affiliate income, etc.
Income from public support has grown steadily with a notable regular increase starting in 2006. This increase is a result of new contributions from big donors. As an example, public support in 2007 is largely composed of significant contributions from Peter Thiel ($125k), Brian Cartmell ($75k), and Robert F. Zahra Jr ($123k) for $323k total in large scale individual contributions (break down below).
In 2007 the SIAI started receiving income from program services. Currently all “Program Service” revenue is from operation of the Singularity Summit. In 2010 the summit generated surplus revenue for the SIAI. This is a significant achievement, as it means the organization has created a sustainable service that could fund further services moving forward.
A specific analysis of the summit is below.
Expenses are composed of grants paid to winners, benefits paid to members, officer compensation, contracts, travel, program services, and an other category.
The contracts column in the chart below includes legal and accounting fees. The other column includes administrative fees and other operational costs. I didn’t see reason to break the columns down further. In many cases the Form 990s provide more detailed itemization. If you care about how much officers spent on gas or when they bought new computers you might find the answers in the source.
I don’t have data for 2000 or 2001, but left the rows in the spreadsheet in case it can be filled in later.
Program expenses have grown over the years, but not unreasonably. Indeed, officer compensation has declined steadily for several years. The grants in 2002, 2003, and 2004 were paid to Eliezer Yudkowsky for work relevant to Artificial Intelligence.
The program expenses category includes operating the Singularity Summit, the Visiting Fellows Program, etc. Some of the cost of these programs is also included in the other category. For example, the 2007 Singularity Summit is reported as costing $101,577.00, but this total is accounted for across multiple sections.
It appears that 2009 was a more productive year than 2008 and also less expensive. 2009 saw a larger Singularity Summit than in 2008 and also the creation of the Visiting Fellows Program.
This is not an exhaustive list of contributions. The SIAI’s 2009 filing details major support donations for several previous years. Contributions in the 2010 column are derived from http://intelligence.org/donors. Known contributions of less than $5,000 are excluded for the sake of brevity. The 2006 donation from Peter Thiel is sourced from a discussion with the SIAI.
Peter Thiel and several other big donors compose the bulk of the organization’s revenue. It would be good to see a broader base of donations moving forward. Note, however, that the base of donations has been improving. I don’t have the 2010 Form 990 yet, but it appears to be the best year yet in terms of both the quantity of donations and the number of individual donors (based on conversation with SIAI members).
From 2002 to 2005 Eliezer Yudkowsky received compensation in the form of grants from the SIAI for AI research. It is noted in the Form 990s that no public funds were used for Eliezer's research grants, as he is also an officer. Starting in 2006, all compensation for key officers is reported as salary instead of grants.
Compensation spiked in 2006, the same year public support greatly increased. Nonetheless, officer compensation has decreased steadily despite continued increases in public support. It appears that the SIAI has been managing its resources carefully in recent years, putting more money into programs than into officer compensation.
Eliezer’s base compensation as salary increased 20% in 2008. It seems reasonable to compare Eliezer’s salary with that of professional software developers. Eliezer would be able to make a fair amount more working in private industry as a software developer.
Mr. Yudkowsky clarifies: “The reason my salary shows as $95K in 2009 is that Paychex screwed up and paid my first month of salary for 2010 in the 2009 tax year. My actual salary was, I believe, constant or roughly so through 2008-2010.” In that case we would expect the 2010 Form 990 to show a salary reduced by one month's pay.
Moving forward, the SIAI will have to grapple with the high cost of recruiting top tier programmers and academics to do real work. I believe this is an argument for the SIAI improving its asset sheet. More money in the bank means more of an ability to take advantage of recruitment opportunities if they present themselves.
Founded in 2006 by the SIAI in cooperation with Ray Kurzweil and Peter Thiel, the Singularity Summit focuses on a broad number of topics related to the Singularity and emerging technologies. (1)
The Singularity Summit was free until 2008 when the SIAI chose to begin charging registration fees and accepting sponsorships. (2)
Attendee counts are estimates drawn from SIAI Form 990 filings. 2010 is purported to be the largest conference so far. Beyond the core conference attendees, hundreds of thousands of online viewers are reached through recordings of the Summit sessions. (A)
The cost of running the summit has increased annually, but revenue from sponsorships and registrations has kept pace. The conference carries logistical and administrative overhead, but it does not materially impact the SIAI budget. This makes the conference a valuable blend of outreach and education. If the conference convinces someone to donate or in some way directly support work against existential risks, the benefits are effectively free (or at the very least come at no cost to other programs).
Is the Singularity Summit successful?
It’s difficult to evaluate the success of conferences. So many of the benefits are realized downstream of the actual event. Nonetheless, the attendee counts and widening exposure seem to bring immense value for the cost. Several factors contribute to a sense that the conference is a success:
In 2010 the Summit became a positive revenue generating exercise in its own right. With careful stewardship, the Singularity Summit could grow to generate a reliable annual revenue for the SIAI.
The ability to run an efficient conference is itself valuable. Should it choose to, the SIAI could run other types of conferences or special interest events in the future with a good expectation of success.
The high visibility of the Summit plants seeds for future fund raising. Conference attendees likely benefit as much or more from networking as they do from the content of the sessions. Networking builds relationships between people able to coordinate to solve problems or fund solutions to problems.
The Singularity Summit has generated ongoing public interest and media coverage. Notable articles can be found in Popular Science (3), Popular Mechanics (4), the Guardian (5), and TIME Magazine (6). Quality media coverage raises public awareness of Singularity-related topics. There is a strong argument that a person interested in raising awareness of futurist or existential risk topics reaches a wide audience by supporting the Singularity Summit.
When discussing “future shock levels”—gaps in exposure to and understanding of futurist concepts—Eliezer Yudkowsky wrote, “In general, one shock level gets you enthusiasm, two gets you a strong reaction—wild enthusiasm or disbelief, three gets you frightened—not necessarily hostile, but frightened, and four can get you burned at the stake.” (7) Most futurists are familiar with this sentiment. Increased public exposure to unfamiliar concepts through the positive media coverage brought about by the Singularity Summit works to improve the legitimacy of those concepts and reduce future shock.
The result is that hard problems get easier to solve. Experts interested in helping, but afraid of social condemnation, will be more likely to do core research. The curious will be further motivated to break problems down. Vague far-mode thinking about future technologies will, for a few, shift into near-mode thinking about solutions. Public reaction to what would otherwise be shocking concepts will shift away from the extreme. The future becomes more conditioned to accept the real work and real costs of battling existential risk.
This is not a complete list of SIAI milestones, but covers quite a few of the materials and events that the SIAI has produced over the years.
RF Eliezer Yudkowsky publishes “A Technical Explanation of Technical Explanation”
RF Eliezer Yudkowsky writes chapters for “Global Catastrophic Risks”
AI and existential risk presentations at Stanford, Immortality Institute’s Life Extension Conference, and the Terasem Foundation.
Fundraising efforts expand significantly.
SIAI hosts the first Singularity Summit at Stanford.
SIAI hosts the Singularity Summit in San Francisco.
SIAI outreach blog is started.
SIAI Interview Series is started.
SIAI introductory video is developed and released.
SIAI hosts the Singularity Summit in San Jose.
SIAI Interview Series is expanded.
SIAI begins its summer intern program.
RF Eliezer Yudkowsky completes the rationality sequences.
Less Wrong is founded.
SIAI hosts the Singularity Summit in New York.
RF Anna Salamon speaks on technological forecasting at the Santa Fe Institute.
SIAI establishes the Visiting Fellows Program. Graduate and undergraduate students in AI-related disciplines develop related talks and papers.
Papers and talks from SIAI fellows produced in 2009:
“Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions”, by Steve Rayhawk, Anna Salamon, Tom McCabe, Rolf Nelson, and Michael Anissimov. (Presented at the European Conference of Computing and Philosophy in July ’09 (ECAP))
“Arms Control and Intelligence Explosions”, by Carl Shulman (Also presented at ECAP)
“Machine Ethics and Superintelligence”, by Carl Shulman and Henrik Jonsson (Presented at the Asia-Pacific Conference of Computing and Philosophy in October ’09 (APCAP))
“Which Consequentialism? Machine Ethics and Moral Divergence”, by Carl Shulman and Nick Tarleton (Also presented at APCAP);
“Long-term AI forecasting: Building methodologies that work”, an invited presentation by Anna Salamon at the Santa Fe Institute conference on forecasting;
“Shaping the Intelligence Explosion” and “How much it matters to know what matters: A back of the envelope calculation”, presentations by Anna Salamon at the Singularity Summit 2009 in October
“Pathways to Beneficial Artificial General Intelligence: Virtual Pets, Robot Children, Artificial Bioscientists, and Beyond”, a presentation by SIAI Director of Research Ben Goertzel at Singularity Summit 2009;
“Cognitive Biases and Giant Risks”, a presentation by SIAI Research Fellow Eliezer Yudkowsky at Singularity Summit 2009;
“Convergence of Expected Utility for Universal Artificial Intelligence”, a paper by Peter de Blanc, an SIAI Visiting Fellow.
* Text for this list of papers reproduced from here.
A list of achievements, papers, and talks from 2010 is pending. See also the Singularity Summit content links above.
Further Editorial Thoughts...
Prior to doing this investigation I had some expectation that the SIAI was a money losing operation. I didn’t expect the Singularity Summit to be making money. I had an expectation that Eliezer probably made around $70k (programmer money discounted for being paid by a non-profit). I figured the SIAI had a broad donor base of small donations. I was off base on all counts.
I had some expectation that the SIAI was a money losing operation.
I had weak confidence in this belief, as I don’t know a lot about the finances of public organizations. The SIAI appears to be managing its cash reserves well. It would be good to see the SIAI build up some asset reserves so that it could operate comfortably in years where public support dips or so that it could take advantage of unexpected opportunities.
Overall, the allocation of funds strikes me as highly efficient.
I didn’t expect the Singularity Summit to be making money.
This was a surprising finding, although I had incorrectly conditioned my expectation on experiences working with game industry conferences. I don't know exactly how much the SIAI is spending on food and fancy tablecloths at the Singularity Summit, but I don't think I care: it's growing and showing better results on the revenue chart each year. If you attend the conference and contribute to the event you add pure value. As discussed above, the benefits of the conference weigh heavily on the “reducing existential risk” side of the ledger. Losing the Summit would be a blow to ensuring a safe future.
I know that the Summit will not itself do the hard work of dissolving and solving problems, or of synthesizing new theories, or of testing those theories, or of implementing solutions. The value of the Summit lies in its ability to raise awareness of the work that needs to be done, to create networks of people to do that work, to lower public shock at the implications of that work, and to generate funding for those doing that work.
I had an expectation that Eliezer probably made around $70k.
Eliezer’s compensation is slightly more than I thought. I’m not sure what upper bound I would have balked at or would balk at. I do have some concern about the cost of recruiting additional Research Fellows. The cost of additional RFs has to be weighed against new programs like Visiting Fellows.
At the same time, the organization has been able to expand services without draining the coffers. A donor can hold a strong expectation that the bulk of their donation will go toward actual work in the form of salaries for working personnel or events like the Visiting Fellows Program.
I figured the SIAI had a broad donor base of small donations.
I must have been out to lunch when making this prediction. I figured the SIAI was mostly supported by futurism enthusiasts and small scale rationalists.
The organization has a heavy reliance on major donor support. I would expect the 2010 filing to reveal a broadening of revenue, but I do not expect the organization to have become independent of big donor support. Big donor support is a good thing to have, but more long term stability would be provided by a broader base of supporters.
My suggestions to the SIAI:
Consider relocating to a cheaper part of the planet. Research Fellows will likely have to accept lower than market average compensation for their work or no compensation at all. Better to live in an area where compensation goes farther.
Consider increasing savings to allow for a larger safety net and the ability to take advantage of opportunities.
Consider itemizing program service expenses in more detail. It isn’t required, but the transparency makes for better decision making on the part of donors.
Consider providing more information on what Eliezer and other Research Fellows are working on from time to time. You are building two communities. A community of polymaths who will solve hard problems and a community of supporters who believe in the efforts of the polymaths. The latter are more likely to continue their support if they have insight into the activities of the former.
John Salvatier provided me with good insight into next steps for gaining further clarity into the SIAI’s operational goals, methodology, and financial standing.
Contact GiveWell for expert advice on organizational analysis to help clarify good next steps.
Get more information on current and forthcoming SIAI research projects. Is there active work in the research areas the SIAI has identified? Is there a game plan for attacking particular problems in the research space?
Spend some time gathering information from SIAI members on how they would utilize new funds. Are there specific opportunities the SIAI has identified? Where is the organization “only limited by a lack of cash”—if they had more funds, what would they immediately pursue?
Formulate methods of validating the SIAI’s execution of goals. It appears that the Summit is an example of efficient execution of the reducing existential risk goal by legitimizing the existential risk and AGI problem space and by building networks among interested individuals. How will donors verify the value of SIAI core research work in coming years?
At present, the financial position of the SIAI seems sound. The Singularity Summit stands as a particular success that should be acknowledged. The organization's ability to reduce officer compensation while expanding its programs is also notable.
Tax documents can only tell us so much. A deeper picture of the SIAI would reveal more of the moving parts within the organization. It would provide a better account of monthly activities and a means to measure future success or failure. For many supporters the question will not be “should I donate?” but “should I continue to donate?”, a question that can be answered by increased and ongoing transparency.
It is important that those who are concerned with existential risks, AGI, and the safety of future technologies and who choose to donate to the SIAI take a role in shaping a positive future for the organization. Donating in support of AI research is valuable, but donating and also telling others about the donation is far more valuable.
Consider the Sequence post ‘Why Our Kind Can’t Cooperate.’ If the SIAI is an organization worth supporting, and given that they are working in a problem space that currently only has strong traction with “our kind,” then there is a risk of the SIAI failing to reach its maximum potential because donors do not coordinate successfully. If you are a donor, stand up and be counted. Post on Less Wrong and describe why you donated. Let the SIAI post your name. Help other donors see that they aren’t acting alone.
Similarly, if you are critical of the SIAI think about why and write it up. Create a discussion and dig into the details. The path most likely to increase existential risk is the one where rational thinkers stay silent.
The SIAI’s current operating budget and donor revenue is very small. It is well within our community’s ability to effect change.
My research has led me to the conclusion I should donate to the SIAI (above my previous pledge in support of rationality boot camp). I already donate to Alcor and am an Alcor member. I have to determine an amount for the SIAI that won’t cause wife aggro. Unilateral household financial decisions increase my personal existential risk. :P I will update this document or make a comment post when I know more.
(A) Summit Content