My comment in June was in response to Normal_Anomaly’s comment:
Count me as another person who would switch some of my charitable contribution from VillageReach to SIAI if I had more information on this subject [what research will be done with donated funds].
I replied:
the most exciting developments in this space in years (to my knowledge) are happening right now, but it will take a while for things to happen and be announced.
To my memory, I had two things in mind:
1. The Strategic Plan I was then developing, which does a better job of communicating what SIAI will do with donated funds than ever before. This was indeed board-ratified and published.
2. A greater push from SIAI to publish its research.
The second one takes longer but is in progress. We do have several chapters forthcoming in The Singularity Hypothesis volume from Springer, as well as other papers in the works. We have also been actively trying to hire more researchers. I was the first such hire, and have 1-4 papers/chapters on the way, but am now Executive Director. We tried to hire a few other researchers, but they did not work out. Recruiting researchers to work on these problems has been difficult for both SIAI and FHI, but we continue to try.
Mostly, we need (1) more funds, and (2) smart people who not only say they think AI risk is the most important problem in the world, but who are willing to make large life changes as if those words reflect their actual anticipations. (Of course, I don't mean that coming to work for the Singularity Institute is the rational choice for every smart researcher who cares about AI risk, but it should be for some of them.)
I’ll answer this one here.
What sort of life changes?
For example, moving to the Bay Area to be paid to work on particular sub-problems of Friendly AI.
Or at the very least, doing some of these small tasks.