Another source of perspective is the fact that Eliezer has built a seriously substantial movement around research into engineering safety mechanisms into advanced machine agents, and he did this by blogging about rationality for two years and then writing a Harry Potter fanfiction over the succeeding five years.
SIAI started before the rationality blogging. Vernor Vinge warned about AI causing the end of the human race back in 1993.
I have difficulty accepting that a substantial portion of FAI researchers were drawn to the subject by HPMOR.
(Of course, FAI researchers, LWers and HPMOR fans are distinct groups of people)
Information on the history of the MIRI from 2002 through 2006 is sparse, at least as gleaned from the organization’s Wikipedia page. As the SIAI in 2006, they successfully raised $200,000 in a donation campaign, with $100,000 of that matched by Peter Thiel. In the years since, the MIRI seems to have held fundraisers at least once annually that turned out just as successful. “The Sequences” had scarcely been started in 2006, so I don’t know whether Peter Thiel got wind of Eliezer’s ideas and organization on SL4, or Overcoming Bias, or what. Anyway, while Vinge, and I. J. Good earlier still, warned against the dangers of machine superintelligence, Eliezer founded a research organization aimed at solving this problem, formulated its mission, and popularized it through his meetings. I’m using metrics such as the raised profile of risks from machine intelligence, and the amount of vocal support and donations the MIRI receives, as a proxy for how much they and Eliezer specifically have raised the profile of this field of inquiry and concern. I assume others would not have done so much for the MIRI if they didn’t believe in its mission. Most of the recent coverage should probably be attributed to Nick Bostrom and his recent book, though.
At the 2014 Effective Altruism Summit, Eliezer reported there are only four full-time FAI researchers in the world: himself, Nate Soares, and Benja Fallenstein of the MIRI, and Stuart Armstrong of the FHI. I was incredulous, and guessed Eliezer’s definition of ‘FAI researcher’ was more stringent than most sensible people would use, so I asked Luke Muehlhauser for clarification. He remarked that, beyond those four, Paul Christiano might count as ‘half an FAI researcher’, because he spends a portion of his time as a mathematician at UCB working on mathematics in line with the MIRI’s research agenda. The MIRI has since hired Patrick LaVictoire, and perhaps others.
The point is, the MIRI itself thinks there are fewer than a dozen FAI researchers. For all we know, all FAI researchers might be users of LessWrong and HPMoR fans. I could ask all of the known “FAI researchers” whether they were first introduced to these research ideas through LessWrong or through HPMoR. That might indeed be a “substantial portion”. You or I might qualify “FAI researcher” differently, but Eliezer by his own admission believes writing more HPMoR is one of the surprisingly most effective ways to draw Math Olympiad contestants’ attention to their research, as does the MIRI.
I.J. Good also warned about it in the 1960s.
Indeed, although since this was before the internet, it didn’t start any sort of movement.
That may not be the only reason it didn’t get off the ground as a movement. Movements have existed before the internet. However, the internet may matter in a different way: a world with the internet and modern computers may make something like a superintelligent AI more viscerally plausible as a possibility.
Movements certainly have existed before the net, but generally where there is a high enough density of potential members to organise via word of mouth and print media. With the possible exception of a few places such as Silicon Valley, I don’t think that exists in this case.
I do agree with you that in many ways superintelligence seems more plausible given modern technology, but OTOH people are cautious after the AI winters.