I don’t know how to say this without sounding rude, but the Future of Humanity Institute and Nick Bostrom give a formal/professional appearance that the SIAI and Eliezer Yudkowsky lack.
There’s some truth to that, but I can’t say I am particularly sold on the FHI either. Yudkowsky seems less deluded about brain emulation than they are. Both organisations are basically doom-mongering, and doom-mongers are not known for their sanity or level-headedness:
History is peppered with false prognostications of imminent doom. Blustering doomsayers are harmful: not only do they cause unnecessary fear and disturbance, but worse, they deplete our responsiveness and make even sensible efforts to understand or reduce existential risk look silly by association.
It seems difficult to study this subject and remain objective. The organisations that have tried so far have mostly exaggerated the prospects for the end of the world. They form from people who think the end of the world is more likely than most, who then associate with others of the same mindset, and whose funding often depends on how convincing and dramatic a picture of DOOM they can paint. The result tends to be something of a credibility gap.
In what way do you consider them to be deluded about brain emulation?
While I agree that in general, organizations have an incentive to doom-monger in order to increase their funding, I’m not so sure this applies to FHI. They’re an academic department associated with a major university. Presumably their funding is more tied to their academic accomplishments, and academics tend to look down on excessive doom-mongering.
My understanding is that Tim thinks de novo AI is very probably quite near, leaving little time for brain emulation; that far more resources will go into de novo AI; or that incremental insights into the brain will enable AI before emulation becomes possible.
On the other hand, FHI folk are less confident that AI theory will cover all the necessary bases in the next couple decades, while neuroimaging continues to advance apace. If neuroimaging at the relevant level of cost and resolution comes quickly while AI theory moves slowly, processing the insights from brain imaging into computer science may take longer than just running an emulation.