...given how easy it is to collect karma points simply by praising him even without substantiating the praise...
There is praise everywhere on the Internet, and in the case of Yudkowsky it is very much justified. People do criticize him as well. The problem is with some of the overall conclusions, extraordinary claims, and ideas. They may be few compared to the massive body of writing on rationality, but if they are faulty they can easily outweigh all the other good work.
Note that I am not saying that any of those ideas are wrong, but I think people here are too focused on, and dazzled by, the mostly admirable and overall valuable writings on the basics of rationality.
Really smart and productive people can be wrong, especially if they think they have to save the world. And if someone admits:
I mean, it seems to me that where I think an LW post is important and interesting in proportion to how much it helps construct a Friendly AI, how much it gets people to participate in the human project...
...I am even more inclined to judge the output of that person in the light of his goals.
To put it bluntly, people who focus on unfriendly AI might miss the weak spots that are more likely to be unfriendly humans, or even worse, friendly humans who are wrong.
One of the problems here is that talking about this mostly leads to discord, or to the perception of an attempted ad hominem. It is easy to focus on unfriendly AI, but the critical examination of the motives or beliefs of actual people is hard. And in the case of existential risks it might actually be harmful, if the person is right.
Overall I don’t think that there are any real excuses not to study existential risks. But there are other options, like the Future of Humanity Institute. Currently I would praise everyone who decides to contribute money to the FHI. Not that you can do much wrong by donating money to the SIAI; after all, they contribute to awareness of existential risks. But I haven’t been able to overcome some bad feelings associated with it. And I don’t know how to say this without sounding rude, but the Future of Humanity Institute and Nick Bostrom give a formal/professional appearance that the SIAI and Eliezer Yudkowsky lack. I am sorry, but that is my personal perception. The SIAI and LW sometimes appear completely over the top to me.
(ETA: Please don’t stop writing about ideas that might seem crazy just because of the above. I love that stuff; I am only concerned about the possibility of real-life consequences from people who take those ideas too seriously.)
I don’t know how to say this without sounding rude, but the Future of Humanity Institute and Nick Bostrom give a formal/professional appearance that the SIAI and Eliezer Yudkowsky lack.
There’s some truth to that, but I can’t say I am particularly sold on the FHI either. Yudkowsky seems less deluded about brain emulation than they are. Both organisations are basically doom-mongering, and doom-mongers are not known for their sanity or level-headedness:
History is peppered with false prognostications of imminent doom. Blustering doomsayers are harmful: Not only do they cause unnecessary fear and disturbance, but worse: they deplete our responsiveness and make even sensible efforts to understand or reduce existential risk look silly by association.
It seems difficult to study this subject and remain objective. Those organisations that have tried so far have mostly exaggerated the prospects for the end of the world. They form from those who think the end of the world is more likely than most do, they associate with others of the same mindset, and their funding often depends on how convincing and dramatic a picture of DOOM they can paint. The result tends to be something of a credibility gap.
In what way do you consider them to be deluded about brain emulation?
While I agree that in general, organizations have an incentive to doom-monger in order to increase their funding, I’m not so sure this applies to FHI. They’re an academic department associated with a major university. Presumably their funding is more tied to their academic accomplishments, and academics tend to look down on excessive doom-mongering.
My understanding is that Tim thinks de novo AI is very probably very near, leaving little time for brain emulation, and that far more resources will go into de novo AI, or that incremental insights into the brain would enable AI before emulation becomes possible.
On the other hand, FHI folk are less confident that AI theory will cover all the necessary bases in the next couple decades, while neuroimaging continues to advance apace. If neuroimaging at the relevant level of cost and resolution comes quickly while AI theory moves slowly, processing the insights from brain imaging into computer science may take longer than just running an emulation.
I am under the impression that SIAI is well aware that they could use more appearance of seriousness.

Yeah, discussing rationality in a clown suit is an interesting first step in learning an Aesop about how important it is to focus on the fundamentals over the forms, but you can’t deny it’s unnecessarily distracting, especially to outsiders, i.e. most of humanity and therefore most of the resources we need. BTW, I love the site’s new skin.
Oh, I’m not saying he doesn’t deserve praise; the guy’s works changed my life forever. I’m just saying I got points for praising him without properly justifying it on more than one occasion, which I feel guilty about. I also don’t think he should be bashed for the sake of bashing, or subjected to the kind of gratuitous dissent described in Why Our Kind Can’t Cooperate.