[This comment is a response to the original post, but seemed to fit best here.]
I upvoted the OP for raising interesting questions that will come up often and deserve an accessible answer. It would help if someone could put together, or point to, a reading guide with references.
On the crackpot index, the claim that everyone else got it wrong deserves to raise a red flag, but that does not mean the claim is wrong. There are way too many examples of this in the world. (To quote Eliezer: ‘yes, people really are that stupid’.)
Read “The Checklist Manifesto” by Atul Gawande for a real-life example that is ridiculously simple to understand. (Really, read it. It is also entertaining!)
Look at the history of science. Consider the treatment Semmelweis got for suggesting that doctors wash their hands before operations. You find lots of examples where plain, simple ideas were ridiculed. So yes, it can happen that a whole profession develops a blind spot, and for every change there has to be someone who tries it out first.
The degree to which research is not done well is a matter of judgment.
Now, it might be helpful to start out with more immediately applicable ideas, like improving the toolset for real-life problems.
You don’t have to care about the Singularity to care about other LW content, like self-debiasing or winning.
Regarding the donation aspect, it seems that rationalists are particularly bad at supporting their own causes. You might estimate how much effort you spend checking out any charity you already support, and then try not to demand higher standards of this one.
Yes, but there are also many examples of people coming up with the same idea or conclusion at the same time. Take, for example, A. N. Kolmogorov and Gregory Chaitin, who independently proposed the same definition of randomness.
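(For readers unfamiliar with it, a rough sketch of that shared definition, in standard notation rather than either author’s exact wording: a string x is called random when its Kolmogorov complexity K(x), the length of the shortest program that outputs x, satisfies K(x) ≥ |x| − c for some small constant c. In other words, a random string admits no description substantially shorter than itself.)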
The circumstances regarding Eliezer Yudkowsky are, however, different. Other people came up with the ideas he uses to support his pronouncements. Some of those people even drew similar inferences, yet they do not ask for donations to stop an otherwise inevitable apocalypse.
Your argument does not seem to work. I pointed out that there is stupidity among professionals, but I made no claim that there is only stupidity. So your examples do not disprove the point.
It is nice when people come up with similar things, especially if they happen to be correct, but it is by no means to be expected in every case.
Would you be interested in taking specific pieces apart and/or arguing them?
The argument was that Eliezer Yudkowsky, to my knowledge, has not come up with anything unique. The ideas on which the SIAI is based, and for which it asks for donations, are not new. Given the basic idea of superhuman AI and the widespread awareness of it, I thought it was not unreasonable to inquire about the state of the activists trying to prevent it.
Are you trying to disprove an argument I made? I asked for an explanation; I wasn’t stating some insight about why the SIAI is wrong.
Is Robin Hanson donating most of his income to the SIAI?
You might estimate how much effort you spend checking out any charity you already support, and then try not to demand higher standards of this one.
While it is silly to apply efficacy standards to charity selectively (giving to inefficient charities without thinking, then rejecting much more efficient ones on the grounds that they are not maximal [compared to what better choice?]), it is far better to apply the same high standards across the board than low ones.