This comment will be downvoted, but I hope you people will actually explain yourselves and not just click ‘Vote down’; every bot can do that.
Now that I’ve slept, I have read your comment again and I don’t see any justification for why it got upvoted even once. I never claimed that EY can’t ask for money; you are creating a straw man there. You also do not know what I expect from other organisations. Further, it is not fallacious to suspect that Yudkowsky bears some responsibility if people get nightmares from ideas that he would be able to resolve. If he really believes those things, it is of course his right to proclaim them. But the gist of my comment was to inquire about the foundations of those beliefs and to state that they do not appear to me to be based on evidence, which makes it legally permissible but ethically irresponsible to tell people to worry to such an extent, or even to refrain from telling them not to worry.
I rather suspect that if all those demands were met, you would go ahead and find new rhetorical demands to make.
I just don’t know how to parse this. I meant what I asked for, and I am not asking for certainty here. I’m not doubting evolution or climate change. The problem is that even a randomly picked research paper likely contains more analysis, evidence, and references than all of LW and the SIAI’s documents combined on the risks posed by recursive self-improvement in artificial general intelligence.
That quote is out of context.
The quotes were relevant because they showed that Yudkowsky clearly believes in his own intellectual and epistemic superiority, yet any corroborating evidence seems to be missing. Yes, there is a huge body of writing on rationality and some miscellaneous musings on artificial intelligence. But given the weight he assigns to the idea of risks from AGI, that material is just the cherry on top of marginal issues that do not support the conclusions.
Speak for yourself. I don’t have any difficulty comprehending the premises, either the ones you have questioned here or the others required to make an adequate evaluation for the purpose of decision making.
I don’t have any difficulty comprehending them either. I’m questioning the propositions, the conclusions drawn, and the further speculations based on those premises.
Neither I, nor Eliezer, nor the SIAI need to force an understanding of the Scary Idea upon you for it to be rational for us to place credence in it.
This is ridiculous. I never said you are forced to explain yourself. You are forced to explain yourself only if you want people like me to take you seriously.
The quotes were relevant because they showed that Yudkowsky clearly believes in his own intellectual and epistemic superiority, yet any corroborating evidence seems to be missing. Yes, there is a huge body of writing on rationality and some miscellaneous musings on artificial intelligence. [...]
Yudkowsky is definitely a clever fellow. He may not have fancy qualifications—and he is far from infallible—but he is pretty smart.
In the particular post in question, I am pretty sure he was being silly—which is a rather unfortunate time to be claiming superiority.
However, I don’t really know. The stunt created intrigue, mystery, and a sense of the forbidden, and added to the controversy. Overall, Yudkowsky is pretty good at marketing—and maybe this was a taste of it.
I wonder whether his Harry Potter fan-fic is marketing—or, if not, how he justifies it.
This is ridiculous. I never said you are forced to explain yourself. You are forced to explain yourself only if you want people like me to take you seriously.
If you had restricted your claim in that way (i.e. not made the claim that I quoted in the above context), then I would have agreed with you.
I cannot account for every possible interpretation of what I write in a comment. It is reasonable not to infer oughts from questions. I said:
This is a community devoted to refining the art of rationality. How is it rational to believe the Scary Idea without being able to tell if it is more than an idea?
That is, if you can’t explain why you hold certain extreme beliefs, then how is it rational for me to believe that the credence you place in them is justified? The best response you came up with was telling me that you are able to understand, and that you don’t have to force this understanding onto me to believe it yourself. That is a very poor argument, and that is what I called ridiculous. Even more so as people voted it up, which is just sad.
I thought this had been sufficiently clear from what I wrote before.
That is a very poor argument, and that is what I called ridiculous. Even more so as people voted it up, which is just sad.
And it is at this point in the process that an accomplished rationalist says to himself, “I am confused”, and begins to learn.
My impression is that you and Wedrifid are talking past each other. You think that you both are arguing about whether uFAI is a serious existential risk. Wedrifid isn’t even concerned with that. He is concerned with “process questions”—with the analysis of the dialog that you two are conducting, rather than the issue of uFAI risk. And the reason he is being upvoted is because this forum, believe it or not, is a process question forum. It is about rationality, not about AI. Many people here really aren’t that concerned about whether Goertzel or Yudkowsky has a better understanding of uFAI risks. They just have a visceral dislike of rhetorical questions.
If you want to see the standard arguments in favor of the Scary Idea, follow Louie’s advice and read the papers at the SIAI web site. But if you find those arguments unsatisfactory (and I suspect you will), exercise some care if you come looking for a debate on the question here on Less Wrong. Because not everyone who engages with you here will be engaging you on the issue that you want to talk about.
Many people here really aren’t that concerned about whether Goertzel or Yudkowsky has a better understanding of uFAI risks.
I am somewhat more interested in understanding why Goertzel would say what he says about AI. Just saying ‘Goertzel’s brain doesn’t appear to work right’ isn’t interesting. But the Hansonian signalling motivations behind academic posturing are more so.
(Although, to be more precise, I don’t have a visceral dislike of rhetorical questions per se. It is the use of rhetoric to subvert reason that produces the visceral reaction, not the rhetoric(al question) itself.)
Well said.