I was going to upvote this comment until I got to the last line. XiXiDu’s email campaign is almost certainly doing more harm than good.
That’s surprising; I found having a set of views from outside the LW/SIAI cluster quite refreshing. What do you think was bad about those? My only quibble would be that I found some of the questions awkward/leading/irrelevant; I would have preferred a better set of questions. But XiXiDu improved them with time.
Those questionnaires are not a particularly good introduction to the LW/SI memespace. I worry that he is therefore making a poor first impression on our behalf, reducing the odds that these people will end up contributing to existential risk reduction and/or friendliness research.
What WrongBot says. I approve of the project of getting outside views, and I approve of the idea of making more AI researchers aware of AI as a possible existential risk. (From what I’ve heard, SIAI is quietly doing this themselves with some of the more influential groups.) But XiXiDu doesn’t understand SIAI’s actual object-level claims, let alone the arguments that link them, and he writes AI researchers in a style that looks crankish.
I can hardly think of a better way to prejudice researchers against genuinely examining their intuitions than a wacky letter asking their opinions on an array of absurd-sounding claims with no coherent structure to them, which is exactly what he presents them with.
But XiXiDu doesn’t understand SIAI’s actual object-level claims, let alone the arguments that link them, and he writes AI researchers in a style that looks crankish.
Agreed—some of his questions were cringe-inducing, but overall, I appreciated that series of posts because it’s interesting to hear what a broad range of AI researchers have to say about the topic; some of the answers were insightful and well-argued.
I agree that sounding crankish could be a problem, but I don’t think XiXiDu was presenting himself as writing in LW/SIAI’s name. Crankiness from some lesswrongers tarring the reputation of Eliezer’s writings is hard to avoid anyway. The main problem is that there’s no clear way to refer to Eliezer’s writings: “The Sequences” is obscure and covers too much material, some of which isn’t Eliezer’s; “Overcoming Bias” worked at the time; and “Less Wrong” is a name that wasn’t even in use when most of the core Sequences were written, and now mostly refers to the community.
1) Luke Muehlhauser, executive director of SIAI, listed my attempt to interview AI researchers in his post ‘Useful Things Volunteers Can Do Right Now’.
2) Before writing anyone I asked for feedback, and I continued to ask for feedback and improved my questions.
3) I directly contacted SIAI via email several times, asking them how to answer replies I got from researchers.
4) All the interviews got highly upvoted.
5) I never claimed to be associated with SIAI.
XiXiDu doesn’t understand SIAI’s actual object-level claims...
I understand them well enough for the purpose of asking researchers a few questions. My karma score was over 5700 at one point. Do you think that would have been possible without a basic understanding of some of the underlying ideas?
...a wacky letter asking their opinions on an array of absurd-sounding claims with no coherent structure to them, which is exactly what he presents them with.
I think this is just unfair. I do not think that my email or the questions I asked were wrong. There is also no way to ask a lot of researchers about this topic without sounding a bit wacky.
All you could do otherwise is 1) tell them to read the Sequences, or 2) not ask them at all and just trust Eliezer Yudkowsky. 1) won’t work, since they have no reason to suspect that Eliezer Yudkowsky has some incredible secret knowledge they don’t. 2) is not an option for me: he could tell me anything about AI and I would have no way to tell whether he knows what he is talking about.
I understand them well enough for the purpose of asking researchers a few questions. My karma score was over 5700 at one point. Do you think that would have been possible without a basic understanding of some of the underlying ideas?
Yes. I attribute my 18k karma to excessive participation. If I didn’t have a clue what I was talking about, it would have taken longer, but I would have collected thousands of karma anyway just by writing many comments with correct grammar.
Karma—that is, total karma of users—means very little.
I’d kinda like to see it expressed as (total karma / total posts); that might help a little bit...