I actually loved reading it. Some of those are up there among my favorite EY quotes: arrogant, sometimes needing context to make sense, and sometimes best left unsaid for practical reasons, but still brilliant. For example:
I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them—just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified.
There is also a quote there that I agree should remain visible, to Eliezer’s shame, until such time as he swallows his ego and publicly admits that it was an utterly idiotic way to behave. Then there is at least one quote which really deserves a disclaimer in a footnote, given that EY has already written an entire sequence admitting how stupid he was to think the way he did when he wrote it!
I was actually rather disappointed when the list ran to only a page or two. I was looking forward to reading all the highlights and lowlights. He deserves at least a few hundred best-of and worst-of quotes!
Then there is at least one quote which really deserves a disclaimer in a footnote...
By following the link below the quote, people can learn that he claims he no longer agrees with what he wrote there. But I have added an extra disclaimer now.
Yes, I created that blog. And it is not meant to give Eliezer Yudkowsky a bad name. The context of each quote is provided.
I think it is useful to emphasize some of the beliefs held within this community.
Here is what someone over at Google+ wrote, which I agree with:
And actually I think exhibiting these quotes ‘out of context’ is quite a useful activity, since they make very striking claims which might be missed or glossed over in the middle of a lengthy & intricate argument. The context is available on the click of a button, and people can then judge for themselves how well they stand up.
Anyway, how could I possibly libel someone by publishing what he and his followers believe to be true and good?
You know people are lazy; how many will click through to see the context? (Have you attached Javascript handlers to record click-throughs and compared them against the page traffic?)
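gwern’s parenthetical suggestion could be made concrete along these lines. A minimal sketch, not anything the thread actually describes: the `a.context-link` selector, the `/log-clickthrough` endpoint, and both function names are hypothetical inventions for illustration.

```typescript
// Pure helper: fraction of page views that resulted in a context click.
function clickThroughRate(clicks: number, pageViews: number): number {
  if (pageViews <= 0) return 0; // no traffic yet; avoid division by zero
  return clicks / pageViews;
}

// Browser-side wiring, typed loosely so the sketch also type-checks
// outside a browser. Selector and endpoint are made-up examples.
function instrumentContextLinks(doc: any, nav: any): void {
  for (const a of doc.querySelectorAll("a.context-link")) {
    a.addEventListener("click", () => {
      // sendBeacon survives page unload, unlike a plain fetch
      nav.sendBeacon("/log-clickthrough", a.href);
    });
  }
}
```

Comparing the beacon tally against ordinary page-view counts would then answer the question of how many readers actually click through.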
Anyway, how could I possibly libel someone by publishing what he and his followers believe to be true and good?
How could I possibly libel someone by quoting, out of context, edited versions of things he has written? “I did [...] have sex with that woman.”
What would you have done if you had meant to give him a bad name but nonetheless had to refrain from simply lying?
That’s fairly easy. There are many ways to do that, although he is already pretty good at it himself.
First I would start acting like Otto E. Rössler with respect to risks from AI. Then I would write to as many AI researchers, computer scientists, popular bloggers, politicians, etc. as possible about how THIS IS CRUNCH TIME: “it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us.” And to back up my claims I would frequently cite posts and papers written by Yudkowsky and talk about how he is probably the most rational person alive and how most AI researchers are just biased.
Then I would write to as many AI researchers, computer scientists, popular bloggers, politicians, etc. as possible about how THIS IS CRUNCH TIME...
Not to nitpick or anything, but since you don’t actually seem to believe it’s “crunch time”, the strategy you outlined would indeed be a series of lies, regardless of whether Eliezer believes it true.
The tumblr strikes me as pretty much like the thing where you repeat what the other person says in a high-pitched voice, only minus the high-pitched voice.
The index wedrifid was alluding to, if anyone cares: http://shityudkowskysays.tumblr.com/
There’s always sorting in http://www.ibiblio.org/weidai/lesswrong_user.php?u=Eliezer_Yudkowsky
Thanks for making me find out what the Roko-thing was about :(
No lies. Hands-free.