Today I will present a coherent and cogent case for Eliezer being a crook and a con artist. This is not for the purpose of defaming him but to show that he is wasting your money and your time. I realize that SIAI has been evaluated by an ignoramus already; I am merely filling in the gaps.
I will present facts and the proper citations in text. Let’s begin:
NOTE: all sources are direct quotes from Eliezer himself, in either video or text.
Facts Eliezer (hereafter referred to as DMF) claims of himself:
IQ: 143 (no mention of the test administered; if it was the Cattell scale, the score properly converts to roughly 126)
Highest Percentile Score: 9.9998 (no mention of the test that he saw the score on)
DMF learned calculus at age 13.
Source: http://www.youtube.com/watch?v=9eWvZLYcous
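The scale conversion in the IQ parenthetical is simple arithmetic: the Cattell scale uses a standard deviation of 24 IQ points where the Wechsler scale uses 15, so the same percentile maps to a lower Wechsler number. A minimal sketch checking that arithmetic (the 143 figure and the SD values are the post's assumptions, not verified data):

```python
def cattell_to_wechsler(cattell_iq, cattell_sd=24, wechsler_sd=15, mean=100):
    """Map a Cattell-scale IQ to its Wechsler-scale equivalent by
    preserving the z-score (distance from the mean in SD units)."""
    z = (cattell_iq - mean) / cattell_sd
    return mean + z * wechsler_sd

# 143 on the Cattell scale lands at about 126.9 on the Wechsler scale,
# consistent with the ~126 figure claimed above.
print(round(cattell_to_wechsler(143)))  # → 127
```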
Math Ability: “I was a spoiled math prodigy as a child...”
“[On Marcello’s math work:] ‘…That’s not right,’ and maybe half the time it will actually be wrong. And when I’m feeling inadequate I remind myself that having mysteriously good taste in final results is an empirically verifiable talent, at least when it comes to math.”
Source: http://johncarlosbaez.wordpress.com/2011/03/07/this-weeks-finds-week-311/
Standard Workday:
When writing: 2-3 hours writing, then a couple of hours off.
When doing FAI work: 2-3 hours of work, then a break, then 2-3 hours, with a day off before repeating.
(During down time math may be studied; it did not sound like that happened very much.)
Blogging: 1 post per day, sometimes 2; the posts do not seem to exceed 12 pages from what I have seen.
Source: http://www.youtube.com/user/michaelgrahamrichard#p/u/26/9kI1IxOrJAg
Admission by DMF: DMF admits to a weakness of will. Source: http://www.youtube.com/user/michaelgrahamrichard#p/u/26/9kI1IxOrJAg
Publications Officially Listed:
“In 2001, he published the first technical analysis of motivationally stable goal systems, with his book-length Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures. In 2002, he wrote ‘Levels of Organization in General Intelligence,’ a paper on the evolutionary psychology of human general intelligence, published in the edited volume Artificial General Intelligence (Springer, 2006). He has two papers in the edited volume Global Catastrophic Risks (Oxford, 2008), ‘Cognitive Biases Potentially Affecting Judgment of Global Risks’ and ‘AI as a Positive and Negative Factor in Global Risk.’”
Source: http://singinst.org/aboutus/team
Claims About the FAI Problem: “My current sense of the problems of self-modifying decision theory is that it won’t end up being Deep Math, nothing like the proof of Fermat’s Last Theorem—that 95% of the progress-stopping difficulty will be in figuring out which theorem is true and worth proving, not the proof.” Source: http://johncarlosbaez.wordpress.com/2011/03/07/this-weeks-finds-week-311/
AI Related Projects Started:
Flare. Source: http://flarelang.sourceforge.net/
Abandoned Flare: “JB, ditched Flare years ago. (2008)” Source: http://lesswrong.com/lw/tf/dreams_of_ai_design/msj
“A legacy of pre-2003 Eliezer, of no particular importance one way or another.” Source: http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/121t
DMF Discounted LOGI: “LOGI’s out the window, of course, as anyone who’s read the arc of LW could very easily guess.” Source: http://lesswrong.com/lw/1hn/call_for_new_siai_visiting_fellows_on_a_rolling/1av0
Stated Job Description and Plan: “Eliezer Yudkowsky: My job title is Research Fellow, but I often end up doing things other than research. Right now I’m working on a book on human rationality (current pace is around 10,000-13,000 words/week for a very rough first draft, I’m around 150,000 words in and halfway done with the rough draft if I’m lucky). When that’s done I should probably block out a year to study math and then go back to Artificial Intelligence theory, hopefully ever after (until the AI theory is done, then solid AI development until the AI is finished, et cetera).” Source: http://hplusmagazine.com/2010/07/21/simplified-humanism-positive-futurism-how-prevent-universe-being-turned-paper-clips/
How Is He a Crook?
DMF claims that he mastered calculus at 13 and is a math prodigy; what evidence is there for this claim?
Papers: The only paper with any degree of math, albeit simple math, is “An Intuitive Explanation of Bayes’ Theorem”. Source: http://yudkowsky.net/rational/bayes
What about his quantum physics posts? Source: http://lesswrong.com/lw/r5/the_quantum_physics_sequence/
Never once does DMF solve the wave equation, nor does DMF solve a single derivative or integral equation. The following are most of the posts with any math in them:
http://lesswrong.com/lw/pe/joint_configurations/
http://lesswrong.com/lw/q0/entangled_photons/
http://lesswrong.com/lw/q2/spooky_action_at_a_distance_the_nocommunication/
http://lesswrong.com/lw/q4/decoherence_is_falsifiable_and_testable/
The other posts contain amusing graphs, many hand-drawn, and pseudo-math:
http://lesswrong.com/lw/pl/no_individual_particles/
http://lesswrong.com/lw/pk/feynman_paths/
http://lesswrong.com/lw/pj/the_quantum_arena/
http://lesswrong.com/lw/pi/classical_configuration_spaces/
http://lesswrong.com/lw/pp/decoherence/
http://lesswrong.com/lw/pq/the_socalled_heisenberg_uncertainty_principle/ (amusing pseudo-math)
http://lesswrong.com/lw/pu/on_being_decoherent/
http://lesswrong.com/lw/pz/decoherence_as_projection/
If DMF mastered calculus at 13, then why is there no evidence of it in any of these posts? If DMF is a math prodigy who is good at explaining math, why is there no explanation of the wave equation? He does mention it in his timeless physics post, but it appears that he took his description from Wikipedia, since there are some striking similarities. It is one thing to talk in math jargon such as derivatives and gradients; it is another thing entirely to actually use those ideas to solve an equation or model a system. DMF has shown no evidence that he can do such things.
Since this is your first post here, I’ll temper my response and suggest you take the time to rebuild this comment into something coherent, using the proper link structure of LessWrong and rules of English grammar. You can click ‘Help’ in the lower right of the comment box for syntax.
It’d also be nice if you could put it in the right place, such as Discussion, instead of as an apparently random reply to an unrelated article.
However, before doing so, I’d further suggest you ensure that you understand what claims you’re making and how they are supported or not by available evidence. There are several older articles on the topic of evidence which can be found using search functions.
I work 4-5 hours at a stretch when writing.
By the way, I think we can all recognize this as the leading criticism of my ideas, to which all newcomers, requesting to know what my critics have said in response to me, should be directed.
This post probably is evidence that HonestAbe isn’t secretly Eliezer—unless that’s what he wants us to think!
To single out just one part… I don’t understand the point of

“Standard Workday: When writing: 2-3 hours writing then a couple hours off When doing FAI work: 2-3 hours work then break then 2-3 hours with a day off before repeating (During down time math may be studied, did not sound like that happened very much.) Blogging: 1 post per day sometimes 2 posts they do not seem to exceed 12 pages from what I have seen.”
4-6 hours is perfectly normal for authors. This is true whether you look at great scientists like Charles Darwin, or merely ordinary contemporary science/engineering faculty. See the quotes from Ericsson 1993, in ‘The Role of Deliberate Practice’, in http://www.gwern.net/About#fn23
*cough* DFTT (don’t feed the troll).
That comment was also an excuse to link to and discuss, somewhere more permanent than #lesswrong, some interesting snippets I found in my reading. (Criticizing the troll was just part of it.)
Fair enough!
(Is −30 a record low score?)
The Popper troll’s post got to −36 before Eliezer removed it in some way that left it available if you knew the URL, but not in recent posts.
The post stands at −31, which is certainly the lowest score I’ve ever seen here. It’s possible that there have been posts that had lower scores, but if so they were deleted before I saw them.
Trying not to feed the troll by replying to him directly, but I’m too curious not to ask: why does ve refer to EY as “DMF”?
http://lesswrong.com/lw/5o1/designing_rationalist_projects/46em suggests its meaning.
Why “DMF”?
DMF.
I’d support booting “HonestAbe” off the site.
Heh. I actually guessed that (didn’t know it was an established term), but didn’t want to say it...
Anyway, I favor thoroughly downvoting future posts like this and letting it work itself out.
This critique is so poor that I think there’s a nonzero chance that you’re a plant.
It took me a fraction of a second to remember the correct meaning of “plant”, during which I imagined HonestAbe as a cactus.
I find this side effect quite pleasant.
Hah, I’d thought I was the only one!
(And now I’m imagining a cactus that looks like Abraham Lincoln.)
I think an actual pro-SIAI plant would make arguments quite a bit better than HonestAbe’s; this is too obviously stupid to work as a strawman.
Zero is such a non-probability that I think there is a nonzero chance that you are a plant!
Certainly true. I still haven’t found a sufficiently accurate way of describing this sort of situation; “a low chance” would imply that the quality of the critique updated me away from believing the author was a plant, whereas “a significant chance” has too much weight. “Nonzero” works in common parlance but is pseudo-meaningless, since there’s a nonzero chance of practically anything.
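The update being described is just Bayes’ theorem: evidence this weak shifts belief in the “plant” hypothesis down, not up. A minimal sketch with entirely made-up numbers for illustration:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' theorem: how observing evidence E
    shifts belief in hypothesis H."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Hypothetical numbers: a competent plant would rarely write a critique
# this bad (2%), while a genuine troll plausibly would (30%), so the
# posterior on "plant" falls well below the 5% prior.
print(posterior(prior=0.05, p_e_given_h=0.02, p_e_given_not_h=0.30))
```

The posterior is still nonzero, which is the point being made: “nonzero” is true but nearly content-free, since the update actually moved the probability downward.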
What would you recommend in this case?
Non-negligible?
Good question. “Almost suspect” works sometimes. “Actually considered the possibility”. “Remote possibility”. Just ‘chance’. “chance (albeit slim)”. Oscar’s ‘Non-negligible’.
You left The Cartoon Guide to Löb’s Theorem out of your assessment.
Well, that doesn’t require calculus either, technically.
He also missed the opportunity to point out that organizational resources have been used to produce escapist fantasy literature. :)
Voting you down, even though I sort of agree with some of what you said. This is the wrong place to put this, as Rain said, and you should have taken the time to figure out how to present it in an easily readable fashion. Perhaps you could have included some more explanation and reasoning. For instance, how is his work schedule that different from what many college professors employed in comparable fields of research follow?
edit: I should also point out that I visit Less Wrong with the explicit purpose of wasting time because it is an interesting waste of time.