Note for the clueless (i.e. RationalWiki): This is photoshopped. It is not an actual slide from any talk I have given.
Here is a real photo if you need one ;-)
To take the first step and show that this is not some kind of evil ploy, I have now deleted (1) the Yudkowsky quotes page and (2) the post on his personality (explanation of how that post came about).
I realize that they were unnecessarily offensive and I apologize for that. If I could turn back the clock, I would do a lot differently and probably stay completely silent about MIRI and LW.
Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org...
Even Greg Egan managed to co-publish papers on arxiv.org :-)
ETA
Here is what John Baez thinks about Greg Egan (science fiction author):
He’s incredibly smart, and whenever I work with him I feel like I’m a slacker. We wrote a paper together on numerical simulations of quantum gravity along with my friend Dan Christensen, and not only did they do all the programming, Egan was the one who figured out a great approximation to a certain high-dimensional integral that was the key thing we were studying. He also more recently came up with some very nice observations on techniques for calculating square roots, in my post with Richard Elwes on a Babylonian approximation of sqrt(2). And so on!
That's what academics should actually be saying about Eliezer Yudkowsky, if his claims are true. How does an SF author manage to earn such a reputation instead?
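As an aside, for readers unfamiliar with the technique Baez mentions: the Babylonian (Heron's) method approximates a square root by repeatedly averaging a guess with the quotient of the target and that guess. The following is a minimal sketch in Python, my own illustration rather than code from Baez, Egan, or the post referenced above:

# Babylonian (Heron's) method for sqrt(s); illustrative only.
def babylonian_sqrt(s, guess=1.0, steps=6):
    """Approximate sqrt(s) by repeatedly averaging the guess with s/guess."""
    x = guess
    for _ in range(steps):
        x = (x + s / x) / 2.0  # each iteration roughly doubles the number of correct digits
    return x

print(babylonian_sqrt(2))  # prints roughly 1.41421356..., close to sqrt(2)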
What would the SIAI do given various amounts of money? Would it make a difference if you had 10 or 100 million dollars at your disposal? Would a lot of money significantly alter your strategic plan?
I can smell the “arrogance,” but do you think any of the claims in these paragraphs is false?
I am the wrong person to ask whether "a doctorate in AI would be negatively useful". I guess it is technically useful. And I am pretty sure that it is wrong to say that others are "not remotely close to the rationality standards of Less Wrong". That is of course the case for most humans, but I think there are quite a few people out there who are at least at the same level. I further think it is quite funny to criticize the people on whose work your arguments for risks from AI depend.
But that's beside the point. Those statements are clearly a mistake when it comes to public relations.
If you want to win in this world as a human being, you either have to be smart enough to overpower everyone else, or you have to get involved in a fair amount of social engineering and signaling games and refine your public relations.
Are you able to solve friendly AI without much more money and without hiring top-notch mathematicians, and then solve general intelligence in order to implement it and take over the world? If not, then at some point you will either need much more money or have to convince actual academics to work for you for free. And, most importantly, if you don't think that you will be the first to invent AGI, then you need to talk to a lot of academics, companies, and probably politicians to convince them that there is a real risk and that they need to implement your friendly AI theorem.
It is of the utmost importance to have an academic degree and reputation to make people listen to you, because at some point it won't be enough to say, "I am a research fellow of the Singularity Institute who wrote a lot about rationality and cognitive biases and you are not remotely close to our rationality standards." The moment you utter the word "Singularity", you have already lost. The very name of your charity shows that you underestimate the importance of signaling.
Do you think IBM, Apple or DARPA care about a blog and a popular fanfic? Do you think that you can even talk to DARPA without first getting involved in some amount of politics, making powerful people aware of the risks? And do you think you can talk to them as a "research fellow of the Singularity Institute"? If you are lucky, they might ask someone on their staff about you. And if you are really lucky, they will say that you are for the most part well-meaning and thoughtful individuals who never quite grew out of their science-fiction addiction as adolescents (I didn't write that line myself; it's from an email conversation with a top-notch person who did not give me permission to publish it). In any case, you won't make them listen to you, let alone do what you want.
Compare the following:
Eliezer Yudkowsky, research fellow of the Singularity Institute.
Education: -
Professional Experience: -
Awards and Honors: A lot of karma on lesswrong and many people like his Harry Potter fanfiction.
vs.
Eliezer Yudkowsky, chief of research at the Institute for AI Ethics.
Education: He holds three degrees from the Massachusetts Institute of Technology: a Ph.D in mathematics, a BS in electrical engineering and computer science, and an MS in physics and computer science.
Professional Experience: He worked on various projects with renowned people making genuine insights. He is the author of numerous studies and papers.
Awards and Honors: He holds various awards and is listed in the Who’s Who in computer science.
Who are people going to listen to? Well, okay... the first Eliezer might receive a lot of karma on lesswrong; the other doesn't have enough time for that.
Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who now portray lesswrong/SI negatively, and their number is growing. Many won't even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens may die by pacifism; others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.
Think about it. Imagine how easy it would have been for me to cause serious damage to SI and the idea of risks from AI by writing different kinds of emails.
Why does that RationalWiki entry about lesswrong exist? You are just lucky that they are the only people who really care about lesswrong/SI. What do you think will happen if you continue to act the way you do and real experts feel uncomfortable about your statements, or even threatened? It only takes one top-notch person who becomes seriously bothered to damage your reputation permanently.
What is each member of the SIAI currently doing and how is it related to friendly AI research?
Note: The following depicts my personal perception and feelings.
What bothers me is that Less Wrong isn't trying to reach the level of Timothy Gowers' Polymath Project, yet at the same time acts as if it were on that level by showing no inclination to welcome lesser rationalists or less educated people who want to learn the basics.
One of the few people here who sometimes tries to actually tackle hard problems appears to be cousin_it. I haven't been able to follow many of his posts, but all of them have been very exciting and actually introduced me to novel ideas and concepts.
Currently, most of Less Wrong is just boring. Many of the recent posts are superb, clearly written and show that the author put a lot of work into them. Such posts are important and necessary. But I wouldn’t call them exciting or novel.
I understand that Less Wrong does not want to intimidate most of its possible audience by getting too technical. But why not combine both worlds by creating accompanying non-technical articles that explain the issue in question and at the same time teach people the maths?
I know that some people here are working on decision theoretic problems and other technical issues related to rationality. Why don’t you talk about it here on Less Wrong? You could introduce each article with a non-technical description or write an accompanying article that teaches the basics that are necessary to understand what you are trying to solve.
I'm intrigued as to the thought processes and motivations which led to this article in light of your previous two weeks of comments and posts.
I realized that I might have entered some sort of vicious circle of motivated skepticism.
I can’t ask other people to explore both sides of an argument if I don’t do so either.
Someone wrote that I shouldn’t ask AI researchers about risks from AI if I don’t understand the basic arguments underlying the possibility.
I was curious whether my perception of the arguments in favor of risks from AI is flawed and whether I am missing important points, since I haven't read the Sequences.
I recently wrote that I agree with 99.99% of what Eliezer Yudkowsky writes. The number was wrong, but I wanted to show that my agreement isn't just made up.
I don't perceive myself to be a troll at all, although some thoughtless comments might have given that impression.
Although it looks like everyone hates me now, I still don't want to be wrong.
I know that not having read the Sequences is received badly, especially since I have posted a lot in the past. But that's not part of some incredibly evil plan or anything. I am also unable to play games I want to play for longer than 20 minutes. Yet I have to do physical exercises every day for about two hours, even though I don't really want to. It sometimes takes me months to read a single book. I think some here underestimate how people can act in a weird way without being evil. I have been in psychiatric therapy for three years now (yeah, I can prove this).
I can neither get myself to read the Sequences nor am I able to ignore risks from AI. But I am trying.
It looks like it has turned awful since I last read it:
This essay, while entertaining and useful, can be seen as Yudkowsky trying to reinvent the sense of awe associated with religious experience in the name of rationalism. It’s even available in tract format.
The biggest mistake of the entry in its current form seems to be that it lumps together all of Less Wrong and thereby stereotypes its members. So far this still seems to be a community blog with differing opinions. I have a karma score of over 1700, and I have been criticizing the SIAI and Yudkowsky (in a fairly poor way).
I hope you people are reading this. I don’t see why you draw a line between you and Less Wrong. This place is not an invite-only party.
LessWrong is dominated by Eliezer Yudkowsky, a research fellow for the Singularity Institute for Artificial Intelligence.
I don't think this is the case anymore. You can easily get karma by criticizing him and the SIAI. Most new posts are no longer written by him either.
Members of the Less Wrong community are expected to be on board with the singularitarian/transhumanist/cryonics bundle.
Nah!
If you indicate your disagreement with the local belief clusters without at least using their jargon, someone may helpfully suggest that “you should try reading the sequences” before you attempt to talk to them.
I don't think this is too much to ask. As the FAQ states:
Why do you all agree on so much? Am I joining a cult?
We have a general community policy of not pretending to be open-minded on long-settled issues for the sake of not offending people. If we spent our time debating the basics, we would never get to the advanced stuff at all.
It’s unclear whether Descartes, Spinoza or Leibniz would have lasted a day without being voted down into oblivion.
So? I don’t see what this is supposed to prove.
Indeed, if anyone even hints at trying to claim to be a “rationalist” but doesn’t write exactly what is expected, they’re likely to be treated with contempt.
Provide some references here.
Some members of this “rationalist” movement literally believe in what amounts to a Hell that they will go to if they get artificial intelligence wrong in a particularly disastrous way.
I've been criticizing the subject matter and got upvoted for it, as you obviously know, since you linked to my comments as a reference. Further, I never claimed that the topic is unproblematic or irrational, only that I feared unreasonable consequences and that I disagreed with how the content was handled. Yet I do not agree with your portrayal, insofar as it is not something that fits a wiki entry about Less Wrong. Just because something sounds extreme and absurd does not mean it is wrong. In principle, there is nothing that makes the subject matter fallacious.
Yudkowsky has declared the many worlds interpretation of quantum physics is correct, despite the lack of testable predictions differing from the Copenhagen interpretation, and despite admittedly not being a physicist.
I haven’t read the quantum physics sequence but by what I have glimpsed this is not the crucial point that distinguishes MWI from other interpretations. That’s why people suggest one should read the material before criticizing it.
P.S. I'm curious whether you know of a more intelligent and rational community than Less Wrong. I don't! Proclaiming that Less Wrong is more rational than most other communities isn't necessarily factually wrong.
Edit: “[...] by what I have glimpsed this is just wrong.” now reads “[...] by what I have glimpsed this is not the crucial point that distinguishes MWI from other interpretations.”
Do you think any part of what MIRI does is at all useful?
It now seems like a somewhat valuable research organisation / think tank. Valuable because they now seem to output technical research that is receiving attention outside of this community. I also expect that they will force certain people to rethink their work in a positive way and raise awareness of existential risks. But there are enough caveats that I am not confident about this assessment (see below).
I never disagreed with the basic idea that research related to existential risk is underfunded. The issue is that MIRI’s position is extreme.
Consider the following hypothetical and actual positions people take with respect to AI risks, in ascending order of perceived importance:
(1) Someone should actively think about the issue in their spare time.
(2) It wouldn't be a waste of money if someone were paid to think about the issue.
(3) It would be good to have a periodic conference to evaluate the issue and reassess the risk every year.
(4) There should be a study group whose sole purpose is to think about the issue. All relevant researchers should be made aware of the issue.
(5) Relevant researchers should be actively cautious and think about the issue.
(6) There should be an academic task force that actively tries to tackle the issue.
(7) Money should actively be raised to finance an academic task force to solve the issue.
(8) The general public should be made aware of the issue to gain public support.
(9) The issue is of utmost importance. Everyone should consider contributing money to a group trying to solve the issue.
(10) Relevant researchers who continue to work in their field, irrespective of any warnings, are actively endangering humanity.
(11) This is crunch time. This is crunch time for the entire human species. And it's crunch time not just for us, it's crunch time for the intergalactic civilization whose existence depends on us. Everyone should contribute all but their minimal living expenses in support of the issue.
Personally, most of the time, I alternate between positions 3 and 4.
Some people associated with MIRI take positions that are even more extreme than position 11 and go as far as banning the discussion of outlandish thought experiments related to AI. I believe that to be crazy.
Extensive and baseless fear-mongering might very well cause MIRI’s value to be overall negative.
I think a lot of SIAI’s “arrogance” is simply made up by people who have an instinctive alarm for “trying to accomplish goals beyond your social status” or “trying to be part of the sacred magisterium”, etc., and who then invent data to fit the supposed pattern.
Some quotes by you that might highlight why some people think you/SI are arrogant:
I tried—once—going to an interesting-sounding mainstream AI conference that happened to be in my area. I met ordinary research scholars and looked at their posterboards and read some of their papers. I watched their presentations and talked to them at lunch. And they were way below the level of the big names. I mean, they weren’t visibly incompetent, they had their various research interests and I’m sure they were doing passable work on them. And I gave up and left before the conference was over, because I kept thinking “What am I even doing here?” (Competent Elites)
More:
I don’t mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, that can learn from each other’s partial successes and accumulate hacks as a community. (Above-Average AI Scientists)
Even more:
I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them—just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified. (So You Want To Be A Seed AI Programmer)
And:
If you haven’t read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t. (Eliezer_Yudkowsky August 2010 03:57:30PM)
I have a bunch of posts on this topic:
(1) AI vs. humanity and the lack of concrete scenarios
(2) Questions regarding the nanotechnology-AI-risk conjunction
(3) AI risk scenario: Deceptive long-term replacement of the human workforce
(4) AI risk scenario: Elite Cabal
(5) AI risk scenario: Social engineering
(6) AI risk scenario: Insect-sized drones
...has someone had a polite word with them about not killing all humans by sheer accident?
Shane Legg is familiar with AI risks. So is Jaan Tallinn, a top donor of MIRI, who is also associated with DeepMind. I suppose they will talk about their fears with Google.
This made my trust in the community and my judgement of its average quality go down a LOT...
I expected almost everyone to agree with Eliezer on most important things...
Alicorn (top-poster) doesn’t agree with Eliezer about ethics. PhilGoetz (top-poster) doesn’t agree with Eliezer. Wei_Dai (top-poster) doesn’t agree with Eliezer on AI issues. wedrifid (top-poster) doesn’t agree with Eliezer on CEV and the interpretation of some game and decision theoretic thought experiments.
I am pretty sure Yvain doesn’t agree with Eliezer on quite a few things too (too lazy to look it up now).
Generally, there are a lot of top-notch people who don't agree with Eliezer. Robin Hanson, for example, but also others who have read all of the Sequences, like Holden Karnofsky from GiveWell, John Baez, or Katja Grace, who has been a visiting fellow.
But even Rolf Nelson (a major donor and well-read Bayesian) disagrees about the Amanda Knox trial. Or take Peter Thiel (SI's top donor), who thinks that the Seasteading Institute deserves more money than the Singularity Institute.
I apologize for any possible misunderstanding in this comment. My reading comprehension is often bad.
I know that in the original post I offered to add a statement of your choice to any of my posts. I stand by this, although I would phrase it differently now. I would like to ask you to consider that there are also personal posts which are completely unrelated to you, MIRI, or LW, such as photography posts and math posts. It would be really weird and confusing to readers to add your suggested header to those posts. If that is what you want, I will do it.
You also mention that I could delete my site (I have already deleted a bunch of posts related to you and MIRI). I am not going to do that, as it is my homepage and contains completely unrelated material. I am sorry if I gave a false impression here.
You further talk about withdrawing entirely from all related online discussions. I am willing to stop adding anything negative to any related discussion. But I will still use social media to link to material produced by MIRI or LW (such as MIRI blog posts) and to professional third-party critiques (such as a possible evaluation of MIRI by GiveWell), without adding my own commentary.
I stand by what I wrote above, irrespective of your future actions. But I would be pleased if you maintain a charitable portrayal of me. I have no problem if you write in the future that my arguments are wrong, that I have been offensive, or that I only have an average IQ, etc. But I would be pleased if you refrain from portraying me as an evil person or claiming that I deliberately lie. Stating that I misrepresented you is fine. But suggesting that I am a malicious troll who hates you is what I strongly disagree with.
As evidence that I mean what I write I now deleted my recent comments made on reddit.
Yes, it was a huge overreaction on my side and I shouldn't have written such a comment in the first place. It was meant as an explanation of how that post came about, not as an excuse. It was still wrong. The point I want to communicate is that I didn't do it out of some general desire to cause MIRI distress.
I apologize for offending people and for overreacting to something I perceived one way but which, as you wrote, was not that way at all. I already deleted that post yesterday.
Since you have not yet replied to my other comment, here is what I have done so far:
(1) I removed many more posts and edited others in such a way that no mention of you, MIRI or LW can be found anymore (except an occasional link to a LW post).[1]
(2) I slightly changed your given disclaimer and added it to my about page:
Note that I wrote some posts, posts that could previously be found on this blog, during a dark period of my life. Eliezer Yudkowsky is a decent and honest person with no ill intent, and anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret those posts, and leave this note here as an archive to that regret.
The reason for this alteration is that my blog has been around since 2001, and for most of that time it did not contain any mention of you, MIRI, or LW. For a few years it even contained positive references to you and MIRI. This can all be checked by looking at e.g. archive.org for domains such as xixidu.com. I estimate that much less than 1% of all content over those years has been related to you or MIRI, and even less was negative.
But my previous comment, in which I asked you to consider that your suggested header would look really weird and confusing if added to completely unrelated posts, still stands. If that's what you desire, let me know. But I hope you are satisfied with the actions I have taken so far.
[1] If I missed something, let me know.
You are clearly not capable of thinking rationally with respect to a fundamental belief where evidence makes the question overdetermined. Why should I listen to you?
People who hold obviously incorrect beliefs can still be highly intelligent and productive:
Peter Duesberg (a professor of molecular and cell biology at the University of California, Berkeley) “claimed that AIDS is not caused by HIV, which made him so unpopular that his colleagues and others have — until recently — been ignoring his potentially breakthrough work on the causes of cancer.”
Francisco J. Ayala, who "has been called the 'Renaissance Man of Evolutionary Biology'", is a geneticist ordained as a Dominican priest. His "discoveries have opened up new approaches to the prevention and treatment of diseases that affect hundreds of millions of individuals worldwide…"
Francis Collins (geneticist, Human Genome Project), noted for his landmark discoveries of disease genes and his leadership of the Human Genome Project (HGP), and described by the Endocrine Society as "one of the most accomplished scientists of our time", is an evangelical Christian.
Georges Lemaître (a Belgian Roman Catholic priest) proposed what became known as the Big Bang theory of the origin of the Universe.
Kurt Gödel (logician, mathematician, and philosopher) suffered from paranoia and believed in ghosts: "Gödel, by contrast, had a tendency toward paranoia. He believed in ghosts; he had a morbid dread of being poisoned by refrigerator gases; he refused to go out when certain distinguished mathematicians were in town, apparently out of concern that they might try to kill him."
There are many more examples. All of them are indeed outliers, and I don't think that calcsam has been able to prove that his achievements and his general capability to think clearly in some fields outweigh the heavy burden of being religious. Yet there is evidence that such people do exist, and he offers you the chance to challenge him.
Generally I agree with you, but I also think that calcsam provides a fascinating example of the internal dichotomy of some human minds, and a case study that might provide insight into how the arguments employed by Less Wrong fail in some cases.
The organization reported $118,803.00 in theft in 2009 resulting in a year end asset balance lower than expected. The SIAI is currently pursuing legal restitution.
It isn’t much harder to steal code than to steal money from a bank account. Given the nature of research being conducted by the SIAI, one of the first and most important steps would have to be to think about adequate security measures.
If you are a potential donor interested to mitigate risks from AI then before contributing money you will have to make sure that your contribution does not increase those risks even further.
If you believe that risks from AI are to be taken seriously, then you should demand that any organisation that studies artificial general intelligence establish significant measures against third-party intrusion and industrial espionage, measures that are at least on par with the biosafety level 4 required for work with dangerous and exotic agents.
It might be the case that the SIAI already employs various measures against the possibility of theft of sensitive information, yet any evidence that hints at the possibility of weak security should be taken seriously. In particular, the possibility that potentially untrustworthy people can access critical material should be examined.
If someone as capable as Terence Tao approached the SIAI, asking if they could work full-time and for free on friendly AI, what would you tell them to do? In other words, are there any known FAI sub-problems that demand some sort of expertise that the SIAI is currently lacking?