Note for the clueless (i.e. RationalWiki): This is photoshopped. It is not an actual slide from any talk I have given.
Here is a real photo if you need one ;-)
To make the first step and show that this is not some kind of evil ploy, I have now deleted (1) the Yudkowsky quotes page and (2) the post on his personality (explanation of how that post came about).
I realize that they were unnecessarily offensive and I apologize for that. If I could turn back the clock I would do a lot differently and probably stay completely silent about MIRI and LW.
Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org...
Even Greg Egan managed to co-publish papers on arxiv.org :-)
ETA
Here is what John Baez thinks about Greg Egan (science fiction author):
He’s incredibly smart, and whenever I work with him I feel like I’m a slacker. We wrote a paper together on numerical simulations of quantum gravity along with my friend Dan Christensen, and not only did they do all the programming, Egan was the one who figured out a great approximation to a certain high-dimensional integral that was the key thing we were studying. He also more recently came up with some very nice observations on techniques for calculating square roots, in my post with Richard Elwes on a Babylonian approximation of sqrt(2). And so on!
That's what academics should actually be saying about Eliezer Yudkowsky, if the claims about him are true. How does an SF author manage to earn such a reputation instead?
What would the SIAI do given various amounts of money? Would it make a difference if you had 10 or 100 million dollars at your disposal? Would a lot of money alter your strategic plan significantly?
I can smell the “arrogance,” but do you think any of the claims in these paragraphs is false?
I am the wrong person to ask whether "a doctorate in AI would be negatively useful". I guess it is technically useful. And I am pretty sure that it is wrong to say that others are "not remotely close to the rationality standards of Less Wrong". That is of course the case for most humans, but I think there are quite a few people out there who are at least at the same level. I also find it quite funny to criticize the very people on whose work your arguments for risks from AI depend.
But that's beside the point. Those statements are clearly a mistake when it comes to public relations.
If you want to win in this world, as a human being, you either have to be smart enough to overpower everyone else, or you have to get involved in a fair amount of social engineering and signaling games and refine your public relations.
Are you able to solve friendly AI, without much more money, without hiring top-notch mathematicians, and then solve general intelligence to implement it and take over the world? If not, then you will at some point either need much more money or convince actual academics to work for you for free. And, most importantly, if you don’t think that you will be the first to invent AGI, then you need to talk to a lot of academics, companies and probably politicians to convince them that there is a real risk and that they need to implement your friendly AI theorem.
It is of utmost importance to have an academic degree and reputation to make people listen to you. Because at some point it won't be enough to say, "I am a research fellow of the Singularity Institute who wrote a lot about rationality and cognitive biases, and you are not remotely close to our rationality standards." Because at the point where you utter the word "Singularity" you have already lost. The very name of your charity already shows that you underestimate the importance of signaling.
Do you think IBM, Apple or DARPA care about a blog and a popular fanfic? Do you think that you can even talk to DARPA without first getting involved in some amount of politics, making powerful people aware of the risks? And do you think you can talk to them as a "research fellow of the Singularity Institute"? If you are lucky, they might ask someone on their staff about you. And if you are really lucky, they will say that you are for the most part well-meaning and thoughtful individuals who never quite grew out of their science-fiction addiction as adolescents (I didn't write that line myself; it's from an email conversation with a top-notch person who didn't give me permission to publish it). In any case, you won't make them listen to you, let alone do what you want.
Compare the following:
Eliezer Yudkowsky, research fellow of the Singularity Institute.
Education: -
Professional Experience: -
Awards and Honors: A lot of karma on lesswrong and many people like his Harry Potter fanfiction.
vs.
Eliezer Yudkowsky, chief of research at the Institute for AI Ethics.
Education: He holds three degrees from the Massachusetts Institute of Technology: a Ph.D. in mathematics, a BS in electrical engineering and computer science, and an MS in physics and computer science.
Professional Experience: He worked on various projects with renowned people and produced genuine insights. He is the author of numerous studies and papers.
Awards and Honors: He holds various awards and is listed in the Who’s Who in computer science.
Who are people going to listen to? Well, okay... the first Eliezer might receive a lot of karma on lesswrong; the other doesn't have enough time for that.
Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who now portray lesswrong/SI negatively, and the number of those people is growing. Many won't even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens may die by pacifism; others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.
Think about it. Imagine how easy it would have been for me to cause serious damage to SI and the idea of risks from AI by writing different kinds of emails.
Why does that RationalWiki entry about lesswrong exist? You are just lucky that they are the only people who really care about lesswrong/SI. What do you think will happen if you continue to act like you do and real experts feel uncomfortable about your statements, or even threatened? It just takes one top-notch person who becomes seriously bothered to damage your reputation permanently.
What is each member of the SIAI currently doing and how is it related to friendly AI research?
Note: The following depicts my personal perception and feelings.
What bothers me is that Less Wrong isn't trying to reach the level of Timothy Gowers' Polymath Project, but at the same time acts as if it were on that level by showing no inclination to welcome lesser rationalists or less educated people who want to learn the basics.
One of the few people here who sometimes tries to actually tackle hard problems appears to be cousin_it. I haven't been able to follow many of his posts, but all of them have been very exciting and have introduced me to novel ideas and concepts.
Currently, most of Less Wrong is just boring. Many of the recent posts are superb, clearly written and show that the author put a lot of work into them. Such posts are important and necessary. But I wouldn’t call them exciting or novel.
I understand that Less Wrong does not want to intimidate most of its possible audience by getting too technical. But why not combine both worlds by creating accompanying non-technical articles that explain the issue in question and at the same time teach people the maths?
I know that some people here are working on decision theoretic problems and other technical issues related to rationality. Why don’t you talk about it here on Less Wrong? You could introduce each article with a non-technical description or write an accompanying article that teaches the basics that are necessary to understand what you are trying to solve.
I’m intrigued as to the thought processes and motivations which lead to this article in light of your previous two weeks of comments and posts.
I realized that I might have entered some sort of vicious circle of motivated skepticism.
I can’t ask other people to explore both sides of an argument if I don’t do so either.
Someone wrote that I shouldn’t ask AI researchers about risks from AI if I don’t understand the basic arguments underlying the possibility.
I was curious whether my perception of the arguments in favor of risks from AI is flawed and whether I am missing important points, since I haven't read the Sequences.
I recently wrote that I agree with 99.99% of what Eliezer Yudkowsky writes. The exact number was wrong, but I wanted to show that my agreement isn't just made up.
I don't perceive myself to be a troll at all, although some thoughtless comments might have given that impression.
Although it looks like everyone hates me now, I still don't want to be wrong.
I know that not having read the Sequences is received badly, especially since I posted a lot in the past. But that's not some incredibly evil plan or anything. I am also unable to play games I want to play for longer than 20 minutes. Yet I have to do physical exercises every day for about 2 hours, even though I don't really want to. It sometimes takes me months to read a single book. I think some here underestimate how people can act in a weird way without being evil. I have been in psychiatric therapy for 3 years now (yes, I can prove this).
I can neither get myself to read the Sequences nor am I able to ignore risks from AI. But I am trying.
It looks like it has turned awful since I last read it:
This essay, while entertaining and useful, can be seen as Yudkowsky trying to reinvent the sense of awe associated with religious experience in the name of rationalism. It’s even available in tract format.
The most fatal mistake of the entry in its current form seems to be that it lumps together all of Less Wrong and thereby stereotypes its members. So far this still seems to be a community blog with differing opinions. I have a Karma score of over 1700, and I have been criticizing the SIAI and Yudkowsky (in a fairly poor way).
I hope you people are reading this. I don’t see why you draw a line between you and Less Wrong. This place is not an invite-only party.
LessWrong is dominated by Eliezer Yudkowsky, a research fellow for the Singularity Institute for Artificial Intelligence.
I don't think this is the case anymore. You can easily get Karma by criticizing him and the SIAI. Most new posts are not written by him anymore either.
Members of the Less Wrong community are expected to be on board with the singularitarian/transhumanist/cryonics bundle.
Nah!
If you indicate your disagreement with the local belief clusters without at least using their jargon, someone may helpfully suggest that “you should try reading the sequences” before you attempt to talk to them.
I don't think this is asking too much. As the FAQ states:
Why do you all agree on so much? Am I joining a cult?
We have a general community policy of not pretending to be open-minded on long-settled issues for the sake of not offending people. If we spent our time debating the basics, we would never get to the advanced stuff at all.
It’s unclear whether Descartes, Spinoza or Leibniz would have lasted a day without being voted down into oblivion.
So? I don’t see what this is supposed to prove.
Indeed, if anyone even hints at trying to claim to be a “rationalist” but doesn’t write exactly what is expected, they’re likely to be treated with contempt.
Provide some references here.
Some members of this “rationalist” movement literally believe in what amounts to a Hell that they will go to if they get artificial intelligence wrong in a particularly disastrous way.
I've been criticizing the subject matter and got upvoted for it, as you obviously know, since you linked to my comments as a reference. Further, I never claimed that the topic is unproblematic or irrational, only that I feared unreasonable consequences and disagreed with how the content was handled. Yet I do not agree with your portrayal insofar as it is not something that fits a wiki entry about Less Wrong. Just because something sounds extreme and absurd does not mean it is wrong. In theory there is nothing that makes the subject matter fallacious.
Yudkowsky has declared the many worlds interpretation of quantum physics is correct, despite the lack of testable predictions differing from the Copenhagen interpretation, and despite admittedly not being a physicist.
I haven't read the quantum physics sequence, but from what I have glimpsed this is not the crucial point that distinguishes MWI from other interpretations. That's why people suggest one should read the material before criticizing it.
P.S. I'm curious whether you know of a more intelligent and rational community than Less Wrong. I don't! Proclaiming that Less Wrong is more rational than most other communities isn't necessarily factually wrong.
Edit: “[...] by what I have glimpsed this is just wrong.” now reads “[...] by what I have glimpsed this is not the crucial point that distinguishes MWI from other interpretations.”
Do you think any part of what MIRI does is at all useful?
It now seems like a somewhat valuable research organisation / think tank. Valuable because they now seem to output technical research that is receiving attention outside of this community. I also expect that they will force certain people to rethink their work in a positive way and raise awareness of existential risks. But there are enough caveats that I am not confident about this assessment (see below).
I never disagreed with the basic idea that research related to existential risk is underfunded. The issue is that MIRI’s position is extreme.
Consider the following fictional and actual positions people take with respect to AI risk, in ascending order of perceived importance:
1. Someone should actively think about the issue in their spare time.
2. It wouldn't be a waste of money if someone was paid to think about the issue.
3. It would be good to have a periodic conference to evaluate the issue and reassess the risk every year.
4. There should be a study group whose sole purpose is to think about the issue. All relevant researchers should be made aware of the issue.
5. Relevant researchers should be actively cautious and think about the issue.
6. There should be an academic task force that actively tries to tackle the issue.
7. Money should actively be raised to finance an academic task force to solve the issue.
8. The general public should be made aware of the issue to gain public support.
9. The issue is of utmost importance. Everyone should consider contributing money to a group trying to solve the issue.
10. Relevant researchers who continue to work in their field, irrespective of any warnings, are actively endangering humanity.
11. This is crunch time. This is crunch time for the entire human species. And it's crunch time not just for us, it's crunch time for the intergalactic civilization whose existence depends on us. Everyone should contribute all but their minimal living expenses in support of the issue.
Personally, most of the time, I alternate between positions 3 and 4.
Some people associated with MIRI take positions that are even more extreme than position 11 and go as far as banning the discussion of outlandish thought experiments related to AI. I believe that to be crazy.
Extensive and baseless fear-mongering might very well cause MIRI’s value to be overall negative.
I think a lot of SIAI’s “arrogance” is simply made up by people who have an instinctive alarm for “trying to accomplish goals beyond your social status” or “trying to be part of the sacred magisterium”, etc., and who then invent data to fit the supposed pattern.
Some quotes by you that might highlight why some people think you/SI are arrogant:
I tried—once—going to an interesting-sounding mainstream AI conference that happened to be in my area. I met ordinary research scholars and looked at their posterboards and read some of their papers. I watched their presentations and talked to them at lunch. And they were way below the level of the big names. I mean, they weren’t visibly incompetent, they had their various research interests and I’m sure they were doing passable work on them. And I gave up and left before the conference was over, because I kept thinking “What am I even doing here?” (Competent Elites)
More:
I don’t mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, that can learn from each other’s partial successes and accumulate hacks as a community. (Above-Average AI Scientists)
Even more:
I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them—just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified. (So You Want To Be A Seed AI Programmer)
And:
If you haven’t read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t. (Eliezer_Yudkowsky August 2010 03:57:30PM)
If someone as capable as Terence Tao approached the SIAI, asking if they could work full-time and for free on friendly AI, what would you tell them to do? In other words, are there any known FAI sub-problems that demand some sort of expertise that the SIAI is currently lacking?