But there’s a clear selection bias here; maybe the 10% of people who are most unhappy with their jobs visit medical clinics 5x as much as anybody else.
In any case, thanks for the info.
I’d also add that when it comes to rationalizations, utilitarians should be the last ones to throw stones. In practice, utilitarianism has never been much more than a sophisticated framework for constructing rationalizations for ideological positions on questions where correct utilitarian answers are at worst just undefined, and at best wildly intractable to calculate. (As is the case for pretty much all questions of practical interest.)
The phenomenon of utilitarianism serving as a sophisticated framework for constructing rationalizations for ideological positions exists and is perhaps generic. But there’s an analogous phenomenon of virtue ethics being deployed rhetorically (think about both sides of the abortion debate). I strongly disagree that utilitarianism is ethically useless in practice. Do you disagree that VillageReach’s activity has higher utilitarian expected value per dollar than that of the Make A Wish Foundation?
Yes, there are plenty of situations where game-theoretic dynamics and coordination problems make utilitarian-style analysis useless, but your claim seems overly broad and sweeping.
I’m pretty sure that I endorse the same method you do, and that the “EEV” approach is a straw man.
The post doesn’t highlight you as an example of someone who uses the EEV approach and I agree that there’s no evidence that you do so. That said, it doesn’t seem like the EEV approach under discussion is a straw man in full generality. Some examples:
As lukeprog mentions, Anna Salamon gave the impression of using the EEV approach in one of her 2009 Singularity Summit talks.
One also sees this sort of thing on LW from time to time, e.g. [1], [2].
As Holden mentions, the issue came up in the 2010 exchange with Giving What We Can.
Can you tell us more about how you’ve seen people react to Yudkowsky? That these negative reactions are significant is crucial to your proposal, but I have rarely seen negative reactions to Yudkowsky (and never in person) so my first availability-heuristic-naive reaction is to think it isn’t a problem. But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I’m not looking, so would like to know more about that.
I haven’t seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him. Recalling Hanson’s view that a lot of human behavior is really signaling and vying for status, I interpret this ridicule as functioning to lower Eliezer’s status to compensate for what people perceive as inappropriate status grubbing on his part.
Most of the smart people who I know (including myself) perceive him as exhibiting a high degree of overconfidence in the validity of his views about the world.
This leads some of them to conceptualize him as a laughingstock, as somebody who’s totally oblivious, and to feel that the very idea that we should be thinking about artificial intelligence is equally worthy of ridicule. I personally am quite uncomfortable with these attitudes, agreeing with Holden Karnofsky’s comment:
“I believe that there are enormous risks and upsides associated with artificial intelligence. Managing these deserves serious discussion, and it’s a shame that many laugh off such discussion.”
I’m somewhat surprised that you appear not to have noticed this sort of thing independently. Maybe we hang out in rather different crowds.
Did that objectionable Yudkowsky-meteorite comment get widely disseminated? YouTube says the video has only 500 views, and I imagine most of those are from Yudkowsky-sympathizing Less Wrong readers.
Yes, I think that you’re right. I just picked it out as a very concrete example of a statement that could provoke a substantial negative reaction. There are other qualitatively similar (but milder) things that Eliezer has said that have been more widely disseminated.
It looks to me as though you’ve focused in on one of the weaker points in XiXiDu’s post rather than engaging with the (logically independent) stronger points.
I made a couple of comments here http://lesswrong.com/lw/1kr/that_other_kind_of_status/255f at Yvain’s post titled “That Other Kind of Status.” I messed up in writing my first comment in that it did not read as I had intended it to. Please disregard my first comment (I’m leaving it up to keep the responses in context).
I clarified in my second comment. My second comment seems to have gotten buried in the shuffle and so I thought I would post again here.
I’ve been a lurker in this community for three months and I’ve found that it’s the smartest community that I’ve ever come across outside of parts of the mathematical community. I recognize a lot of the posters as similar to myself in many ways and so have some sense of having “arrived home.”
At the same time the degree of confidence that many posters have about their beliefs in the significance of Less Wrong and SIAI is unsettling to me. A number of posters write as though they’re sure that what Less Wrong and SIAI are doing are the most important things that any human could be doing. It seems very likely to me that what Less Wrong and SIAI are doing is not nearly as important (relative to other things) as such posters believe.
I don’t want to get involved in a debate about this point now (although I’d be happy to elaborate and give my thoughts in detail if there’s interest).
What I want to do is to draw attention to the remarks that I made in my second comment at the link. From what I’ve read (several hundred assorted threads), I feel like an elephant in the room is the question of whether those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe this because you’re a part of these groups (*).
My drawing attention to this question is not out of malice toward any of you—as I indicated above, I feel more comfortable with Less Wrong than I do with almost any other large group that I’ve ever come across. I like you people and if some of you are suffering from the issue (*) I see this as understandable and am sympathetic—we’re all only human.
But I am concerned that I haven’t seen much evidence of serious reflection about the possibility of (*) on Less Wrong. The closest that I’ve seen is Yvain’s post titled “Extreme Rationality: It’s Not That Great”. Even if the most ardent Less Wrong and SIAI supporters are mostly right about their beliefs, (*) is almost certainly at least occasionally present, and I think that the community would benefit from a higher level of vigilance concerning the possibility of (*).
Any thoughts? I’d also be interested in any relevant references.
[Edited in response to cupholder’s comment, deleted extraneous words.]
Luke: I appreciate your transparency and clear communication regarding SingInst.
The main reason that I remain reluctant to donate to SingInst is that I find your answer (and the answers of other SingInst affiliates who I’ve talked with) to the question about Friendly AI subproblems to be unsatisfactory. Based on what I know at present, subproblems of the type that you mention are far too vague for even the best researchers to make progress on them.
My general impression is that the SingInst staff have insufficient exposure to technical research to understand how hard it is to answer questions posed at such a level of generality. I’m largely in agreement with Vladimir M’s comments on this thread.
Now, it may well be possible to further subdivide and sharpen the subproblems at hand to the point where they’re well defined enough to answer, but the fact that you seem unaware of how crucial this is is enough to make me seriously doubt SingInst’s ability to make progress on these problems.
I’m glad to see that you place high priority on talking to good researchers, but I think that the main benefit that will derive from doing so (aside from increasing awareness of AI risk) will be to shift SingInst staff members’ beliefs in the direction of the Friendly AI problem being intractable.
I doubt that becoming a psychiatrist is the way for you to do the most good. Some thoughts:
Influencing a single young utilitarian-inclined person who would not otherwise have become a psychiatrist to become one in the immediate future would have the same expected impact as you yourself becoming one. (The situation would be different if you had already gone through the schooling and were deciding whether to keep doing it or to do something else.)
I think (but am not sure) that there’s plausibly enough low-hanging opportunity for utilitarian networking and activism so that you could have at least the impact described in the above point by focusing on networking and activism. I have some ideas about this; PM me for more if you’d like.
As Carl Shulman alluded to, there’s the possibility of working at a foundation that moves more money than you would make over the rest of your life. I have little sense of what qualifications are required to get such a position, but I would guess that they’d be significantly less demanding than what it would take to become a psychiatrist. On the other hand, you wouldn’t have arbitrary flexibility over where the money went and so couldn’t use it optimally; so there’s some sort of trade-off that would need to be made.
My observation is that when utilitarian types (including myself) consider going through an unpleasant experience for a good cause, they tend to massively underestimate both the probability that they’ll burn out and the severity of burnout. See the second and third paragraphs of Carl’s comment here. I don’t completely identify with the framing but (sadly) have found in my own experience that making large personal sacrifices massively undermines my (ordinarily high) utilitarian motivations. Things can get very ugly when this happens (in the sense that it destroys my self-image and leads to long periods of self-loathing during which I doubt whether I was ever a good person).
When causes vary in effectiveness by orders of magnitude, it’s almost always more important to pin down the correct cause than it is to maximize one’s donated income. If job-related stress were to lead you to miss the best cause by an order of magnitude, then your effort would have been in vain.
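To make the arithmetic concrete, here’s a toy sketch in Python (the donation amounts and the 10x effectiveness gap are invented for illustration, not estimates of any real causes):

    # Toy comparison with invented numbers: a 10x gap in cause effectiveness
    # swamps even a 2x gap in donated income.
    utility_per_dollar_best = 10.0   # hypothetical value of the best cause
    utility_per_dollar_wrong = 1.0   # hypothetical value of a 10x-worse cause

    careful_chooser = 50000 * utility_per_dollar_best     # modest donations, right cause
    income_maximizer = 100000 * utility_per_dollar_wrong  # double the donations, wrong cause

    print(careful_chooser)   # 500000.0
    print(income_maximizer)  # 100000.0

Even doubling one’s donations while giving to the 10x-worse cause comes out five times behind.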
I don’t have direct answers to your questions, but the main point that I would make here is that such stated beliefs don’t necessarily run very deep.
I would guess that the people you were teaching with are quite similar to you, and that their stated beliefs which seem so foreign function as “belief as attire,” serving primarily to bind them together. Such beliefs tend to be compartmentalized and need not have a strong impact on their views about things overall.
I’m glad to hear that you’ve been enjoying your studies and am happy that you’re here.
You might find the References & Resources For Less Wrong posting useful.
Aside from the ones mentioned there, one book that made a favorable impression on me when I browsed through it, and that looked pretty accessible, is Carl Sagan’s The Demon-Haunted World: Science as a Candle in the Dark.
I concur with the points made by SarahC and Unnamed. My experience has been that, with very few exceptions, learning a substantive subject for the first time is a struggle. I’ve often felt bewildered and disoriented for a significant period of time before things started to gel and coalesce.
Quoting from Sections 7 and 8 of William Thurston’s Mathematical Education essay:
Mathematics is amazingly compressible: you may struggle a long time, step by step, to work through some process or idea from several approaches. But once you really understand it and have the mental perspective to see it as a whole, there is often a tremendous mental compression. You can file it away, recall it quickly and completely when you need it, and use it as just one step in some other mental process. The insight that goes with this compression is one of the real joys of mathematics.
After mastering mathematical concepts, even after great effort, it becomes very hard to put oneself back in the frame of mind of someone to whom they are mysterious.
I remember as a child, in fifth grade, coming to the amazing (to me) realization that the answer to 134 divided by 29 is 134⁄29 (and so forth). What a tremendous labor-saving device! To me, ‘134 divided by 29’ meant a certain tedious chore, while 134⁄29 was an object with no implicit work. I went excitedly to my father to explain my major discovery. He told me that of course this is so, a/b and a divided by b are just synonyms. To him it was just a small variation in notation.
One of my students wrote about visiting an elementary school and being asked to tutor a child in subtracting fractions. He was startled and sobered to see how much is involved in learning this skill for the first time, a skill which had condensed to a triviality in his mind.
Mathematics is full of this kind of thing, on all levels. It never stops.
[...]
Similarly, students at more advanced levels know many things which less advanced students don’t yet know. It is very intimidating to hear others casually toss around words and phrases as if any educated person should know them, when you haven’t the foggiest idea what they’re talking about. Less advanced students have trouble realizing that they will (or would) also learn these theories and their associated vocabulary readily when the time comes and afterwards use them casually and naturally. I remember many occasions when I felt intimidated by mathematical words and concepts before I understood them: negative, decimal, long division, infinity, algebra, variable, equation, calculus, integration, differentiation, manifold, vector, tensor, sheaf, spectrum, etc. It took me a long time before I caught on to the pattern and developed some immunity.
Okay, thinking it over for the last hour, I now have a concrete statement to make about my willingness to donate to SIAI. I promise to donate $2000 to SIAI in a year’s time if by that time SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.
I will urge GiveWell to evaluate existential risk charities with a view toward making this condition a fair one. If after a year’s time GiveWell has not yet evaluated SIAI, my offer will still stand.
[Edit: Slightly rephrased, removed condition involving quotes which had been taken out of context.]
And I would indeed expect IQ to correlate positively with what you might call openness.
My own experience is that the correlation is not very high. Most of the people who I’ve met who are as smart as me (e.g. in the sense of having high IQ) are not nearly as open as I am.
I didn’t realize at all that by “smart” you meant “instrumentally rational”;
I did not intend to equate intelligence with instrumental rationality. The reason why I mentioned instrumental rationality is that ultimately what matters is to get people with high instrumental rationality (whether they’re open minded or not) interested in existential risk.
My point is that people who are closed-minded should not be barred from consideration as potentially useful existential risk researchers, and that although people are being irrational to dismiss Eliezer as fast as they do, that doesn’t mean that they’re holistically irrational. My own experience has been that my openness has both benefits and drawbacks.
The point of my comment was that reading his writings reveals a huge difference between Eliezer and UFO conspiracy theorists, a difference that should be more than noticeable to anyone with an IQ high enough to be in graduate school in mathematics.
Math grad students can see a huge difference between Eliezer and UFO conspiracy theorists; they recognize that Eliezer is intellectually sophisticated. They’re still biased to dismiss him out of hand. See bentram’s comment.
Edit: You might wonder where the bias to dismiss Eliezer comes from. I think it comes mostly from conformity, which is, sadly, very high even among very smart people.
It seems almost certain that nuclear winter is not an existential risk in and of itself, but it could precipitate a civilizational collapse from which it’s impossible to recover (e.g. because we’ve already depleted too much of the low-hanging natural resource supply). This seems quite unlikely; maybe the chance conditional on nuclear winter is between 1 and 10 percent. Given that governments already consider nuclear war to be a national security threat, and that the probability seems much lower than x-risk due to future technologies, it seems best to focus on other things. Even if nothing direct can be done about x-risk from future technologies, movement building seems better than nuclear risk reduction.
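To make the implicit multiplication explicit, here’s a rough sketch; the unconditional probability of nuclear winter below is a placeholder I’ve invented, while the conditional range is the 1 to 10 percent guess above:

    # Sketch: x-risk from nuclear winter = P(winter) * P(unrecoverable collapse | winter).
    # p_winter is an invented placeholder, not an estimate made above;
    # the conditional values are the 1-10% guess from this comment.
    p_winter = 0.05  # placeholder assumption

    for p_collapse_given_winter in (0.01, 0.10):
        print(p_winter * p_collapse_given_winter)  # roughly 0.0005 and 0.005

On these made-up numbers the implied existential-risk contribution is small compared to the estimates usually given for risks from future technologies.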
If I understand him correctly, DanielLC is saying that the cost-effectiveness of donating to VillageReach is greater than your post suggests, because in the event of a Singularity scenario it could have the effect of allowing 28 people to lead very, very long lives rather than just 1 person.
Though there are many brilliant people within academia, there is also shortsightedness and groupthink within academia, which could have led the academic establishment to ignore important issues concerning the safety of advanced future technologies.
I’ve seen very little (if anything) in the way of careful rebuttals of SIAI’s views from the academic establishment. As such, I don’t think that there’s strong evidence against SIAI’s claims. At the same time, I have the impression that SIAI has not done enough to solicit feedback from the academic establishment.
John Baez will be posting an interview with Eliezer sometime soon. It should be informative to see the back and forth between the two of them.
Concerning the apparent groupthink on Less Wrong: something relevant that I’ve learned over the past few months is that some of the vocal SIAI supporters on LW express views that are quite unrepresentative of those of the SIAI staff. I initially misjudged SIAI on account of past unawareness of this point.
I believe that if you’re going to express doubts and/or criticism about LW and/or SIAI you should take the time and energy to express these carefully and diplomatically. Expressing unclear or inflammatory doubts and/or criticism is conducive to being rejected out of hand. I agree with Anna’s comment here.
Since de-colonization, Africa has gone from a roughly late-19th-century Euro-American equivalency to, except in specific areas, a roughly mid-19th-century equivalency. If you think aid does not actively harm development, what is your explanation?
So, first of all, I don’t know enough about the topic to know whether the claim that you’re making is correct. I would appreciate a reference supporting your claim.
A natural explanation for the alleged phenomenon that you allude to is that colonization introduced foreign elements to Africa which worked okay in juxtaposition with the colonial occupation, but which caused serious problems once the colonial powers pulled out, on account of these foreign elements meshing poorly with the native cultures.
So my impression has been that the situation is that
(i) Eliezer’s writings contain a great deal of insightful material.
(ii) These writings do not substantiate the idea [that Eliezer’s work has higher expected value to humanity than what virtually everybody else is doing].
I say this having read perhaps around a thousand pages of what Eliezer has written. I consider the amount of reading that I’ve done to be a good “probabilistic proof” that the points (i) and (ii) apply to the portion of his writings that I haven’t read.
That being said, if there are any particular documents that you would point me to which you feel do provide satisfactory evidence for the idea [that Eliezer’s work has higher expected value to humanity than what virtually everybody else is doing], I would be happy to examine them.
I’m unwilling to read the whole of his opus given how much of it I’ve already read without being convinced. I feel that the time that I put into reducing existential risk can be used to better effect in other ways.
Instead of donating directly to VillageReach, I’m going to just donate to GiveWell. They pool the funds they get and distribute them to their top charities, and I trust their analytic, evidence-based, largely utilitarian approach. Mostly, however, I think the work they’re doing gathering and distributing information about charities is critically important. If more charities actually competed on evidence of efficacy, the whole endeavour might be a lot different.
Seems sensible; note that, according to GiveWell’s “plan for 2011: top-level priorities”:
We aim to raise our annual revenue from around $200,000 to around $400,000 (if we do not make it all the way to $400,000 we will cut some staff going into 2012, as outlined in our overview of our financial situation). I (Holden Karnofsky) will be taking primary responsibility for this task, and expect it to take a fair amount of time.
so that GiveWell has room for more funding.
According to the most recent board meeting, GiveWell has been avoiding (and will continue for the foreseeable future to avoid) soliciting the general public, in order to avoid the appearance of conflict of interest in the eyes of people who are unfamiliar with the organization and looking for charity recommendations. But for people who are already sold on GiveWell’s mission and trust the staff to use funding wisely, giving directly to GiveWell makes sense. As you point out, in the past GiveWell has redistributed excess funds to its top-rated charities.
All the thousands of words you’ve written avoid confronting the main point, which is whether people should donate to SIAI.
I agree that my most recent post does not address the question of whether people should donate to SIAI.
So far, you have given exactly one order of magnitude estimate, and it was shot down as ridiculous by multiple qualified people. Since then, you have consistently refused to give any numbers whatsoever. The logical conclusion is that, like most people, you lack the order of magnitude estimation skill. And unfortunately, that means that you cannot speak credibly on questions where order of magnitude estimation is required.
There are many ways in which I could respond here, but I’m not sure how to respond because I’m not sure what your intent is. Is your goal to learn more from me, to teach me something new, to discredit me in the eyes of others, or something else?
Your conclusion doesn’t follow from your premise. Moreover, I don’t know what you mean by “advanced sanity techniques.” I agree that you’ve probably increased the number of cryonics signups substantially, but I doubt that increased rationality has played a significant role.