I have taken the survey.
I’ve gone back, sorted the comments by ‘new’, upvoted everyone who commented that they did the survey since I took it, and upvoted everyone who did it before me. This way I’ve upvoted everyone, and they got more karma. It took me three minutes. If you spend a substantial amount of spare time on Less Wrong, it might be worth your time to do the same. The more people who do this, the more karma everyone gets. It can also act as an incentive for people to take the survey for karma even if they’re late to the game.
On the suggestion of Gunnar_Zarncke, this comment has been transformed into a Discussion post.
At the rationality meetup today, there was a great newcomer. He’s read most of Eliezer Yudkowsky’s original Sequences up to 2010, and he’s also read a handful of posts promoted on the front page. As a landing pad for the rationalist community, Less Wrong seems to me to be about updating beyond the abstract reasoning principles of philosophy past, toward realizing that, through a combination of microeconomics, probability theory, decision theory, cognitive science, social psychology, and information theory, humans can each hack their own minds, notice how they use heuristics, and increase the rate at which they form functional beliefs and achieve their goals.
Then, I think about how if someone has only been following the rationalist community of Less Wrong for the last few years, and then they come to a meetup for the first time in 2014, everyone else who’s been around for a few years will be talking about things that don’t seem to fit with the above model of what the rationalist community is about. Putting myself back into a newcomer/outsider perspective, here are some memes that don’t seem to immediately, obviously follow from ‘cultivating rationality habits’:
Citing Moloch, an ancient demon, as a metaphorical source of all the problems humanity currently faces.
How a long series of essays yearning for the days of yore has led to intensely insular discussion of polarized contrarian social movements. This doesn’t square with how Less Wrong has historically avoided political debates because they often drift to ideological bickering, name-calling, and signaling allegiance to a coalition. Such debates aren’t usually conducive to everyone reaching more accurate conclusions together, but we’re having them anyway.
Some of us reversing our previous opinions on what’s fundamentally true, or false.
Less Wrong also welcomes discussion of contrarian and controversial ideas, such as cryopreservation and transhumanism. If this is the first thing somebody learns about Less Wrong through the grapevine, the first independent sources they come across may be rather unflattering of the community as a whole, and disproportionately cynical about what most of us actually believe. Furthermore, controversy attracts media coverage like moths to a flame, which hasn’t gone too well for Less Wrong, and which falsely paints divergent opinions as our majority beliefs.
I’m not calling for Less Wrong to write a press coverage package, or protocol. However, I want to foster a local community in which I can discuss cognitive science, and the applications of microeconomics to everyday life, without new friends getting hung up on the weird beliefs they associate me with.
Additionally, in growing the local meetup, my friends and I in Vancouver have gone to other meetups, and seeded the idea that it’s worth our friends’ time to check out Less Wrong. We’ve made waves to the point that a local student newspaper may want to publish an article about what Less Wrong is about, and profile some of my friends in particular. However, this has backfired to the point where I meet new people, or talk to old friends, and they’re associating me with creepy beliefs I don’t hold. It sucks that I feel I might have to do damage control for my personal standing in a close-knit community. So, I’m going to try writing another post detailing all the boring, useful ideas on Less Wrong nobody else notices, such as Luke’s posts about scientific self-help, Scott’s great arguments in favor of niceness, community, and having better debates by interpreting your opponent’s arguments charitably, or the repositories of useful resources.
If you have links/resources about the most boring useful ideas on Less Wrong, or an introduction that highlights, e.g., all the discourse of Less Wrong which is merely the practical applications of scientific insight for everyday life, please share them below. I’ll try including them in whatever guide I generate.
A heuristic I’ve previously encountered for whether to donate to MIRI or FHI is to fund whichever one has more room for more funding, or whichever one is experiencing more of a funding crunch at a given time. As Less Wrong is a hub for an unusually large number of donors to each of these organizations, it might be nice if there were a (semi-)annual discussion on these matters with representatives from the various organizations. How feasible would this be?
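For concreteness, here’s a minimal sketch of that heuristic in Python. Every figure is an invented placeholder, and treating ‘room for more funding’ as the gap between a fundraising target and funds raised so far is my own simplifying assumption, not how either organization actually reports its finances:

```python
# Minimal sketch of the "fund whichever org has the larger gap" heuristic.
# All figures are made up; gap = target - raised is a simplifying assumption.

def funding_gap(target: float, raised: float) -> float:
    """Room for more funding: distance from the fundraising target."""
    return max(target - raised, 0.0)

# Hypothetical fundraiser states, for illustration only.
orgs = {
    "MIRI": funding_gap(target=1_000_000, raised=600_000),
    "FHI": funding_gap(target=800_000, raised=700_000),
}

# The heuristic: donate to whichever organization has more room left.
recipient = max(orgs, key=orgs.get)
print(f"Heuristic picks: {recipient}")  # -> MIRI with these numbers
```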
Context: Main is currently disabled; LessWrong 2.0
LessWrong is actively being redesigned. Until further notice, posts to Main have been disabled. Once the redesign is complete, LW may have multiple subs, none of which might be called ‘Main’, but one or more of which will be designated as the home of the nice Forest of Classic LW Stuff you’re hoping to find here. The only recent posts in Main are meetup posts and the survey, which were promoted there for visibility. Apparently, usage statistics show that for the last several months Discussion has been getting much more attention than Main, so Discussion is where the non-crap is. Of course, there is no longer the explicit division between crap and non-crap you’d expect the ‘Main’/‘Discussion’ divide to reflect. Try finding other ways to filter out crap, like reading the top posts from the previous week.
I’m drafting a post for Discussion about how users on LessWrong who feel disconnected from the rationalist community can get involved and make friends and stuff.
What I’ve got so far:
*Where everybody went away from LessWrong, and why
*How you can keep up with great content/news/developments in rationality on sites other than LessWrong
*Get involved by going to meetups, and using the LW Study Hall
What I’m looking for:
1. A post I can link to about why the LW Study Hall is great.
2. Testimonials about how attending a meetup transformed your social or intellectual life. I know this is the case in the Bay Area, and I know life became much richer for some friends I have in, e.g., Vancouver or Seattle.
3. A repository of ideas for meetups, and other socializing, if somebody planning or starting a meetup can’t think of anything to do.
4. How to become friends and integrate socially with other rationalists/LWers. A rationalist from Toronto visited Vancouver, noticed we were all friends, and asked us how we all became friends, rather than remaining a circle of individuals who share intellectual interests but not much else. The only suggestions we could think of were:
Be friends with a couple people from the meetup for years beforehand, and hang out with everyone else for 2 years until it stops being awkward.
and
If you can get a ‘rationalist’ house with roommates from your LW meetup, you can force yourselves to rapidly become friends.
These are bad or impractical suggestions. If you have better ones to share, that’d be fantastic.
Please add suggestions for the numbered list. If relevant resources don’t exist, notify me, and I/we/somebody can make them. If you think I’m missing something else, please let me know.
In the past, I’ve been someone who has found it difficult and costly to talk about Leverage and the dynamics around it, or organizations that are or have been affiliated with effective altruism, though on the occasions I have spoken up, I’ve said more than most. I would have done it more, but the costs were real: some of my friends in effective altruism interacted with me less, seemed to take me less seriously in general, and discouraged me from speaking up again, sometimes with what amounted to nothing more than peer pressure.
That was a few years ago. For lots of reasons, it’s easier, less costly, and less risky for me to speak now, and I feel less fear. I don’t know yet what I’ll say regarding any or all of this related to Leverage, because I don’t have any sense of how I might be prompted or provoked to respond. Yet I expect I’ll have more to say, though I don’t yet have any particular feelings about what I might share as relevant. I’m sensitive to how my statements might impact others, but for myself personally I feel almost indifferent.
As someone who was inspired by your post from a year ago, and who was thinking of contributing to LessWrong as a public archipelago, here are some things that stopped me from contributing much. Maybe other people share these reasons for wanting, but failing, to contribute in the last year.
1. There is less interest in the rationality community for the things I would be interested in writing about on LessWrong, or the rationality community is actively uninterested in them, which demotivates me from posting on LW. I am in private group chats and closed Facebook groups largely populated by members of the rationalist diaspora. These discussions don’t take place on LessWrong, not only because relatively few people there might participate, but because they’re discussions of subjects the rationality community is seen as hostile to, or indifferent toward, such as many branches of philosophy. This discourages these discussions on the public archipelago. I expect there are a lot of people who don’t post on LessWrong because they share this kind of perception. It’s possible to find people with whom to have private discussions, but having them on a public archipelago on LW, if it were possible to satisfy everyone, would make things easier and better from my viewpoint.
2. One particular worry I and others have is that, as more and more things become politicized in mainstream culture, more and more types of conversations on LW will be discouraged as ‘politically mindkilling.’ I personally wouldn’t know what to expect the norms here to be, though I am not as worried as others, because I don’t see it as much of a loss for there to be fewer half-baked speculations on political subjects online. Still, the fear that the list of subjects discouraged as too overtly ‘political’ could endlessly grow is discouraging.
3. The number of people on LessWrong who are interested in the subjects I am interested in is too small to motivate me to write more. I haven’t explored this much, and I think I have been too lazy in not trying. Yet a decent quantity of sufficiently engaged and deep feedback seems to me like what would motivate me to participate more on LW. One possibility is getting people I know who are not currently part of the rationality community, or typical LW users, to read my posts on LW, and build something new out of that. I think this is fine to talk about, and I really agree with the shift since LW2.0 to develop LW as its own thing, still working with, but distinct and independent from, MIRI and AI alignment, CFAR, and the rationality community. So carving out new online spaces on LW, which can maybe be especially tailored given how much control I have over my own posts as a user, is something I am still open to trying.
I haven’t read your entire series of posts on Givewell and effective altruism, so I’m basing this comment mostly on just this post. It seems to jump all over the place.
You say:
Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell’s top charities; they were worried that this would be an unfair way to save lives.
This sets up a false dichotomy. Both the Gates Foundation and Good Ventures are focused on areas in addition to funding interventions in the developing world. Obviously, they both believe those other areas, e.g., in Good Ventures’ case, existential risk reduction, present them with the opportunity to prevent just as many, if not more, deaths than interventions in the developing world. Of course, a lot of people disagree with the idea that something like AI alignment, which Good Ventures funds, is in any way comparable to cost-effective interventions in the developing world in terms of how many deaths it prevents, its cost-effectiveness, or its moral value. Yet given that you used to work for Givewell, and you’re now much more focused on AI alignment, it doesn’t seem like you’re one of those people.
If you were one of those people, you would find it quite objectionable that Good Ventures isn’t spending all their money on developing-world interventions, and is instead spreading out their grants over time to shape the longer-term future through AI safety and other focus areas. But if you are that kind of person, i.e., you believe it is indeed objectionable that Good Ventures is ‘hoarding’ their money for other focus areas like AI alignment rather than making developing-world interventions their top priority, that is not at all clear from your post.
Unless you believe that, then right here there is a third option besides “the Gates Foundation and Good Ventures are hoarding money at the price of millions of deaths” and “the numbers are wildly exaggerated”. Namely: both foundations believe the money they are reserving for focus areas other than developing-world interventions isn’t being hoarded at the expense of millions of lives. Presumably, this is because both foundations also believe the counterfactual expected value of these other focus areas is at least comparable to the expected value of developing-world interventions.
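To spell that reasoning step out (the notation is my own shorthand, not anything from your post), the third option can be written as a single inequality:

```latex
% Let v_D and v_F be a foundation's subjective expected value per
% marginal dollar (e.g., deaths averted) for developing-world
% interventions and for another focus area, respectively.
% Reserving money for the other focus area is consistent with
% accurate cost-per-life-saved numbers whenever
\[
  \mathbb{E}[v_F] \;\geq\; \mathbb{E}[v_D],
\]
% in which case that money isn't being "hoarded at the price of
% lives" even if the estimate of v_D is exactly right.
```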
If, across the proportions of their endowments they’ve respectively allotted for developing-world interventions and other focus areas, the Gates Foundation and Good Ventures appear not to be giving away their money as quickly as they could while still being as effective as possible, then your objecting to that would make sense. However, that would be a separate thesis, one you haven’t covered in this post. In its favour, you’ve already laid out the case for what’s wrong with a foundation like Good Ventures not fully funding the developing-world interventions of Givewell’s recommended charities each year.
Yet you would still need to make additional arguments for what Good Ventures is doing wrong in only granting to another focus area like AI alignment as much as they annually are now, instead of grantmaking at a much higher annual rate or volume. Were you to do that, it would be appropriate to point out what is wrong with the reasons an organization like the Open Philanthropy Project (Open Phil) doesn’t grant much more to their other focus areas each year.
For example, one reason it wouldn’t make sense for Open Phil to grant 100x as much in total to AI risk each year as they do now, starting this year, is that it’s not clear AI risk as a field currently has that much room for more funding. It is at least not clear AI risk organizations could sustain such a high growth rate if their grants from Open Phil were 100x bigger than they are now. That’s an entirely different point than any you made in this post. Also, as far as I’m aware, it isn’t an argument you’ve made anywhere else.
Given that you are presumably familiar with these considerations, it seems to me you should have been able to anticipate the possibility of the third option. In other words, unless you’re going to make the case that either:
it is objectionable for a foundation like Good Ventures to reserve some of their endowment for the long-term development of a focus area like AI risk, instead of using it all to fund cost-effective developing-world interventions, and/or;
it is objectionable Good Ventures isn’t funding AI alignment more than they currently are, and why;
you should have been able to tell in advance that the dichotomy you presented is indeed a false one. It seems that, of the two options in the dichotomy you presented, you believe cost-effectiveness estimates like those from Givewell are wildly exaggerated. I don’t know why you presented it as though you thought it might just as easily be either of the two scenarios, but the fact that you’re exactly the kind of person who should have been able to anticipate a plausible third scenario, and didn’t, undermines the point you’re trying to make.
Either scenario clearly implies that these estimates are severely distorted and have to be interpreted as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.
One thing that falls out of my commentary above is that since it is not clearly the case that only one of the two scenarios you presented is true, it is not necessarily the case either that the mentioned cost-effectiveness estimates “have to be interpreted as marketing copy designed to control your behaviour”. What’s more, you’ve presented another false dichotomy here. It is not the case that Givewell’s cost-effectiveness estimates must be exclusively one of either:
severely distorted marketing copy designed for behavioural control.
unbiased estimates designed to improve the quality of your decision-making process.
Obviously, Givewell’s estimates aren’t unbiased. I don’t recall Givewell ever claiming to be unbiased, although it is a problem that other actors in EA treat Givewell’s cost-effectiveness estimates as unbiased. I recall from reading a couple posts from your series on Givewell that it seemed as though you were trying to hold Givewell responsible for the exaggerated rhetoric of others in EA who use Givewell’s cost-effectiveness estimates. It seems like you’re doing that again now. I never understood then, and I don’t understand now, why you’ve tried explaining all this as if Givewell is responsible for how other people misuse their numbers. Perhaps Givewell should do more to discourage a culture of exaggeration and bluster in EA built on people using their cost-effectiveness estimates and prestige as a charity evaluator to make claims about developing-world interventions that aren’t actually backed up by Givewell’s research and analysis.
Yet that is another, different argument you would have to make, and one that you didn’t. To hold Givewell as exclusively culpable for how their cost-effectiveness estimates and analyses have been misused as you have, in the past and present, would only be justified by some kind of evidence Givewell is actively trying to cultivate a culture of exaggeration and bluster and shiny-distraction-via-prestige around themselves. I’m not saying no such evidence exists, but if it does, you haven’t presented any of it.
We should be more skeptical, not less, of vague claims by the same parties to even more spectacular returns on investment for speculative, hard to evaluate interventions, especially ones that promise to do the opposite of what the argument justifying the intervention recommends.
You make this claim as though it might be the exact same people in the organizations of Givewell, Open Phil, and Good Ventures who are responsible for all the following decisions:
presenting Givewell’s cost-effectiveness estimates in the way they do.
making recommendations to Good Ventures via Givewell about how much Good Ventures should grant to each of Givewell’s recommended charities.
Good Ventures’ stake in OpenAI.
However, it isn’t the same people making all of these decisions across these 3 organizations.
Dustin Moskovitz and Cari Tuna are ultimately responsible for what kinds of grants Good Ventures makes, regardless of focus area, but they obviously delegate much decision-making to Open Phil.
Good Ventures obviously has tremendous influence over how Givewell conducts the research and analysis behind particular cost-effectiveness estimates, but by all appearances they have let Givewell operate with a great deal of autonomy, and haven’t been trying to get Givewell to dramatically alter how they conduct their research and analysis. Thus, it would make sense to look to Givewell, and not Good Ventures, for what to make of their research and analysis.
Elie Hassenfeld is the current executive director of Givewell, and thus is the one to be held ultimately accountable for Givewell’s cost-effectiveness estimates, and recommendations to Good Ventures. Holden Karnofsky is a co-founder of Givewell, but for a long time has been focusing full-time on his role as executive director of Open Phil. Holden no longer co-directs Givewell with Elie.
As ED of Open Phil, Holden has spearheaded Open Phil’s work in, and Good Ventures’ funding of, AI risk research.
That there is a division of labour whereby Holden has led Open Phil’s work, and Elie Givewell’s, has been common knowledge in the effective altruism movement for a long time.
What many people disagreed with about Open Phil recommending Good Ventures take a stake in OpenAI, and Holden Karnofsky consequently being made a board member of OpenAI, is based partly on the particular roles played by the people involved in the grant investigation, which I won’t go through here, and partly, as with yourself, on the expectation that OpenAI may make the state of things in AI risk worse rather than better, whether through OpenAI’s ignorance or its misunderstanding of how AI alignment research should be conducted, at least in the eyes of many people in the rationality and x-risk reduction communities.
The assertion that Givewell is wildly exaggerating their cost-effectiveness estimates is an assertion that the numbers are being fudged at a different organization than Open Phil. The common denominator is of course that Good Ventures made grants based on recommendations from both Open Phil and Givewell, and that Holden and Elie are co-founders of both Open Phil and Givewell. However, between the two separate cases of Givewell’s cost-effectiveness estimates, and Open Phil’s process for recommending Good Ventures take a stake in OpenAI, we’re looking at two separate organizations, run by two separate teams, led by Elie and Holden respectively. If, in each of the cases you present, of Givewell and of Open Phil’s support for OpenAI, something wrong has been done, they are two very different kinds of mistakes made for very different reasons.
Again, Good Ventures is ultimately accountable for grants made in both cases. You could hold each organization accountable separately, but when you refer to them as the “same parties”, you’re making it out as though Good Ventures and their satellite organizations are, generically, either incompetent or dishonest. I say “generically” because, while you set it up that way, you know as well as anyone the specific ways in which the two cases of Givewell’s estimates, and Open Phil’s/Good Ventures’ relationship with OpenAI, differ. You know this because you have been one of the most prominent individual critics, if not the most prominent, in both cases for the last few years.
Yet when you call them all the “same parties”, you’re treating both cases as if the ‘family’ of Good Ventures and surrounding organizations generally can’t be trusted, because it’s opaque to us how they come to make the decisions that lead to the dishonest or mistaken outcomes you’ve alleged. Yet you’re one of the people who made clear to everyone else how the decisions were made; who the different people/organizations making the decisions were; and what one might find objectionable about them.
To substantiate the claim that the two different cases of Givewell’s estimates, and Open Phil’s relationship to OpenAI, are sufficient grounds to conclude that none of these organizations, nor their parent foundation Good Ventures, can generally be trusted, you could have held Good Ventures accountable for not being diligent enough in monitoring the fidelity of the recommendations they receive from either Givewell or Open Phil. Yet you didn’t do that. You could have also, now or in the past, tried to argue that Givewell and Open Phil should each separately be held accountable for what you see as their mistakes in the two separate cases. Yet you didn’t do that either.
Making any of those arguments would have made sense. Yet what you did was treat it as though Givewell, Open Phil, and Good Ventures all play the same kind of role in both cases. Not even all 3 organizations are involved in both cases. To summarize: the two cases of Givewell’s estimates, and Open Phil’s relationship to OpenAI, if they are problematic, are not the same kinds of problems caused by Good Ventures for the same reasons. Yet you’re making it out as though they are.
It might make more sense if you were someone else who just saw the common connection of Good Ventures, and didn’t know how to go about criticizing them other than to point out they were sloppy in both cases. Yet you know everything I’ve mentioned about who the different people are in each of the two cases, and the different kinds of decisions each organization is responsible for, and how they differ in how they make those decisions. So, you know how to hold each organization separately accountable for what you see as their separate mistakes. You know these things because you:
identified as an effective altruist for several years.
have been a member of the rationality community for several years.
are a former employee of Givewell.
have transitioned since you’ve left Givewell to focusing more of your time on AI alignment.
Yet you make it out as though Good Ventures, Givewell, and Open Phil are some unitary blob that makes poor decisions. If you wanted to make any one of, or even all, the other specific, alternative arguments I suggested about how to hold each of the 3 organizations individually accountable, it would have been a lot easier for you to make a solid and convincing argument than the one you’ve actually made regarding these organizations. Yet because you didn’t, this is another instance of you undermining what you yourself are trying to accomplish with a post like this.
As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent.
You started this post off with what’s wrong with Peter Singer’s cost-effectiveness estimates from his 1997 essay. Then you pointed out what you see as similar wrongs done by specific EA-aligned organizations today. Then you bridged to how, because funding gaps are illusory given the erroneous cost-effectiveness estimates, the Gates Foundation and Good Ventures are doing much less than they should with regards to developing-world interventions.
Then, you zoom in on what you see as the common pattern of bad recommendations being given to Good Ventures by Open Phil and Givewell. Yet the two cases of recommendations you’ve provided come from these 2 separate organizations, which make their decisions and recommendations in very different ways, and are run by 2 different teams of staff, as I pointed out above. And as I’ve established, you’ve known all this in intimate detail for years, so you’re making arguments that make much less sense than the ones you could have made based on the information available to you.
None of that has anything to do with the Gates Foundation. You told me in response to another comment I made on this post that it was another recent discussion on LW where the Gates Foundation came up that inspired you to make this post. You made your point about the Gates Foundation. Then, that didn’t go anywhere, because you made unrelated points about unrelated organizations.
For the record, when you said:
If you give based on mass-marketed high-cost-effectiveness representations, you’re buying mass-marketed high-cost-effectiveness representations, not lives saved. Doing a little good is better than buying a symbolic representation of a large amount of good. There’s no substitute for developing and acting on your own models of the world.
and
Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits.
none of that applies to the Gates Foundation, because the Gates Foundation isn’t an EA-aligned organization “mass-marketing high cost effectiveness representations” in a bid to get small, individual donors to build a mass movement of effective charitable giving to fill illusory funding gaps that the foundation could easily fill itself. Other things being equal, the Gates Foundation could obviously fill the funding gap. None of the rest of those things apply to the Gates Foundation, though, and they would have to for it to make sense that this post, and its thesis, were inspired by mistakes made by the Gates Foundation, not just EA-aligned organizations.
However, going back to “the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent”, it seems like you’re claiming the thesis of Singer’s 1997 essay, and the basis for effective altruism as a movement(?), are predicated exclusively on reliably nonsensical cost-effectiveness estimates from Givewell/Open Phil, not just for developing-world interventions, but in general. None of that is true: Singer’s thesis is not based exclusively on a specific set of cost-effectiveness estimates about specific causes from specific organizations, and Singer’s thesis isn’t the exclusive basis for the effective altruism movement. Even if the argument were logically valid, it would not be sound either way, because, as I’ve pointed out above, the premise that it makes sense to treat Givewell, Open Phil, and Good Ventures like a unitary actor is false.
In other words, because “mass-marketed high-cost-effectiveness representations” are not the foundation of “the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent” in general, and certainly aren’t some kind of primary basis for effective altruism, if that is what you were suggesting, your conclusion destroys nothing.
To summarize:
you knowingly presented a false dichotomy about why the Gates Foundation and Good Ventures don’t donate their entire endowments to developing-world interventions.
you knowingly set up a false dichotomy whereby either everyone has been acting the whole time as if Givewell’s and Open Phil’s cost-effectiveness estimates are unbiased, or the estimates are wildly exaggerated because those organizations are deliberately trying to manipulate people’s behaviour.
you cannot claim you weren’t cognizant of the fact these dichotomies are false, because the evidence with which you presented them consists of your own prior conclusions, drawn in part from your personal and professional experiences.
you said this post was inspired by the point you made about the Gates Foundation, but that has nothing to do with the broader arguments you’ve made about Good Ventures, Open Phil, or Givewell, and those arguments don’t back the conclusion you’ve consequently drawn about utilitarianism and effective altruism.
In this post, you’ve raised some broad concerns of things happening in the effective altruism movement I think are worth serious consideration.
My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell’s top charities; they were worried that this would be an unfair way to save lives.
I don’t believe the rationale for why Givewell doesn’t recommend to Good Ventures to fully fund Givewell’s top charities totally holds up, and I’d like to understand better why they don’t. I think Givewell maybe should recommend Good Ventures fully fund their own top charities each year.
Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits.
The concern that EA has a tendency to move people too far away from these more ordinary and concrete aspects of their lives is a valid one.
I am also unhappy with much of what has happened relating to OpenAI.
All these are valid concerns that would be much easier to take seriously if you presented arguments for them on their own, as opposed to presenting them as a few of many different assertions, related to each other at best in a very tenuous manner, in a big soup of an argument against effective altruism that doesn’t logically hold up, given the litany of unresolved issues with it I’ve pointed out above. It’s also not clear why you wouldn’t have realized any of this before you made this post, since all the knowledge that served as the evidence for your premises was information you yourself had published on the internet beforehand.
Even if all the apparent leaps of logic you’ve made in this post are artifacts of this post being a truncated summary of your entire, extensive series of posts on Givewell, and EA, the entire structure of this one post undermines the point(s) you’re trying to make with it.
[Meta]
While writing my recent post, I was thinking that it would be great if there were a summary of all the best answers received from widely shared ‘stupid questions’ in Stupid Questions Threads. I’d call this post ‘The Best Answers to Stupid Questions’, or ‘Frequently Asked Stupid Questions, and Answers’, and it would be analogous to how NancyLeibovitz summarized conclusions on procedural knowledge gaps. I might include responses from this thread in such a post. Among other things, I have some questions about how I/we might gauge the best (answers to) stupid questions, aside from upvotes. Would I only include questions/answers that seemed most generalizable?
Also, if I wrote this, how much should I care about privacy? I notice I don’t actually have a common sense in this regard, so I’m asking honestly.
If someone has already posed a question in a Stupid Questions thread using their account, is it fair to assume that it’s fine to share it more widely by profiling it specifically in a post? Is this a non-issue?
If it’s not fair to assume so, how should I go about seeking consent to include the Q/A in the post? What’s reasonable?
Thanks. Another note: what you’re trying with the upvotes/downvotes is the most innovative karma system I’ve ever used. It feels fluid. I haven’t gotten used to it yet, but I like the novelty of it.
Those making requests for others to come forward with facts in the interest of a long(er)-term common good could establish norms that serve as assurance or insurance that someone will be protected against potential retaliation against their reputation. I can’t claim to know much about setting up effective norms for defending whistleblowers, though.
It’s difficult to ordain beforehand what sort of sexual content is fit for LW, because it’s just as much about how you write it as it is about what you write about. Rationalist culture emerged from rationalists on LW1.0 discussing experiments with alternative lifestyle choices, and the ones which worked for some rationalists tended to spread. To be fair, polyamory and other sex-related behaviour are more viral among rationalists in this regard: when a bunch of single people who have difficulty relating to most others around them suddenly find they can relate to each other very well; are willing to make the otherwise taboo tradeoffs that sex-positivity introduces into one’s life; and tend to have been (relatively) deprived of sexual/romantic expression anywhere close to what they get out of relationships with other rationalists or rationalist-adjacent people, they’re going to be more open about this sort of thing. That’s not going away.
Since the rationalist diaspora is going to be talking about sex somewhere, I’d prefer conversations intending to match the map to the territory on sex-related topics take place on LW, if the likely counterfactual is them taking place somewhere else and having a toxic impact. Just because everything seems hunky-dory on LW doesn’t necessarily mean there aren’t other schisms in the community born of epistemic disagreements over charged issues. What’s more, if we can bring epistemic discussion of sex onto LW, while keeping the interpersonal and political aspects of it in our lives separate, things aren’t so muddled that we can’t be confident there are non-zero conversations among rationalists where intellectual progress is being made and recorded.
LessWrong aspires to be like academia in quality, but not necessarily like academia in culture or content. Much discussion of sexuality in academia probably either sucks or is too dry and abstract to be useful for much, to say nothing of how the research quality may have declined with the replication crisis, and how LW should aspire to do better than that.
So some conversations about sex can take place here. I don’t think anything is inherently wrong with that; I don’t think there’s anything inherently right about it either. I’d like conversations about sex on LW to have rationality be the focus, with sex the object of discussion that rationality is being applied to. The examples provided in ‘The Typical Sex Life Fallacy’ were relevant to illustrating the point, but personal anecdotes about your favourite kind of tea don’t add much. Also, not that I think Ozy would do this, but they come close enough that others might see what was written and take it as license for using intimately detailed examples from their own personal lives on LW, including the nature of their private relationships with other community members. I think that would be a mistake. At the least, switching out some of the details, and replacing the names of real people with Alice and Bob and Charlie and Dana, so nobody’s privacy comes even remotely close to being compromised without their consent, is a good idea.
Content warnings seem appropriate, and when and for what they’re used should be up to the user posting, unless someone in the comments indicates a preference for content warnings on a particular topic going forward after that, or even just signals they found coverage of some content offputting.
Cursing, except in cases where it’s hard to find any suitable alternative for describing the thing (e.g., “genderfuck”), seems like an unnecessary aesthetic flourish. I guess “fucking” is less icky than “humping”, but “sleeping together” seems to fit just fine. If we’re writing about sex on LW, I don’t see why we shouldn’t try to write it as SFW as we can, except for when changing up the tone might be key to the theme of a well-intentioned post.
I think if we write about sex, we need to remember this is *still LessWrong*. If we’re writing about sex, it’s for nerds, by nerds. If the information value of a post about sex is high enough, it’s fine if it’s written in a boring rather than titillating fashion, and it might have better results that way.
The Rationalist Community: Catching Up to Speed
Note: the below comment is intended for my friend(s) who is/are not on Less Wrong yet, or presently, as an explanation of how the rationality community has changed in the intervening years between when Eliezer Yudkowsky finished writing his original Sequences and 2014. This is an attempt to bridge procedural knowledge gaps. Long-time users, feel free to comment below with suggestions for changes, or additions.
Off of Less Wrong, the perspective of the rationality community has changed in light of the research, and expanded horizons, of the Center For Applied Rationality. A good introduction to these changes is the essay Three Ways CFAR Has Changed My View of Rationality, written by Julia Galef, the president of the CFAR.
On Less Wrong itself, Scott Alexander has written what this community of users has learned together in an essay aptly titled Five Years and One Week of Less Wrong.
The Decline of Less Wrong was a discussion this year about why Less Wrong has declined, where the rationalist community has moved, and what should, or shouldn’t be done about it. If that interests you, the initial post is great, and there is some worthy insight in the comments as well.
However, if you want to catch up to speed right now, then check out the epic Map of the Rationalist Community from Slate Star Codex.
For a narrower focus, you can search the list of blogs on the Less Wrong wiki, which are sorted alphabetically by author name, and have a short list of topics each blog typically covers.
Finally, if you’re (thinking of getting) on Tumblr, check out the Rationalist Masterlist, which is a collated list of Tumblrs from (formerly) regular contributors to Less Wrong, and others who occupy the same memespace.
So, first of all, when you write this:
Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated.
It seems like what you’re trying to accomplish for rhetorical effect, but not irrationally, is to demonstrate that the only alternative to “wildly exaggerated” cost-effectiveness estimates is that foundations like these are doing something even worse, that they are hoarding money. There are a few problems with this.
You’re not distinguishing where the specific cost-effectiveness estimates you’re talking about are coming from. It’s a bit of a nitpick to point out that it’s Givewell rather than Good Ventures that makes the estimates, since the 2 organizations are so closely connected, and Good Ventures can be held responsible for the grants they make on the basis of the estimates, if not for the original analysis that informed them.
At least in the case of Good Ventures, there is a third alternative: they are reserving billions of dollars not at the price of millions of preventable deaths, but because, for a variety of reasons, they intend, in the present and future, to give that money to a diverse portfolio of causes they believe present just as much, if not more, opportunity to prevent millions of deaths, or to otherwise do good. Thus, in the case of Good Ventures, you knew as well as anyone that the idea that it must be one of the two conclusions you’ve posed here is wildly misleading.
So, what you wrote might have worked better as something like:
Foundations like Good Ventures apportion a significant amount of their endowment to developing-world interventions. If the low cost-per-life-saved numbers Good Ventures is basing this giving off of are not wildly exaggerated, then Good Ventures is saving millions fewer lives than they could with this money.
The differences in my phrasing are:
it doesn’t imply foundations like Good Ventures or the Gates Foundation are the only ones to be held responsible for the fact the cost-effectiveness estimates are wildly exaggerated.
it doesn’t create the impression Good Ventures and the Gates Foundation, in spite of common knowledge, ever intended exclusively to use their respective endowments to save lives with developing-world interventions, which sets up a false dichotomy that the organizations are necessarily highly deceptive, or doing something even more morally indictable.
You say a couple sentences later:
Either scenario clearly implies that these estimates are severely distorted and have to be interpreted as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.
As you’ve covered in discussions elsewhere, the implication is clear: based on the numbers they’re using, these foundations could be saving more lives than they are with the money they’ve intended to use to save lives through developing-world interventions, which means the estimates are clearly distorted. You don’t need an “either scenario”, one half of which you wrote in a way that implies something could be true that you know is false, to get that implication across.
There aren’t 2 scenarios, one of which makes Good Ventures look worse than they actually are, and one in which the actual quality of their mission is less than the impression people have of it. There is just 1 scenario, in which the ethical quality of these foundations’ progress on their goals is less than the impression much of the public has gotten of it.
As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent. Insofar as there’s a way to fix these problems as a low-info donor, there’s already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense.
Here, you do the same thing of conflating multiple, admittedly related actors. When you say “the same people”, you could be referring to any or all of the following, and it isn’t clear who you are holding responsible for what:
Good Ventures
The Open Philanthropy Project
Givewell
The Gates Foundation
‘effective altruism’ as a movement/community, independent of individual, officially aligned or affiliated non-profit organizations
In treating each of these actors as part and parcel with each other, you appear to hold each of them equally culpable for all the mistakes you’ve listed here, which, as I’ve covered in this and my other, longer comment, is false in myriad ways. Were you to make clear in your conclusion who you are holding responsible for each respective factor in the total outcome of the exaggerated cost-effectiveness estimates’ negative consequences, your injunctions for how people should change their behaviour in response to these actors would have rung more true.
I didn’t know that, but neither do I mind the experience of having a comment deleted. I would mind:
that Benquo might moderate this thread to a stringent degree according to a standard he might fail to disclose, and thus can use moderation as a means to move the goalposts, while under the social auspices of claiming to delete my comment because he saw it as wilfully belligerent, without substantiating that claim.
that Benquo will be more motivated to do this than he otherwise would be on other discussions he moderates on LW, as he has initiated this discussion with an adversarial frame, and it is one Benquo feels personally quite strongly about (e.g., it is based on a long-lasting public dispute he has had with his former employer, and Benquo is not shy here about his hostility to at least large portions of the EA movement).
that were he to delete my comment on such grounds, there would be no record by which anyone reading this discussion would be able to hold Benquo accountable to the standards he used to delete my comments, unduly stacking the deck against an appeal I could make that in deleting my comment Benquo had been inconsistent in his moderation.
Were this to actually happen, of course I would take my comment and re-post it as its own article. However, I would object to how Benquo would have deleted my comment in that case, not the fact that he did do it, on the grounds I’d see it as legitimately bad for the state of discourse LW should aspire to. By checking what form Benquo’s moderation standard specifically takes beyond a reign of tyranny against any comments he sees as vaguely annoying or counterproductive, I am trying to:
1. externalize a moderation standard to which Benquo could be held accountable.
2. figure out how I can write my comment so it meets Benquo’s expectations for quality, so as to minimize unnecessary friction.
These are my thoughts as a CFAR workshop alumnus. I don’t have funds to donate right now, so my perspective isn’t backed up by a donation, or by a conscious choice not to donate. Feel free to put as much weight on my opinion as (any of) you like. I figured I would comment because more data is better than less data. I don’t claim my perspective is typical of CFAR workshop alumni.
After I attended a workshop, realizing that its cost to participants is revenue for the CFAR, I did a Fermi estimate of how much revenue the CFAR actually brings in. It included an estimate of the revenue from and cost of each participant, multiplied by the number of participants, minus the CFAR’s operating costs. I concluded that at best the CFAR would only be making ends meet if their only source of revenue was its workshops. As expensive as the workshops may seem, reading about the CFAR’s finances in this post made me realize how seriously the CFAR takes its own goal of providing and testing a minimum viable product. Regarding their finances and operations, they’re not goofing around.
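To show the shape of that estimate, here’s a sketch in Python. Every number below is an invented placeholder rather than a figure I actually used; only the structure (per-participant net revenue times attendance, minus operating costs) is taken from the estimate described above:

```python
# Fermi-estimate sketch of workshop economics; structure only.
# Every number is an invented placeholder, not the CFAR's actual figures.

workshop_price = 3900         # assumed revenue per participant ($)
cost_per_participant = 1500   # assumed food/lodging/instruction cost ($)
participants_per_year = 150   # assumed annual attendance
operating_costs = 360_000     # assumed annual staff/office costs ($)

net_per_participant = workshop_price - cost_per_participant
annual_surplus = net_per_participant * participants_per_year - operating_costs

print(f"Estimated annual surplus from workshops alone: ${annual_surplus:,}")
# With these placeholders the surplus is $0: at best, workshops alone
# would only let the organization make ends meet.
```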
The CFAR workshop I attended was a great experience for me. I mention to some friends that they seem like the sort who would get a lot out of it. However, I don’t give them a full recommendation, because the cost is often prohibitive for those in or just out of university. My friends tell me this, and I’m well aware of it. Grand hopes for the future aside, I hope that if the CFAR received enough donations, it could offer its workshops at a lower cost. I hope this not only for my friends, but also for all others who aren’t attending because of costs, yet whose attendance would benefit themselves, the CFAR, and its alumni community. This is why I personally respect their fundraising efforts.
Hooray to the CFAR for being one of few (non-profit) organizations who admit “we tried some stuff that didn’t work well. we’ll be rejigging and testing and improving efforts in the future!” Kudos! This earnestness is refreshing.
The CFAR is taking being part of effective altruism quite seriously. It didn’t seem to me they were treating this association as seriously one year ago. They might have felt as serious then, but I wasn’t receiving the signal. I am now. Also, I like their honesty in expressing how they’re not just identifying with effective altruism, but trying to reach the standard of what it ought to be.
I’d propose the following three:
vegan
vegetarian
reduced meat intake
The last option, ‘reduced meat intake’, is intended to represent pescetarianism, flexitarianism, and meat reduction all in one.
I did the survey! This is the second time I’ve completed an iteration of this survey, but this year was the first time I answered all the questions. I also did all the extra credit except for the digit ratio question.