I’m working on Impact Markets – markets to trade nonexcludable goods.
If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.
Pronouns: Ideally they. But he/she and gender-neutral neopronouns are fine too.
As someone said in another comment, there are the core tenets of EA, and there is your median EA. Since you only seem to have quibbles with the latter, I'll address some of those, but I don't feel that accepting or rejecting them is particularly important for being an EA in the current form of the movement. We love discussing and challenging our views. Then again, I think I happen to agree with many median EA views.
VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decision based on the combination of both.”
I think this has been mentioned in the comments, but not very directly. The median EA view may be not to bother with philosophy at all: the branches that still call themselves philosophy haven't managed to reach a consensus on central issues over the centuries, so there is little hope that an individual EA will.
However, when I talk to EAs who do have a background in philosophy, I find that a lot of them are metaethical antirealists. Lukas Gloor, who also posted in this thread, recently convinced me that antirealism, though admittedly unintuitive to me, is the more parsimonious view, and thus the view under which I now operate. Under antirealism, moral intuitions (or at least some core ones) are all we have, so there can be no philosophical arguments for them, and thus no good or bad ones.
Even if this is not a median EA view, I would argue that most EAs act in accordance with it simply out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one's own moral system. Among the things that are important to the individual EA, however, there are likely many that are quite uncontroversial in most of society, and focusing on those views in one's "evangelical" EA work is much more cost-effective.
From my moral vantage point, the alternative (I'll consider a different counterfactual in a moment) would be a much more extraordinary claim: that I keep the money to spend on myself, where its marginal positive impact on my happiness is easily two or three orders of magnitude lower, and where my uncertainty over what will make me happy is only slightly lower than my uncertainty about some top charities.
You could break that up and note that in the end I'm not just deciding to "donate effectively" but deciding on a very specific intervention and charity to donate to, for example Animal Equality, which makes my decision much more shaky again. But I'd also have to make highly specific decisions, probably only slightly less shaky, when trying to spend the money on my own happiness.
However, the alternative might also be:
That's something the median EA has probably considered a good deal. Even at GiveWell there was a time, in 2013, when some of the staff pondered whether it would be better to hold off on their personal donations and donate a year later, once they had discovered better giving opportunities.
However, several of your arguments seem to stem from uncertainty in the sense of "There is substantial uncertainty, so we should hold off on doing X until the uncertainty is reduced." Trading off these elements in an expected value framework and choosing the right counterfactuals is, again, probably a rather personal decision when it comes to investing one's donation budget. Over time, though, I've become less risk-averse and more ready to act under some uncertainty, which has hopefully brought me closer to maximizing the expected utility of my actions. Besides, I don't expect any significant decreases in uncertainty about the best giving opportunities that I could wait for; hopefully there will be more opportunities with similar or only slightly greater levels of uncertainty, though.