I wouldn’t be surprised if a lot of EAs see my takes here as a slippery slope to warm glow thinking and wanton spending that needs to be protected against.
I didn’t have this reaction at all. The four lessons you present are points about execution, not principles. IMO a lot of these ideas are cheap or free while being super high-value. We can absolutely continue our borg-like utilitarianism and coldhearted cost-benefit analysis while projecting hospitality, building reputation, conserving slack, and promoting inter-institutional cooperation!
But I do think they’ll require an EA spin. For example, EA can’t eschew high-value cause areas (like X-risk) just because it would look weird to be associated with them. But we can and should take reputation into account when selecting interventions (e.g., we should have weighed the benefit of a chance at electing an EA-aligned congressman against the reputational risk of putting millions of cryptobucks into a congressional election; not that we realistically had any control over SBF’s actions or identity as an EA).
For hospitality, I think one thing EAs can do is to distinguish the “controlling reason” we do an intervention from the “felt reason” we do it. What do I mean by that? An EA may choose to donate to Against Malaria Foundation for coldhearted cost-benefit analysis reasons. But that EA can also have other motivations, feelings, and values alongside the analysis—being able to tell a visceral, vivid, felt story about why they personally feel connected to that cause is a way to come across as not borg-like.
We can donate a little money locally just to project warmth and connection to the people around us, because we do believe in helping locally; we just try to prioritize helping globally even more. If people are concerned that we’ve shut off our compassion and feel alienated from EA on that basis, this is one way to counteract that impression, and it might even improve EA engagement, since it’s honestly a little difficult to relentlessly reject local appeals for aid in order to give 100% of your charity to EA causes.

Concretely: donate 9% of your income to EA-aligned charities and 1% to local charities. If you make $80,000/year, that 1% is still $800, which is about an average American’s annual charitable donation on its own. And now, instead of the story being “you give zero dollars to local charities so you can do borg-like optimization for X-risk-related donations,” the story can be “you give as much as the next person to local charities, while also donating a very substantial portion of your income to X-risk-related charities.”
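To make the arithmetic concrete, here’s a minimal sketch in Python (the 9%/1% split and the $80,000 salary are just the example figures above, not a recommendation):

```python
def split_donations(income, ea_share=0.09, local_share=0.01):
    """Split a 10% giving pledge into EA-aligned and local portions."""
    return income * ea_share, income * local_share

ea, local = split_donations(80_000)  # the example salary from above
print(f"EA-aligned: ${ea:,.0f}/yr, local: ${local:,.0f}/yr")
# EA-aligned: $7,200/yr, local: $800/yr
```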
To me this just seems like the same line of thinking that leads us to limit the EA donation appeal to 10% of the typical person’s income, instead of demanding that people donate until they’re living like the global poor. We relax the demands we make on our members in order to make our movement human-compatible. Encouraging a fraction of EA donations to be local or warm-fuzzy-optimized is another way of being human-compatible while still doing a huge amount of good.
not that we realistically had any control over SBF’s actions or identity as an EA
Agreed, little could be done then. But since then, I’ve noticed the community has an attitude of “Well, I’ll just keep an eye out next time” or “I’ll be less trusting next time” or something. This is inadequate; we can do better.
I’m offering decision markets that will make it harder for frauds to go unnoticed, prioritizing crypto (still experimenting with criteria). But when I show these to EAs, I’m kind of stunned by the lack of interest, as if their personal judgment is supposed to be less corruptible at detecting fraud than a prediction market. This has been very alarming to see.
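For readers unfamiliar with the mechanism, here’s a minimal sketch of Hanson’s logarithmic market scoring rule (LMSR), the pricing rule many prediction and decision markets use to turn trades into implied probabilities. To be clear, this is a generic illustration, not the specific markets I’m building; the market question, liquidity parameter, and trade size are invented for the example.

```python
import math

def lmsr_prices(q, b=100.0):
    """Implied probabilities under LMSR: p_i = exp(q_i/b) / sum_j exp(q_j/b).

    q: outstanding shares per outcome, b: liquidity parameter.
    """
    exps = [math.exp(qi / b) for qi in q]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

# Hypothetical two-outcome market: "Will org X be shown to have committed fraud?"
# Outcomes: [YES, NO]. All numbers below are invented for illustration.
q = [0.0, 0.0]
print(lmsr_prices(q))  # [0.5, 0.5]: no information yet

# A trader who suspects fraud buys 50 YES shares and pays the cost difference.
stake = lmsr_cost([50.0, 0.0]) - lmsr_cost(q)
q = [50.0, 0.0]
print([round(p, 3) for p in lmsr_prices(q)])  # YES price rises to ~0.622
print(round(stake, 2))                        # ~28.09: the trader's stake
```

The relevant property is that moving the YES price costs real money, so a suspicion only shows up in the price when someone is willing to stake something on it; that’s the sense in which a market is harder to quietly corrupt than informal vigilance.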
But who knows—riffing off the post, maybe that just means prediction markets haven’t built up enough reputation for LW/EA to trust them.
Thanks for your super thought-out response! I agree with all of it, especially the final paragraph about making EA more human-compatible. Also, I really love this passage:
We can absolutely continue our borg-like utilitarianism and coldhearted cost-benefit analysis while projecting hospitality, building reputation, conserving slack, and promoting inter-institutional cooperation!
Yes. You get me :’)
You inspired me to write this up over at the EA Forum, where it’s getting a terrible reception :D All the best ideas start out unpopular?
https://forum.effectivealtruism.org/posts/mBTvWNj9EXxyMM9TS/eas-should-donate-2-to-warm-fuzzy-causes-and-8-to-ea-causes