hmm, I like the diagnosis of issues with the EA worldview, but I don’t really buy that they’re downstream of issues with consequentialism and utilitarianism themselves.
I would say it’s more like: Effective Altruism has historically embraced a particular flavor of utilitarianism and naive consequentialism that tries to stay compatible with pre-existing vibes and, to some degree, mainstream politics. Concretely, EAs are (to their credit) willing to bite some strange bullets and then act on their conclusions, and they’re also generally more pro-market than mainstream Democratic politics. But culturally they’re still very, very Blue Tribe-coded, and this causes them to deviate from actually-correct versions of consequentialism and utilitarianism in predictable directions.
Or: in my view, “utilitarianism and non-naive consequentialism with guardrails” is pretty close to the correct philosophy for humans; the issue is that the EA worldview systematically selects for the wrong guardrails[1]. But better ones are available; Eliezer, for example, wrote about this nearly 20 years ago in Ends Don’t Justify Means (Among Humans).
I’d be interested in hearing what kind of criticism you have of the posts in that sequence, and whether your issues with EA are more about a lack of emphasis on and embrace of those principles, or whether you think the ideas in the sequence are incomplete, or even fundamentally mistaken and leading people astray.
ETA: And IMO this systematic selection of the wrong guardrails is downstream of culture, not of utilitarianism.