This is actually something I’ve been trying to articulate for a long time. It’s fantastic to finally have a scientific name for it (emotional vs. cognitive empathy), along with a significantly different perspective.
I’d be inclined to share this outside the rationalist community. Ideally, I or someone else would weave most of the same concepts into a piece with intellectuals in general as the target audience. (NOT someone associated directly with EA, though, and not with too much direct discussion of EA, because we wouldn’t want to taint it as a bunch of straw Vulcans.)
However, this is well written and might suffice for that purpose. The only things I think would confuse random people linked to this are the little Hanson sitting on your shoulder, the EY empathy/saving-the-world bit, and the mention of artificial intelligence. It might also not be clear that your argument is quite narrow in scope. (You’re only criticizing some forms of emotional empathy, not all forms, and not cognitive empathy. You aren’t, for instance, arguing against letting emotional empathy encourage us to do good in the first place, but only against letting it overpower the cognitive empathy that would let us do good effectively.)
So, does anyone have any thoughts on whether linking non-nerds to this would still be a net positive? I guess the value of information is high here, so I can share it with a few friends as an experiment. Worst case, I spend a few idiosyncrasy credits/weirdness points.
I’m actually not a fan of the bit I’ve written about Eliezer; I should probably remove it if that will allow you to share it with more people. That paragraph doesn’t do a lot for the piece.