The epistemic standard for LW posts is higher than this. The post doesn’t adhere to any of the default comment guidelines:
- Aim to explain, not persuade
- Try to offer concrete models and predictions
- If you disagree, try getting curious about what your partner is thinking
- Don’t be afraid to say ‘oops’ and change your mind
Separately, to repost my comment from another thread which was cross-posted from the EA forum:
I think the main issue here is that Less Wrong is not Effective Altruism, and that many (at a guess, most) LW members are not affiliated with EA or don’t consider themselves EAs. So from that perspective, while this post makes sense in the EA forum, it makes relatively little sense on LW, and to me looks roughly like being asked to endorse or disavow some politician X. (And if I extend the analogy, it’s inevitably about a US politician even though I live in another country.)
So this specific EA forum post is just a poor fit for reposting on LW without a complete rewrite.
In this particular case, the post does mention “LessWrong style jedi mindtricks”, but as it’s fundamentally confused about what EA is (“Akin to how EA is an optimization of altruism with “suboptimal” human tendencies like morality and empathy stripped from it”—what is it even supposed to mean to have altruism without morality?), I’m very skeptical that it accurately attributes whatever harm was done to LW content.
And separately, I’m just tired of the pattern of posts accusing community X of having a sexism/racism/whatever problem, without making the slightest effort to argue that community X does in fact fare worse on those dimensions than an appropriate reference class would.
Once again, if there’s a version of this post that’s epistemically sound and doesn’t require me to take the author’s accusations on blind faith, I’d be interested to read it.