[R]ationalist opinion leaders are better able to . . . give up faster when things don’t work.
Why is this a good thing? It seems to me that people give up too easily just as much as—if not more than—the opposite, especially when they’re trying something that they don’t expect to work. You have to stick with it long enough to collect a reasonable amount of data.
The deeper danger is in allowing your de facto sense of rationalist community to start being defined by conformity to what people think is merely optimal, rather than the cognitive algorithms and thinking techniques that are supposed to be at the center.
This is true. Wouldn’t it be beneficial, though, for any particular community to focus on upholding rationalist principles? If the LW community is specifically committed to rationality, other communities should be committed to rationality as a side effect—as an optimization heuristic.
The effective altruism community, for example, already does a pretty good job of this. Effective altruists tend to be aware of the sorts of biases that get in the way of effective giving. On the other hand, most charities and charity-based communities don’t have this focus on rationality. The breast cancer movement, for instance, does not give the same attention to rationality as the effective altruism movement.
Of course, if the breast cancer movement did give attention to rationality, it probably wouldn’t be the breast cancer movement anymore—it would be the effective altruism movement. If you’re looking for the optimal method for preventing breast cancer, why not generalize that and just look for the optimal method for helping people (which is almost certainly not breast cancer research)?