One possible strategy for making this easier is to explicitly have sub-communities for each optimal behaviour, each of which explicitly includes some non-rationalists and excludes some rationalists. This is based on the naive model that people want to identify their behaviour with a community or it will feel odd, and that there is some pressure not to have overlapping signals of membership in different tribes, since that would be confusing.
I like that idea, but I think there can be too much granularity. The feeling of 'people who agree with me on X also agree with me on completely unrelated Y' is awesome.
The halo effect may be awesome … but it’s deadly!
The halo effect is not necessarily either a cause or a consequence of the quoted phenomenon.
Do you agree, then, that it is a potential explanation? If so, what's a more plausible one? It may be a limitation of my imagination, but I don't see one.
Try.
I smell a recommender system. Think of what sites like amazon.com do with “people who like X also liked Y”.
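For the curious, the "people who like X also liked Y" pattern mentioned above can be sketched as simple item-to-item co-occurrence counting. This is a minimal illustration with made-up data, not how Amazon actually implements it:

```python
from collections import defaultdict

# Hypothetical user -> liked-items data, invented for illustration.
likes = {
    "alice": {"X", "Y"},
    "bob":   {"X", "Y", "Z"},
    "carol": {"X", "Z"},
    "dave":  {"X", "Y"},
}

# Count how often each ordered pair of items is liked by the same user.
co_counts = defaultdict(int)
for items in likes.values():
    for a in items:
        for b in items:
            if a != b:
                co_counts[(a, b)] += 1

def also_liked(item):
    """Items most often liked alongside `item`, most frequent first."""
    pairs = [(b, n) for (a, b), n in co_counts.items() if a == item]
    return [b for b, n in sorted(pairs, key=lambda p: -p[1])]

print(also_liked("X"))  # Y co-occurs with X three times, Z twice
```

Real recommender systems normalise these counts (e.g. cosine similarity) so that wildly popular items don't dominate every list, but the raw co-occurrence table is the core idea.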
This is just an observation. I’m not saying that we should go out and build a system to match these people and these Xs and Ys.
I posted a comment with a similar sentiment. I think it’s not necessarily important to explicitly include non-rationalists in communities (although I’m not sure that’s what you’re saying, so forgive me if I misinterpreted you). But I do think it’s a good idea to promote rationalist leanings in groups that don’t necessarily identify as rationalist.
In fact, that’s how I discovered LW. I participate in the utilitarianism community, and a large proportion of utilitarians (on the internet, at least) also identify as rationalist. I started reading LW as an indirect result of my reading about utilitarianism. Utilitarians certainly seem to perform better when they think like rationalists, and other communities likely would, too.