Thanks for the link! So it’s about that “miricult” website.
Now I feel like rationality itself is an infohazard. I mean, rationality itself won’t hurt you if you are sufficiently sane, but if you start talking about it, insufficiently sane people will listen, too. And that will have horrible consequences. (And when I try to find a way to navigate around this, such as talking openly only to certifiably sane people, that seems like a totally cultish thing to do.)
@PhilGoetz’s Reason as memetic immune disorder seems relevant here. It has been noted many times that engineers are disproportionately involved in terrorism, in ways that the mere usefulness of their engineering skills can’t explain.
Teaching rationality the shallow way—nope; knowing about biases can hurt people
Teaching rationality the deep way—nope; reason as a memetic immune disorder
:(
Perhaps there should be some “pre-rationality” lessons: something stabilizing that you need to learn first, so that learning about rationality does not make you crazy.
There are some materials that already seem to point in that direction: adding up to normality, ethical injunctions. Perhaps the CFAR workshops should start by focusing on these things, in a serious way (like, spend at least one day debating only this, check that the participants understood the lesson, and maybe kick out those who didn’t?).
Because, although some people get damaged by learning about rationality, it seems to me that many people don’t (some of them only because they don’t change in any significant way, but some of them internalize the lessons in a good way). If we could predict who would end up which way, that could allow us to reduce the damage, while still delivering the value.
Of course this only applies to the workshops; online communication is a different question. But it seems to me that the bad things mostly happen offline.
There is an alternative way, the other extreme: get more and more rationalists.
If the newly formed communities do not share the moral inclinations of the LW community, they might form some new coordination structures of their own[1]; if we don’t draw from the circles of the desperate, those structures will tend to benefit others as well (and, on the other hand, a community with a big proportion of very dissatisfied people would naturally start a gang or overthrow whatever institutions are around).
(It’s probably worth exploring in a separate post?)
I claim that goals and means are non-orthogonal in this case. A community of altruistic people needs structures that involve learning a fair bit about people’s values; a group that wants tech companies to focus on consumers’ quality of life more than they currently do does not.
In my experience, the rationality community in Vienna does not share any of the Bay Area craziness that I read about, so yeah, it seems plausible that different communities will end up significantly different.
I think there is a strong founder effect… new members will decide whether or not to join depending on how comfortable they feel among the existing members. Decisions like “we have these rules / we don’t have any rules” or “there are people responsible for organization and safety / everyone needs to take care of themselves”, once established, easily become “the way this is done here”.
But you are also limited by the pool you are recruiting potential new members from. It could be that there are simply not enough people to make a local rationality community. It could be that the local memes are so strong (e.g. a positive attitude towards drug use, or wokeness) that in practice you cannot push against them without actively rejecting most of the wannabe members, which would be a weird dynamic. (You already need to push strongly against people who simply do not get what rationality means, but are trying to join anyway.)