It’s a solidarity argument: subtle signaling is a form of free-riding that imposes costs on more vulnerable women.
My first objection is: this is an intrasexual competition—women compete against other women for potential mates—so why should any woman care that her actions impose costs on her competitors?
Generally, solidarity arguments are likely to fall flat in zero-sum competitions. That seems to be the fundamental mistake in the reasoning of the imagined wannabe social engineer.
(Okay, not completely zero-sum. It is better for women in general if they sacrifice less to Moloch and get closer to the Pareto frontier. But such things are difficult to coordinate on.)
A bad equilibrium with consensus beats a bad equilibrium with friction. Norms work through shared expectations, and unilaterally defecting from a norm you correctly identify as suboptimal doesn’t improve the norm — it just removes you from the system’s benefits while imposing costs on everyone around you.
Yeah, sometimes “proposing a new norm” is the easy part of the problem; the difficult part is coordinating people to actually adopt it. And you need to think about that part explicitly, not just keep saying: “but my solution is obviously better, why don’t people see it?”.
(By the way, this is a pattern I associate with Mensa. In my experience, rationalists are usually smarter than this. Or maybe it’s just that in my area there are not enough teenage rationalists; the Bay Area may be different.)
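The coordination difficulty above can be made concrete with a toy payoff model (a sketch with made-up numbers, purely for illustration; none of this is from the original discussion): a “better” norm can be strictly worse for each adopter until enough others have already switched.

```python
# Toy illustration (hypothetical payoffs): a new norm that beats the old
# one only if (almost) everyone adopts it at once.

def payoff(follows_new_norm: bool, share_on_new_norm: float) -> float:
    """Payoff for one player, given the fraction of others on the new norm.

    Made-up numbers: the old norm yields a flat 1.0; the new norm yields
    2.0 * share - 0.5, i.e. it only beats the old norm once a majority
    has already switched.
    """
    if follows_new_norm:
        return 2.0 * share_on_new_norm - 0.5
    return 1.0

# Unilateral defection: you switch, nobody else has yet.
print(payoff(True, 0.0))   # -0.5, worse than staying on the old norm (1.0)

# Full coordination: everyone switches together.
print(payoff(True, 1.0))   # 1.5, better than the old equilibrium
```

Under these (made-up) payoffs, “my norm is obviously better” and “switching to it unilaterally is a losing move” are both true at the same time, which is why the adoption plan, not the proposal, is the hard part.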
When I try to think about specific instances of proposing new norms, it feels like often:
a) The new norm is actually not better, but its inventor refuses to listen to the (often obvious) arguments against it. For example, someone may propose to solve poverty by abolishing money (not an actual rationalist example) and consider the problem mostly solved (the remaining part is to spread the meme and help overthrow the evil people who would object), while refusing to think about questions such as how production and distribution would be coordinated in the new world without reinventing some equivalent of money. The solution seems better along some specific axis that the inventor finds aesthetically pleasing, and the analysis ends there.
b) The new norm, even if better in general, may be worse for some people who have the power to stop its adoption. The inventor is either unaware of the opposition or has no battle plan (and perhaps doesn’t even consider one necessary). Plans inconvenient for rich people will be opposed by lobbying and think tanks. Plans inconvenient for socially savvy people will be opposed by ridicule. Plans inconvenient for people with institutional or physical power will be opposed by force. Plans inconvenient for people who break the rules will simply be ignored. Plans inconvenient for the majority will be voted down. Each of these obstacles could in principle be overcome strategically, but that won’t happen automatically.
Chesterton’s Fence makes an appearance, but it’s drowned out by the heroic narrative.
Yes.
This is not what the Sequences explicitly say. Eliezer wrote about Chesterton’s Fence. There are posts about respecting existing equilibria. But the gestalt — the thing you walk away feeling after reading 2,000 pages about how humans are systematically irrational and how thinking clearly gives you superpowers — pushes overwhelmingly in the direction of “if you can see that the fence is inefficient, you should tear it down.”
I’d say the Sequences are about what you should do as an individual, not about how to organize a society. Even the parts about coordination and cooperation are about your individual capacity to coordinate and cooperate. I think only the “well-kept gardens die by pacifism” chapter is about policing someone else’s actions.
For example, there are good explanations of why you should resist the temptation to take power, but nothing about preventing other people from taking it. It’s taken for granted that society is mostly insane and you can’t change it, except by being a small voice of sanity yourself… and maybe, if others join you, you could later figure out something together.
This is a good strategy for creating a movement. Some individuals accept the message and change themselves accordingly; they gain the ability to recognize each other; and then they can form an online community and downvote/ban anyone who violates the rules.
(In such an environment, proposing new social norms is relatively easy: you write your proposal, everyone reads it, people state their arguments for and against, and maybe your proposal wins or maybe it doesn’t; and there is probably an admin who announces the final decision.)
The Sequences alone are insufficient for offline spaces, which offer more opportunities for conflict that cannot be avoided by clicking a button. They won’t tell you what to do when e.g. Zizians move next door to you.
Then again, we shouldn’t expect the Sequences to contain an answer to everything. They contain the meta: how to find out the truth, and that we should cooperate. That should be enough to start the process of figuring out the rest. And if, as AnthonyC suggests, we take “everything written by Scott Alexander” as a continuation of the Sequences, then many of these important things are there. Rationalists who don’t read Scott Alexander, especially the teenagers, may be making a huge mistake. (Perhaps we should tell them more explicitly.)
But you should be surprised if you’re right, not surprised if you’re wrong.
Yeah. Not infinitely surprised, maybe not even dramatically surprised, but it is still suspicious if, among several billion people on this planet, including several million with high IQ and a good education, not one of them has solved the problem that you just did.
It is possible that you have a competitive advantage. Or that the problem is relatively unimportant on the global scale (so that everyone who could have solved it is working on something else instead). But “I am smart and I have spent an afternoon thinking about the problem” is probably not a sufficient advantage.
Here are some heuristics for when acting on your analysis is more likely to go well:
I would add a point here: There is a middle way between “doing something yourself” and “trying to change the entire society”, and it’s “trying to convince a small group to adopt this as a norm”.
For example, creating a radical honesty group, where the members are radically honest only with each other (maybe even only during their official meetups), is easier than trying to convince everyone, and has a more limited downside than unilaterally becoming radically honest with everyone, everywhere.
But then you need to realize that even if a rule works well in your group, it is still unlikely to work in society at large, because your group is not representative of the general population (too rich, too sane, missing the ~1% of psychopaths, etc.), and because some things are easier to coordinate in small groups than in large ones.
By the way, the rationalist community itself is already large enough that this applies, so if you want to experiment with a new norm, you had better find a subgroup.