Setting prevalence aside and taking your case study as representative of some subset, there are some other things that might be going on.
First, a desire to have someone else initiate maps to the Allowing quadrant of the Wheel of Consent, which minimizes effort while maximizing feeling desired. That said, true Allowing should still be compatible with giving clear responses, so this doesn’t by itself explain the aversion you are seeing.
Second, emotional reactions tend to follow the pattern: event ⇒ meaning (via priors) ⇒ affect ⇒ narrative. Suppose this woman holds strongly negative priors about men’s motivations. A consent request is then not simply coordination; it’s an implicit demand for legibility. If she sees the interaction as inherently adversarial, making herself legible hands you leverage. And if you then do all the right things, even that can be read as more sophisticated manipulation.
Now consider the internal conflict. She feels good about you initiating, then has a negative reaction to the consent request...while also consciously endorsing the belief that asking for consent is a Good Thing. Add the background tension of wanting to interact with men while viewing them as partially adversarial...plus the social advice to “trust your intuition,” long-term dissatisfaction with her relationship status, and a desire to change it. That’s substantial cognitive dissonance with no widely shared conceptual handles. Hence the shutdown.
So the behavior you describe may be better explained by Allowing plus aversion to legibility (under distrust), rather than by a desire for nonconsent.
Other, non-substantive notes:
LessWrong may have high decoupling norms, but on charged topics like this, disclaimers may help prevent contextualizers from inferring views you likely don’t endorse.
Watch for selection effects! Women who give clear signals and are comfortable with explicit consent often pair off quickly. The women who remain visible in dating contexts—and thus command more of your attention—are disproportionately those who communicate more ambiguously.
In Emergence of Simulators and Agents, my AISC collaborators and I suggested that whether consequentialist or simulator-like cognition (which one could describe as a subcategory of process-based reasoning) emerges depends critically on environmental and training conditions, particularly the “feedback gap”: the delay, uncertainty, or inference depth between an action and the feedback that scores it. Large feedback gaps select for instrumental reasoning and power-seeking; small feedback gaps select for imitation and compression. As examples, LLMs are trained primarily via self-supervised learning (a minimal feedback gap, since every token gets an immediate loss signal) and display predominantly simulator-like behavior, whereas RL-trained AlphaZero, which receives reward only at game end, is clearly agentic.
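To make the delay component of that contrast concrete, here is a toy sketch (my own construction, not from the paper; the function name and setup are illustrative). It measures, for each action in a trajectory, the distance to the next feedback event, contrasting an SSL-like regime (feedback every step) with an RL-like regime (one reward at episode end):

```python
def feedback_gap(action_steps, feedback_steps):
    """For each action step, distance to the next feedback event (None if there is none)."""
    gaps = []
    for a in action_steps:
        future = [f for f in feedback_steps if f >= a]
        gaps.append(future[0] - a if future else None)
    return gaps

# SSL-like regime: a loss signal at every step, so every action's gap is 0.
ssl_gaps = feedback_gap(range(10), range(10))

# RL-like (AlphaZero-style) regime: one reward at the end of a 10-step episode,
# so early actions face a large gap and must be credited through long inference chains.
rl_gaps = feedback_gap(range(10), [9])
```

This captures only the delay dimension of the feedback gap; the uncertainty and inference-depth dimensions would need a richer model.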
The dynamic you describe, in which patterns that steer toward states granting them more steering capacity outcompete other patterns, is real, but may be context-dependent. If so, CCCT requires both: (1) that the conditions under which consequentialist reasoning is advantageous arise inevitably, and (2) that consequentialism is inevitable given those conditions.
Claim 1, regarding conditions, is the part that needs defending. The “consequentialism is inevitable” argument requires showing either:
Market/competitive forces will inevitably push toward large-feedback-gap deployments (agentic AI doing long-horizon tasks), or
Even in small-feedback-gap contexts, consequentialist subpatterns will somehow emerge and take over.
Without establishing one of these (the first seems plausible to me, but that’s an intuition rather than an argument), the convergence thesis describes a risk contingent on our choices, not an inevitability. Of course, process-based reasoning is not the same as “safe” by any means, but that shifts the terrain of the argument.