Courtesans and the First Move
[Edit: This was a test to see if anyone would call me out for writing a post with AI. Good job Eye You for spotting it and calling me out! After 17 people voted, you were the first one to confidently call this post out for the slop that it is.]
[Edit #2: To clarify, the core ideas of this post, including the Courtesan framing device, are something I’ve been working on for many years. The “slop” is just individual sentence structure. This is not something that has been posted with disregard for truth. See comment for details. (That said, this post does contain one anomalous detail that a sufficiently advanced somatic practitioner should be able to detect.)]
You notice her because she doesn’t hesitate.
She enters the room, lets her eyes move once across it—not searching, not lingering—and sits down as if the decision had already been made somewhere else. There’s no adjustment afterward. No shifting. No checking whether the chair is right. It’s done.
At first, that’s all. Later, you realize the conversation seems to be happening more cleanly than usual. People finish sentences. Jokes either land or don’t, and nobody rescues them. When there’s a pause, it doesn’t itch. It just waits. You find yourself saying what you meant to say the first time, without circling.
She reaches for a glass and puts it back down without looking at it. You notice this only because nothing else moves when she does—no shoulder lift, no extra breath. The motion ends exactly where it should, and your attention slides past it without catching.
At some point you try to steer the conversation. You don’t remember deciding to. It either works immediately or fizzles out before it starts. There’s no resistance, no awkwardness—just a quiet sense that something was already decided.
What’s strange is how normal it feels. You leave thinking the interaction went well, clear-headed, slightly lighter. Later, replaying it, you can’t quite tell why some topics never came up or why you didn’t push certain points. It all felt like your choice at the time.
Only afterward do you notice that nothing needed fixing. No apologies. No recalibration. No lingering tension. You don’t think of it as being influenced. You think of it as one of those rare interactions where everything just worked.
That’s the part you don’t see.
An Invisible Skill
What’s operating here isn’t temperament, and it isn’t calm. It isn’t confidence either, though that’s the word people reach for when they don’t have a better one. It’s a skill—but not the kind that announces itself where skills are usually noticed.
Most skills show up at execution. You can see someone choose words, manage tone, regulate emotion, adjust posture. You can watch effort appear and resolve. This one doesn’t live there. By the time anything looks like action, the relevant work is already finished.
Its effects show up indirectly, in what never quite becomes necessary. Lines of argument that don’t form. Reactions that don’t escalate. Tension does arise, but it doesn’t pile up. It dissipates—often through humor—before it hardens into something that needs repair.
From the outside, this reads as ease. From the inside, it doesn’t feel like control. There’s no sense of holding back, no moment of restraint. Fewer things simply come online in the first place.
This is why the skill is easy to misread. People assume the person is exercising judgment in the moment—choosing better responses, applying tact, managing themselves carefully. But judgment lives at the surface. It’s what awareness reports after a much larger process has already done its work.
Awareness isn’t slow. It’s downstream. What’s happening here takes place at the level where interpretation, readiness, and response are assembled long before they’re noticed. By the time something reaches awareness, it already carries momentum. This training works by shaping what gets built upstream, not by correcting what appears downstream.
At that level, influence doesn’t look like influence. Nothing is imposed. Nothing is argued. Certain possibilities gain traction; others never quite cohere. What the other person experiences is clarity—and clarity feels like freedom.
Somatics, Rhetoric, and Psychology
Courtesan training rests on three foundations. They are distinct, and they do not sit comfortably together.
Somatics is the training of perception. Not posture, and not movement quality. Perception itself: sensation, tension, impulse, imagery, affect—the texture of experience before it hardens into meaning. Whatever appears before you decide what it is or what to do about it. This is the only place where commitments can still be seen before they’re made.
That’s why somatic work often looks inert. There’s nothing to apply and nothing to improve. Attention is allowed to reach what is already present, and much of what once felt necessary quietly dissolves—not because it was suppressed, but because it never survives being clearly seen.
Train somatics in isolation and a predictable failure appears. Real perceptual clarity develops, internal resistance fades—and epistemic restraint goes with it. The person starts making confident claims about the world that don’t survive contact with basic competence. You’ll hear things like “science is just another belief system,” or “reality is whatever you experience, so there’s no objective truth.” These aren’t metaphors. They’re offered as literal explanations, usually with calm certainty and a faint implication that disagreement reflects fear or attachment. What’s gone wrong isn’t sincerity or intelligence; it’s category collapse. The disappearance of inner friction is mistaken for authority. With no felt signal left to mark overreach, the person feels grounded while saying things that are obviously, structurally false.
Rhetoric works at a different layer. It’s the training of how words function as instruments. Not eloquence, not persuasion, not argument. Words activate frames, carve conceptual space, and decide which interpretations are even allowed to exist. Timing matters. Naming matters. Silence matters.
When somatic alignment is absent, rhetoric is brittle. Even correct arguments feel like attacks. Even gentle correction produces resentment. Truth lands as threat. But when somatic alignment is present, those constraints disappear entirely. People will tolerate being led somewhere they did not expect. They will tolerate sharp reframes, public contradiction, even being laughed at—because the body has already decided “friend.” Rhetoric gains extraordinary freedom. Without that grounding, it produces enemies even when it wins.
Psychology sits on a third axis. It’s the training of how people actually behave: status, reassurance, threat, face, incentives. What makes someone comply. What makes them open up. What makes them back down. Trained alone, psychology produces manipulation. Even when it’s subtle, even when it’s well-intentioned, people feel handled. Outcomes happen, but they don’t feel mutual. Compliance occurs, and trust erodes. From the inside, this is baffling: the right moves were made, the right buttons pressed, and yet something soured.
Each of these skills confers real leverage—not abstract leverage, but practical leverage. The kind that shapes what becomes salient, what feels safe, and what never quite starts. And each of them, trained in isolation, produces a predictable kind of damage.
Not an Accident
The combination of all three is not impossible. It does occasionally arise. But when it does, it happens despite the available training paths, not because of them.
What’s rare isn’t compatibility between the people these trainings produce. In fact, they often get along quite well. What’s rare is compatibility between the trainings themselves. Each one, taken seriously, shapes perception, motivation, and behavior in ways that directly interfere with the others. The conflict isn’t social. It’s structural.
Traditions that train deep perception reward dissolution: non-grasping, quiet, the refusal to commit prematurely. Taken seriously, they produce clarity—and a deep suspicion of rhetoric and instrumental social skill. The cost is familiar: people who see clearly and can’t reliably move anything once language enters the room.
Traditions that train rhetoric reward commitment: precision, force, timing, inevitability. Taken seriously, they produce people who can shape meaning cleanly and win arguments decisively, while quietly accumulating enemies they never quite notice.
Traditions that train practical psychology reward understanding of how people actually behave: incentives, reassurance, threat, status, face. Taken seriously, they produce effectiveness. The failure is not that prediction replaces understanding—prediction comes from understanding—but that understanding is applied asymmetrically. One party is seen clearly; the interaction itself is not. The result feels like manipulation even when no deception is intended.
Each path works. Each produces real competence. And each one prunes away the conditions needed for the others. This is why even getting two of these in the same person is unusual. Someone who dissolves meaning rarely wants to practice steering it. Someone who steers meaning fluently rarely tolerates dissolving it. Someone who learns to move people reliably often stops attending to whether those movements are felt as mutual.
None of this is moral failure. It’s structural.
Courtesan training exists to fill the gap left by that structure.
I definitely want to either strong-upvote or strong-downvote this, but I’m not sure which.
For strong-upvote: I believe Rationality in general and LW in particular really really need more tests. And unannounced guerrilla tests are more holistic measures of applied rationality than tests called in advance. And being able to tell whether & to what extent something is slop is a useful skill.
For strong-downvote: Insofar as this is slop, it’s spam. And insofar as it’s not slop (you did set the topic, structure the essay, and decide the result was good enough to be a functional test), it’s teaching us “remember to shun anyone who seems like they’re using AI to help get their points across”, which blocks out quite a lot of potentially valuable testimony from our already-pretty-insular community. And while I 100% believe you planned this as a test, “haha I was just testing you” is a classic dodge up there with “this was all a social experiment”, so it’s kind of bar-lowering to not have pre-registered your test with an independent third party & then revealed that once the game is up.
In conclusion, I give this post two thumbs up, but also two thumbs down.
I want to clarify something: I wasn’t testing whether an AI can generate nonsense and the community will eat it up. I wasn’t posting bad ideas to see if they would be trusted. I was posting good ideas, but presenting them badly, because I wanted to know how well they would be received when packaged this way. This sort of experiment is something I’ve done many times before.
In the past, I measured this phenomenon by writing badly. However, my rhetorical skill has advanced to the point where this method no longer works. To solve this problem, I let AI do the bad writing instead of me.
There are two ways that a post can be slop:
It can be slop at the level of ideas.
It can be slop at the level of sentences.
The core ideas of this post are the fruits of a project I started 15 years ago and have been working on intensely for 9 of those 15 years. The main ideas of this post are the opposite of slop. The “Courtesan” framing device is fictional, and entirely of my own design (not AI’s), and therefore does not constitute slop either.
What is slop is the individual sentence structure.
Insofar as slop means content posted with disregard for truth, I believe this post is precisely the opposite.
As for the spam concern…
Spam
Here’s how I think about spam: Even when I don’t use AI, I feel like >5% of what I write is slop. Not AI. Just really bad human-generated slop. The reason for this is that deliberately trying out weird and risky things has a long-tailed payoff curve that’s solidly positive in the long run, even though it results in some slop in the short run. (Why can’t I just write it and not post? Because posting weird experiments is how I get signal on them.) <1% of what I write is experiments like this one. With ratios this low, the risk of “spam” doesn’t seem significant to me. Honesty, trust, and integrity are what matter to me.
Tests
You are correct to call me out on this. If this post had been well-received, I would have 100% continued using AI. Not in the sense of writing posts with disregard for truth (which I wasn’t doing here in the first place), nor in the sense of producing spam (which I also wasn’t doing here). Rather, I would have offloaded the writing of boilerplate sentences to AI the same way I offload writing boilerplate software to AI.
I already use AI in my posts in the following ways:
Research.
Error checking. (Not error responsibility. It’s just one layer of defense-in-depth.)
Anticipating likely counterarguments.
Checking spelling and sentence flow.
I have not used and do not intend to use AI for:
Abnegating ethical responsibility for the truth or signal of my posts.
This was a test of:
Do I have to write every sentence myself or can I spend many hours talking to an AI first and then have the AI do the drudge work of writing the individual sentences? Would readers prefer this? In 2025, the answer is no. I wasn’t intentionally testing the community’s skill level. It is simply that there is a variable I needed to know the value of, and it produced a Rationality test as an unintended side effect that I acknowledged retroactively.
I have several ideas for proper Rationality tests that I’d like to try out. However, all of them take significant time investment. That’s why efficiency in writing is so important. If I can find efficiency gains, then that frees up bandwidth for things like deliberate Rationality tests.
You want more tests? Here is a test.
lsusr, was this written by AI? BTW, I’ve read a lot of your posts—I think you have some really good ones! One thing I value about your posts is your writing (thinking?) style. I’d describe it as terse and imperative but also Zen and Socratic.
Anyways: I’m pretty sure this post was written by an AI! ‘I know it when I see it’: I can recognize style. Basically every paragraph is very AI-y. Look at these passages:
- “—not searching, not lingering—”
- “Jokes either land or don’t, and nobody rescues them. When there’s a pause, it doesn’t itch. It just waits.”
- “Only afterward do you notice that nothing needed fixing. No apologies. No recalibration. No lingering tension.”
- “There’s no resistance, no awkwardness—just a quiet sense that something was already decided.”
Yup! This was an experiment. I wanted to find out if anyone would notice and call me out on it. Good job!
The high-level structure was mine, but all the individual sentences were written by AI. That’s why there are no jokes. In all of my experiments so far, ChatGPT has been completely unable to emulate my humor style. It can’t even do indirection.
I had not twigged that. I am no longer interested in any sequel.
I also googled “Courtesan training”. Apart from this article, the results were all tacky BDSM fantasies.
I’d ask “why that title?”, but the final sentence is a cliffhanger which I anticipate being continued in a subsequent article.
When googling “Courtesan training”, your post is the second result and the rest doesn’t clarify anything.
Similarly with your taxonomy of “Somatics, Rhetoric, Psychology”: in your last section you mentioned “traditions” that train each of the three skills. What traditions are you referring to? Is this an esoteric Buddhist thing? The only thing that comes to my mind is that maybe pickup artistry somewhat fits with your “Psychology” concept, but other than that,[1] your definitions are too atypical to make searching practicable.
In essence, I’m asking—”Can you point to any prior work/traditions or are you the first person to propose this frame?”
GPT-5.2 suggested Gendlin’s focusing might fit into “Somatics”
Esoteric Buddhist practice is the most powerful tradition I know of for training somatic perception.
Was this sentence also written by AI? In contrast to this other comment, it has the same interesting-if-true and does-not-quite-fit-together-ness as the OP.
From a human, my response would be that this, if true, is something I did not know and would like to hear more about.
From an AI, it’s trash. I have no reason to suppose there is anything behind it, nor, if I asked it to elaborate, would I expect to see anything but more of the same.