Decoupling vs Contextualizing Norms
John Nerst: “To a contextualizer, decouplers’ ability to fence off any threatening implications looks like a lack of empathy for those threatened, while to a decoupler, the contextualizer’s insistence that this isn’t possible looks like naked bias and an inability to think straight”[1].
A cultural divide
A particularly thorny—yet very common—way for a discussion to break down is when participants strongly disagree about the correct scope of a discussion. If neither side is willing to compromise, progress often becomes impossible.
John Nerst identifies a difference in expectations that is particularly prone to causing such issues:
What these norms entail:[2]

| Norm | What it entails |
|---|---|
| Decoupling norms | People have a right to expect the truth of their claims to be considered on their own merits, with no obligation to pay heed to worries about the “broader context” or the “implications” of such speech. Insisting on raising these issues, despite a firm request to consider an idea in isolation, is likely a sign of careless reasoning or an attempt at deflection. |
| Contextualizing norms | Being a responsible actor necessarily involves considering certain contextual factors and implications when deciding what kinds of statements are acceptable. Not taking these factors into account most likely reflects limited awareness, a lack of care, or even deliberate evasion — especially if the speaker ignores an explicit request. |
An example: Even-numbered birth year murderers
Suppose data showed that people born in even-numbered years committed murders at twice the rate of the general population. Can you state this directly and, if so, must you issue a disclaimer?
A decoupler would tend to see it as unreasonable to object to a direct statement of facts. Here’s an example of how someone with this viewpoint might think:
Surely, as a citizen in a free society, I should just be able to state the truth directly? After all, we’re adults. Additionally, we shouldn’t have to issue disclaimers all the time. Compelled speech of this kind makes it hard to speak frankly: it amounts to soft censorship in the short term and risks creating a slippery slope toward harsher censorship in the long term. Furthermore, it impedes the scientific and intellectual progress that has raised both living conditions and moral standards.
However, contextualizers tend to see the situation quite differently. Here’s one possible expression of this:[3]
It would be deeply irresponsible to make statements that risk creating a stigma around even-numbered folk. Besides, is there any point in doing so? After all, you can’t just assume that people born in an even year are criminal by default! At the very least, you should issue a disclaimer to prevent bad actors from using your words to push bad faith narratives. It’s not as if that’s difficult! I’m not demanding that you say anything untrue, just that you exercise prudence with what you say regarding a few particularly charged issues.
A word of caution: Against dogmatism
For both norms, it’s easy to think of situations where insisting on them seems dogmatic. Scott Alexander’s excellent post, 📖 Weak men are superweapons, lays out how true statements can be weaponized to destroy a group’s credibility. If you have good reason to believe that someone is using this strategy against you, with the intent to cause serious harm, it would be shockingly naive to let them force you into strict adherence to decoupling norms.
On the other hand, it’s a very common strategy[4] to frame every disliked action as part of someone’s agenda (neoliberal, cultural Marxist, far-right—take your pick).
Agendas are real, but wielding “universal counter-arguments” is one of the easiest ways to “mindkill” yourself, so I strongly encourage you to be wary here.
My position: Ultimately, it all comes down to wisdom
Contextualizers are correct that it would be rather naive to make certain true statements in a sufficiently charged situation. But what counts as sufficiently charged, and what limitations are reasonable in such a case?
Unfortunately, there isn’t a simple answer here. It would be nice if there were, but I suspect that making the right choice ultimately requires wisdom.
Even if it is best for some conversations not to be maximally public, it still seems important for society’s epistemics to preserve at least some spaces for decoupling-style conversations.[5] Such spaces create sites of resistance against cultural limitations arbitrarily imposed for political advantage rather than to genuinely serve the common good.
❦

🎁 Extras

Executive summary[6]: In short, both decoupling and contextualizing norms have merit, but each also has flaws. Navigating this tension doesn’t come down to rules alone, but requires wisdom. That said, it’s important to preserve at least some spaces for decoupled truth-seeking conversations to maintain a society’s long-term epistemic health.

❦

Key points:

- Decoupling norms: Ideas should be evaluated purely on their merits, without requiring disclaimers or concern for the broader implications of having the discussion—objections to this may be interpreted as political bias or deflection.
- Contextualizing norms: Responsible communication requires considering the possible social or political consequences of the speech act, and ignoring them could be considered naive, careless, or evasive.
- Illustrative example: The claim that “people born in even-numbered years commit more murders” highlights the clash—decouplers defend the right to state facts directly, while contextualizers worry about stigma and how malicious actors might weaponize such statements.
- Cautions against dogmatism: Both approaches can be weaponized—strict decoupling can, in the worst case, mean standing aside while people coordinate towards genocide, while overzealous contextualizing can be used to derail discussions by invoking claims of hidden agendas.
- Author’s stance: Context matters in highly charged situations, but the judgment of what counts as “excessively charged” ultimately requires wisdom rather than fixed rules.
- Importance of decoupling spaces: Even if some discussions should be constrained, preserving decoupled forums is vital for epistemic health and as a safeguard against politically motivated suppression.
📚 Recommended reading — delve deeper
A Deep Dive into the Harris-Klein Controversy—John Nerst’s original (and excellent!) post

❦

Putanumonit—Explores the relationship between decoupling and mistake/conflict theory

❦

Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary—Argues that the real distinction isn’t how much people contextualize, but what they consider to be “relevant context”: “The concept of ‘contextualizing norms’ has the potential to legitimize derailing discussions for arbitrary political reasons by eliding the key question of which contextual concerns are genuinely relevant, thereby conflating legitimate and illegitimate bids for contextualization. Real discussions adhere to what we might call ‘relevance norms’: it is almost universally ‘eminently reasonable to expect certain contextual factors or implications to be addressed.’ Disputes arise over which certain contextual factors those are, not whether context matters at all.”
👏 Acknowledgements — credit where credit is due
Hat tip to prontab for sharing this article. He actually uses low decoupling/high decoupling, but I prefer avoiding double negatives. Both John Nerst and prontab passed up the opportunity to post on this topic here, so I decided to pick up the baton.
1. ^ This quote is slightly edited. It also serves as a TL;DR.
2. ^ Honestly, this is more of a spectrum than a binary. However, it is easier to explain as a binary.
3. ^ I’m sure many people will want to point out that this does not really represent the average view held by contextualizers. Sure, this only represents a more sympathetic contextualizer, but I think that’s perfectly fine, as it makes sense to engage with the most defensible version of a viewpoint.
4. ^ “Strategy” — I don’t mean to imply that it’s always, or even typically, consciously chosen.
5. ^ Eliezer’s Local Validity as a Key to Sanity and Civilization articulates the importance of such conversations well.
6. ^ This recap is a hand-edited version of the SummaryBot output.
💬 Comments

Two years later, the concept of decoupled vs contextualizing has remained an important piece of my vocabulary.
I’m glad both for this distillation of Nerst’s work (removing some of the original political context that might make it more distracting to link to in the middle of an argument), and in particular for the jargon-optimization that followed (“contextualized” is much more intuitive than “low-decoupling”).
This post has been object-level useful, for navigating particular disagreements. (I think in those cases I haven’t brought it up directly myself, but I’ve benefited from a sometimes-heated-discussion having access to the concepts).
I think it’s also been useful at a more meta-level, as one of the concepts in my toolkit that enable me to think higher level thoughts in the domain of group norms and frame disagreements. A recent facebook discussion was delving into a complicated set of differences in norms/expectations, where decoupled/contextualizing seemed to be one of the ingredients but not the entirety. Having the handy shorthand and common referent allowed it to only take up a single working-memory slot while still being able to think about the other complexities at play.
Can you give specific examples? I’ve basically only seen “contextualizing norms” used as a stonewalling tactic, but you’ve probably seen discussions I haven’t.
The most recent example was this facebook thread. I’m hoping over the next week to find some other concrete examples to add to the list, although I think most of the use cases here were in hard-to-find-after-the-fact Facebook threads.
Note that much of the value add here is being able to succinctly talk about the problem, sometimes saying “hey, this is a high-decoupling conversation/space, read this blogpost if you don’t know what that means”.
I don’t think I’ve run into people citing “contextualizing norms” as a reason not to talk about things, although I’ve definitely run into people operating under contextualizing norms in stonewally-ways without having a particular name for it. I’d expect that to change as the jargon becomes more common though, and if you have examples of that happening already that’d be good to know.
(Hmm – okay, I guess it’d make sense if you saw some of our past debates as something like me directly advocating for contextualizing, in a way that seemed harmful to you. I hadn’t been thinking of them through the decoupled/contextualized lens; I’m not quite sure if the lens fits, but it might make sense upon reflection.)
It still seems like having the language here is a clear net benefit though.
If the jargon becomes more common. (The Review Phase hasn’t even started yet!) I wrote a reply explaining in more detail why I don’t like this post.
Cool! I found your new post pretty helpful. Will probably have more thoughts later.
This is one of the major splits I see in norms on LW (the other being Combat vs. Nurture). Having a handy tag for this is quite useful for pointing at a thing without having to grasp to explain it.
My nomination seconds the things that were said in the first paragraphs of Raemon’s nomination.