Tabooing “Frame Control”
“Frame Control” is a colloquial term people have used to describe “Someone is doing something rhetorically fishy that somehow relates to frames.” I think it’s a fairly loaded phrase, and hasn’t really been used consistently. I’m not sure we should actually use the phrase – it seems easy to weaponize in unhelpful ways. But it does seem like it’s getting at something important that I want to understand and talk about.
Aella’s post on the topic focused on particularly abusive dynamics. I think abusive frame control is an important central example. But I think there are many times when “something rhetorically fishy is going on with frames”, and it isn’t particularly abusive but still is worth talking about.
In this post I want to try* and taboo frame control, as well as draw more of a distinction between “the cluster of patterns that is ‘frame control’”, and “the cluster of patterns that is ‘abuse’ and ‘manipulate’.”
*in practice, I still needed to refer to “the gestalt cluster of things that feel centrally ‘frame control-y’” and I didn’t have a better word for that than “frame control” although I tried to mostly put it in quotes.
First, a quick recap on frames.
A frame is a colloquial term for “what someone sees as important, what sort of questions they ask, or what they’re trying to get out of a conversation.” I think it’s often used in a fuzzy metaphorical way, and there are slightly different metaphors people unconsciously use, including picture frames, window frames and frameworks.
John Wentworth explores a more technical approach to frames in his post Shared Frames Are Capital Investments in Coordination. There, he defines a frame as way of conceptualizing a problem or solution space. A frame suggests which types of questions to ask, and which type of answers to look for.
Previously, I’ve discussed how sometimes people have different assumptions about what frame they’re in. The result can be annoying, confused conversations that take years to resolve. Noticing those different frames is an important communication skill.
Okay. So what’s “Frame Control?”
People use “frame control” differently. I assume they all roughly mean, well, “someone is trying to control your frame”. Possibly unconsciously, possibly deliberately, their actions are shaping what sort of questions you’re able to ask and think about, and what you think is important.
But, just as people had originally used the word “frame” in an ambiguous way that led to some confusion, I think people have used the phrase “frame control” inconsistently. I’m about to share my own ontology of “what concepts ‘frame control’ breaks down into.” If you’ve experienced something-you-call-frame-control, you may want to take a moment to think through your own conceptions of it.
(here is you having some space to think through your own experiences and ontology. Feel free to leave your own takes in the comments)
When I reflect on the times something “frame-control-ish” has happened to me, five distinctions that strike me are:
1. Holding a frame, at all – i.e. having a sense of how you’re trying to think or communicate, and what sort of questions or goals you’re trying to address. This is super normal and reasonable.
2. Presenting a frame strongly, such as by speaking confidently/authoritatively (which many people who don’t hold their own frames very strongly sometimes find disorienting).
3. Persistently insisting on a frame, such that when someone tries to say/imply ‘hey, my frame is X’ you’re like ‘no, the frame is Y’. And if they’re like ‘no, it’s X’ you just keep talking in frame Y and make it socially awkward to communicate in frame X.
4. Frame manipulation, where you change someone else’s frame in a subtle way without them noticing – i.e. presenting a set of assumptions in a way that isn’t natural to question, or equivocating on definitions of words in ways that change what sort of questions to think about, without people noticing you’ve done so. The literal framing-effect literature is relevant here.
5. Frame coercion/threat, where you imply (maybe through body language) that if someone doesn’t accept your frame, you will do something bad. Maybe you’ll make them look dumb in front of their friends, maybe you’ll ostracize them from the group. The line between this and merely having a strong frame can be blurry.
#2, #3, #4, and #5 can be mixed and matched.
The places where people tend to use the word ‘frame control’ seem to most often refer to #3 and #4, frame-insistence and frame-manipulation.
I’m a bit confused about how to think about ‘strong frames’ – I think there’s nothing inherently wrong with them, but if Alicia is ‘weaker willed’ than Brandon, she may end up adopting his frame in ways that subtly hurt her. This isn’t that different from, like, some people being physically bigger and more likely to accidentally hurt a smaller person. I wouldn’t want society to punish people for happening-to-be-big, but it feels useful to at least notice ‘bigness privilege’ sometimes.
That said, strongly held frames that are also manipulative or insistent can be pretty hard for many people to be resilient against, and I think it’s worth noticing that.
Frame Control vs Manipulativeness vs Abuse
Aella wrote a post on frame control that was focused on a particular abusive version of it, where someone is systematically manipulating what you can think about on the order of days/months/years, in a way that has really harmful longterm effects. That’s a very bad thing that can happen. But I think many aspects of frame control are subtler and basically fine in small doses. For comparison: lightly tapping someone on the shoulder is fine, and for many people some rough-and-tumble backslaps or roughhousing can be healthy and good, but violently assaulting someone is generally quite bad. Analogously, mild versions of frame control can be part of a mundane interaction, but more extreme versions can be harmful.
Meanwhile there are tons of ways you can be manipulative or abusive without being frame controlling. If someone lies to you, or threatens you, or convinces you to become financially dependent on them, that can fuck with you in a way that isn’t (necessarily) “frame-control-y”.
But, I think the reason that “frame control” comes up as a concept is that it can be pretty mindfucky. In a technical, information-flow sense, frame control can distort your ability to process information. But in a phenomenological and psychological sense, frame control can feel disorienting, or nauseating, or leave you feeling trapped/confused.
Examples to explore
“Ray, this is all super abstract. WTF do any of these words actually look like in the real world?”
Yeah, fair. Let’s dig into some examples. All of these deal with some manner of “something a bit iffy relating to frames is going on”. Some might feel more “frame control-y” than others. Some might feel coercive or abusive and some might not.
Note: in these examples I refer to “you”. “You” is a character that changes from example to example. I use it to convey what an experience feels like from the inside and encourage empathy, but I expect you won’t necessarily resonate with all the examples.
i. Your really opinionated colleague
He always talks loudly and confidently about libertarianism and cryptocurrency, and he’ll often respond to conversations with a lens of: “Of course the problem here is [coordination failure] [lack of autonomy] [authority figures throwing their weight around].”
Note that the thing that makes this about frames is not the topic of libertarianism or bitcoin, but the underlying generator for why he thinks libertarianism and bitcoin are important (i.e. tending to see problems as coordination failures which should be solved with technology).
He’ll drop the subject if you ask, and is capable of listening to you when you prompt him to, but he keeps gravitating towards his frame in a confident tone of voice that makes it feel awkward to disagree with.
This person has a strong frame, and is somewhat insistent about it, but is not manipulative.
ii. Your annoying stereotypical mother-in-law
She’s fairly soft-spoken, but always talks through the same set of assumptions and keeps returning the conversation to the same topics. She asks “Why haven’t you had kids yet?”, and notes “You should buy a house in this neighborhood!”, when you’ve made it clear you and your partner prioritize your careers over kids and don’t care about the things that make that neighborhood good. She’ll do this even when literally two sentences ago you spelled it out for her.
This person doesn’t present her frame very strongly. And she’s kind of socially clumsy so she’s not very manipulative (although maybe she’s trying to be?). But she sure is insistent, and if you’re not paying attention you might drift into thinking the way she’s pushing towards.
iii. Your well-meaning friend who always jumps in with her frame before you have time to think
She doesn’t have any particular frame she insists on, and she’s not manipulative about it. But she has a lot of individual frames that seem important/clear to her, and she’s quick to decide on new frames when she encounters a situation. Her conversational style moves very quickly, and she’s not very good at listening.
She might have good advice, but depending on your psychology she might be disorienting to talk to, and might cause you to unconsciously adopt her frames without noticing.
iv. The manipulative guru
Previous examples were fairly mild. Let’s look at a more central example of “frame control.”
The guru has the answer to everything. They have a worldview that seems at least somewhat compelling to you, which seems helpful for making sense of things. They point out lots of things wrong with your other relationships and employment situation and the world at large.
Whenever you notice something off about the guru’s arguments, they immediately have an answer. The answer doesn’t always quite feel right to you, but they speak confidently and reassuringly. At first maybe you try to argue with them about it. But over time, a) you find yourself not bothering to argue with them, b) even when you do argue with them, they’re the ones choosing the terms of the argument. If they think X is important, you find yourself focused on arguing whether-or-not X is true, and ignoring all the different Ys and Zs that maybe you should have been thinking about.
Over time, some things start to give you a bad feeling. It feels like the guru is making a mistake somewhere, but it’s hard for you to pin down.
Years later looking back, you might notice that they always changed the topic, or used various logical fallacies/equivocations, or took some assumptions for granted without ever explaining them.
The guru presents their frame strongly, persistently, and manipulatively.
Meanwhile the guru might also be doing a cluster of non-frame-control things. When they argue with you, they imply (maybe in a kind but firm voice, maybe with an undertone of social threat) that you’re kinda stupid for disagreeing with them. It’s clear they might stop inviting you to their social scene, which had been providing a lot of meaning in your life. Maybe you’ve let other friendships atrophy (in part because the guru argued those friendships were bad for you), such that if you stopped getting invited you’d feel very alone.
The guru can be a literal cult leader, who systematically cut off your social ties. Or they could be a fairly ordinary charismatic leader of a friend group, and you ended up in this position sort of unintentionally.
The guru can leave you deeply confused and maybe scarred. Some of their actions make sense to me to characterize as “frame control”, and some do not.
v. The weak, distorted friend
The guru had an undertone of competence and strength. But one of the most important lessons I’ve learned is that frame distortions can also come from weak friends who seem helpless and alone.
The weak friend is depressed, and you’re one of the few people they turn to. They’re clearly traumatized by something that’s left their thinking kind of distorted; they’re constantly talking about ways society has hurt them. Their thinking is so obviously distorted, and they seem pathetic enough, that you don’t take it seriously.
They have a lot of good qualities. You’re sure that if they climbed out of their depressive spiral they’d be a good friend who was helpful to your shared community. So, you put a lot of effort into helping them.
But the weak friend has a strongly held frame, and they persistently insist on it. When you try to talk them out of it, they respond with distorted frames and subtle errors that you don’t always catch. You frequently try to engage them in their frame, to try to make one-small-point (“Okay, I can maybe see your case for A, B and C, but those don’t necessarily imply D”). The result is that you find yourself naturally thinking in their frame even though you think it’s clearly coming from trauma and isn’t very accurate or useful. And like the guru, you find yourself thinking about propositions A, B, C and D instead of L, M, N, O, P when those would be more helpful.
The weak friend is genuinely exactly what they appear to be: a hurt person who desperately needs help and maybe someday could climb out of their depression pit. Nonetheless, the weak, distorted friend can fuck with you just as badly as the guru. You end up spending years of your life trying to help them, and it’s never enough. Your thinking can get distorted and less helpful.
Sometimes the line between the weak friend and the guru is blurry. “The guru” brings to mind a charismatic leader. The weak friend could assemble a small friend group who are all trying to help them and gravitate around them, while seeming extremely uncharismatic. The dynamics can all be similar.
vi. The boss with a strong (mostly healthy) vision
You work for a company with a vision, with a leader who has a clear conception of what they want to accomplish in the world, and why. They have a strong frame that guides all their decisions, and by proxy, it guides all your projects and how you get evaluated at the company.
They are not inordinately manipulative, but like most humans, they sometimes accidentally equivocate, draw false dichotomies, etc. Their strong, persistent frame, combined with baseline amounts of frame manipulation, adds up to you feeling a bit disoriented and finding it hard to think.
This… isn’t necessarily wrong—shared frames are capital investments in coordination, after all. The strong shared frame allows a team of people to work on a difficult task, with a clear understanding of how all their work fits together. If the boss also encourages a healthy work-life balance, and they listen when you periodically bring up alternative views (although it sometimes takes a bit of effort to get them to realize you’re debating their frame), this can avoid the extremes of how The Guru might have affected you.
Nonetheless, this can affect how you think in a longterm way, which persists after you leave the company. In some sense, you might think of “getting a bit frame controlled” here as one of the things you are getting paid for. The boss’s frame isn’t crazy, just tunnel-visioned, so this isn’t exactly harmful. But it maybe changes the sorts of decisions you might make for years to come.
vii. A pervasive standard of beauty and meaning
No one person is in charge or even necessarily doing this on purpose. But, advertisements, books and movies throughout your entire life have told you to be attractive in particular ways, to pursue particular kinds of relationships, to grow up, get married, have a big wedding, have 2.1 kids with a white picket fence, work 9-5, retire, etc.
Society shapes what sort of thoughts that are natural to think. What sort of questions and goals you’ll consider.
Maybe all your friends in your social group agree this is kinda silly… but you were all raised in the same culture. And even though you don’t endorse it, you might find yourselves all giving each other a little bit of side-eye if you’re gaining weight, or have been single too long, or are working a low-status job.
The Social Frame is persistent, and while it’s not overwhelmingly manipulative, it’s propagated by people who are still baseline-human-manipulative (which is, again, nonzero). This adds up to some lifelong distortions that many people may find hard to disentangle.
But… what do you think?
I was awkwardly conscious while writing this that… I sure had a strong frame about how frame control works. And each of my examples has a bit of an undertone of “what Raemon thinks was important”, beyond merely illustrating the example.
I have more thoughts about what frame control looks like, and what sorts of things are useful for countering frame control. But given the nature of the topic, it’d feel particularly ironic if I rushed to propose solutions before giving people time to think about the problem, do original seeing on it, and reflect on their own experiences.
So, let’s stop here for now. What have your experiences been like? What seems important about this to you?
Most-but-not-all of these were linked earlier, but for convenience:
Noticing Frame Differences, by me.
Shared Frames Are Capital Investments in Coordination, by John Wentworth.
Smuggled frames, by Artyom Kazak.
Frame Control, by Aella
In addition to the post there are many good comments. I wrote an overview-comment summarizing my favorites here.
Cup-Stacking Skills (or, Reflexive Involuntary Mental Motions) by Duncan Sabien
(this is about a phenomenon that is mostly unrelated to frame control, but one of the examples of that phenomenon focuses on a kind of frame control, and some applications thereof.)
proto essay on defense against strong frames and false n-chotomies by Logan Strohl.
Logan Strohl wrote a good shortform post about how listening to someone else’s ontology about an experience can make it harder to think about it. This seems particularly important (or at least, particularly ironic to cause?) in the case of discussing Frame Control.
One of the ways this whole discussion feels frame controlled is that it implies an axis that ranges from “doesn’t frame control” to “frame controls a lot,” with roles of dominance and submission or victimization being most salient.
My experience of social life is a series of frame contests, frame improv, frame play, frame ignoring, frame neglect, frame fights, frame pandering, frame curiosity. Frames can establish a hierarchy, and that’s an important function, but I feel like the subtle implication here is that people are just going around being unwitting victims of the dominant frames of dominant people—an implication of a lot of inauthenticity, of some problematic psychology that needs to be resisted.
What I think is problematic is that some people are able to make genuine threats to get their way, enforcing compliance with their values and language and preferences and norms without the other person feeling as though they’ve consented. They intimidate others, and if they are seen by the group to have succeeded, it becomes common knowledge that they call the shots. Frame control is the result, but it’s straightforward intimidation and threats that were the original sin in this case. We see that with the guru as well as the weak friend.
One of my main points here is that I think we probably should call threatening behavior “threatening” and maybe “coercive” or “abusive” or whatever seems appropriate for the situation, and only use the phrase ‘frame control’ when the most relevant thing is that someone is controlling a frame. (And, maybe, even then try to say a more specific thing about what they’re doing, if you can)
And meanwhile, talk about “frame-whatever” when you’re talking about frame-whatever, whether that be frame-manipulation, or frame-curiosity.
I actually think the thing that makes frame control a particularly important component of abuse or coercion is that it’s one of the tools that lets an abuser make it ambiguous whether someone consented. Where Alice ends up feeling like she sort of consented, but something feels wrong about it and she doesn’t have good words for it and there’s a bunch of social pressure, and she ends up thinking ‘well… I guess I consented so I have to do this now...?’ but feels sick in her stomach about it.
(I think there are also ways to invoke this effect without frame manipulation. I’m not sure if there’s a good name for the generalized effect. ‘Manipulated consent?’ ‘Ambiguous consent?’)
This exact implication isn’t frame control, but the common thing I’ve seen gurus do that is more subtle is assert why you disagree with them in a way that reinforces their frame.
“Kinda stupid” is overly crude, and might be spotted and feel off even among those who believe in them, but implying you just don’t “get” what they’re saying because you’re unenlightened or not ready for it is very effective at quieting dissent and maintaining their control.
In general this is why I dislike any attempts to assert with confidence what someone thinks or feels, as well as why. I may be one of the only therapists who hates psychoanalysis, but I maintain that it’s almost always a bad thing to do to anyone who isn’t inviting it, and sometimes even then.
Accusations of frame control look like an example of this.
That sentence could be accused of being another example.
As could that one.
And so on.
Even that one too.
I agree that “asserting what someone is doing” can also be considered frame control or manipulation. But I think it’s much less often so, or much less dark artsy, because it’s referencing observable behavior rather than unverifiable/unfalsifiable elements.
One response to frame-control-y situations is, instead of making accusations that (as you say) can lead to a he-said-she-said situation, to personally fall back to a more careful, defensive posture vis-à-vis framing – accepting that there seem to be strong framing differences among the people here, and communicating this posture to others. In other words, accepting when it seems too hard to directly create common knowledge about what is happening at the level of framing.
to give a specific example of guru frame control: Several sources on the cult NXIVM describe the “NXIVM flip”. Whenever someone brought up a complaint they weren’t merely told “that’s not true” or “that’s actually good”, they were told “the fact that you are bringing this up indicates a flaw in you” (and then they were punished for it, but I don’t think that’s required for it to be frame control). The frame control was in insisting that all complaints were facts about the complainer and not the thing they were complaining about.
I don’t think the things raemon describes are necessarily frame control. They’re broad descriptors that include frame control but also other forms of manipulation. Elsewhere he has said he didn’t mean to claim it was frame control, so seems like we’re on the same page.
Yeah this variant does feel more like explicit frame control (I think “frame manipulation”, although it feels like it strains a bit with the cluster I’d originally been thinking of when I described it)
A thing that occurs to me, as I started engaging with some comments here as well as on a FB thread about this:
Coercion/Abuse/Manipulation/Gaslighting* often feel traumatic and triggering, which makes talking about them hard.
One of the particular problems with manipulation is that it’s deliberately hard to talk about or find words to explain what’s wrong about it. (If you could easily point to the manipulation, it wouldn’t be very successful manipulation.) Successful manipulators tailor their manipulations towards precisely the areas where their marks don’t have the ability to see clearly or explain clearly what happened.
A particularly bad-feeling thing, that I’ve experienced when I’ve felt gaslit, and that other people have experienced from me when they felt gaslit, is: you try to explain what happened and why you’re upset, and people respond by questioning everything you say and nitpicking your phrasing, in a way that’s sort of demanding you be fair when you’re still confused about what exactly went wrong. And it feels really invalidating and alienating at a time when you’re maybe doubting your own sanity because you were literally manipulated into doubting your own sanity.
(and sometimes when this happens you are the crazy one, but the point here is that it’s still a pretty awful feeling experience, and from inside it’s not clear whether you’re the crazy one)
I’ve had a really hard time figuring out how to respond to this when I’m the one asking the questions – often the person-who-feels-gaslit is making some kinda overreactive, unfair claims. I struggle sometimes with how to validate their general sense of self and respect that there is something real they are trying to work through, without necessarily agreeing with all of their frame in the process.
So… recapping, relevant here because a) this post is about frame control, and trying to draw better distinctions around it. b) the reason frame control is an important concept is largely because of how it relates to coercion, manipulation, abuse, etc. c) people discussing object level versions of that are likely to be triggered...
...and one of the things I’m doing with this post is trying to taboo some words, and make some distinctions, and potentially say “okay, this thing that happened maybe doesn’t make sense to call frame control, maybe it makes sense to call it X, maybe it makes sense to call it Y”.
And to a person who is in the middle of discussing something that was maybe traumatic that they haven’t quite worked through, having someone argue about what-exactly-to-call-the-experiences-they had may end up feeling like exactly the sort of pedantic invalidation that can be extra bad feeling.
(I don’t know that this has happened yet, but it seemed like it might happen suddenly)
So, uh, for now, just warning people to keep an eye out for this dynamic.
Meanwhile, I do want to say “even if I’m trying to do some original seeing on ‘what even is frame control’ and trying to figure out precise language for it”, I still want to reaffirm that if something happened to you that felt really bad, like, I agree that something bad happened, whatever words turn out to be right for describing it and whatever the exact causation turns out to be.
This sounds like when you have a pre-verbal understanding (felt sense) of something, and people are like: “if you cannot immediately translate it to legible words, it is not legit”. Problem is, even if you do your best to translate it to words immediately, those words will most likely be wrong somehow. Pointing out the problems with the (prematurely chosen) words will then be used to dismiss the feeling as a signal.
You still know that the feeling is a signal of something, but under such circumstances it becomes impossible to figure out what exactly.
The nice thing would be instead to listen, and maybe collaborate on finding the words, which is an iterative process of someone proposing the words, and you providing feedback on what fits and what does not.
Yeah, basically agreed that this is what’s going on.
I agree that listening in a collaborative way is a good thing to do when you have a friend/colleague in this situation.
I’m not sure what to do in the context of this post, if the problem comes up organically. The collaborative listening thing seems to work best in a two-person pair, not an internet forum. I guess “wait for it to come up” is fine.
You are taking a university course on classical mechanics. The lecturer talks about how objects move, without reference to the emotions of people around them or what spirits think. The answers to the questions are always “it’s this simple equation”, and the methods are always differential equations. The relevant systems studied are always of a small number of objects of known mass and size, with a small number of interactions. It is strongly implied that this is relevant to how the things around you move, even though you have not studied friction or non-rigid objects or air resistance. There are assignments and tests, and it is implied that you should be trying to get good scores on these.
I think this is an interesting example of deliberate / consensual frame impartation as well as implicitly smuggling in some frame. I also think it’s worth noting that this is basically good and wholesome (altho one should be on the look-out for ways this frame can fail).
Something I like about this is that “without reference to the emotions of people around them” is actually legitimately a contender for “meaningful frame.” Like, cars move because people decide to drive them, soil gets moved around because humans wanted a nicer landscaping, dams get built because beavers decided to do it.
Eventually Jupiter might get disassembled because a powerful AI decided to. This will not necessarily route through emotions, but “the will and agency of goal-directed beings” is more like “emotions of people around them” than “because simple laws of physics said so”, and it’s interesting how either frame might be more relevant depending on what conversation you’re trying to have or thing you’re trying to figure out.
Probably some students will actually be quite bothered by this and be left with lingering, subtle confusion and discomfort. It is, in a sense, taking a shortcut past all the objections and alternatives that real humans had historically to these ideas. And IMO some students will be much better served by going the long way around, studying the ideas along with their history.
There’s also some kind of thing about “when is it okay to just have a frame and not particularly try to make space for other frames, and when isn’t it”. I think “in your own blog post” is probably a place where it’s basically fine to just have/present your own frame (ditto for, like, a song); in contrast to a conversation with another person where it’s supposed to be a collaborative thing and instead one person kinda sets the frame. Though I guess there are sometimes blog posts that strike me as excessively stuck in one frame and/or exert pressure to fall in line with that frame—just, the threshold for that is maybe higher than for behavior in conversations.
This line makes me realize I was missing one subcomponent of frame control. We have
Persistent Insistent Frames
Manipulating frames (i.e. tricking people into adopting a new frame)
But then there’s “pressure/threaten someone into adopting a frame”. The line between pressure and “merely expressing confidence” might feel blurry in some cases, but the difference is intended to be “there’s an implication that if you don’t adopt the frame, you will be socially punished”.
Interesting, I was thinking of that as basically in the same category as “persistent insistent frames”!
I’m not very active on LW and don’t really know how people here use the term “frame”, but this is not at all how I’d define it, personally. To me, an important part about a frame (as I understand and use the term) is that much of a frame is implicit and needs to be inferred. It’s a set of assumptions baked into communication, either/both about the content of the conversation as well as about the terms of the conversation itself. These assumptions may include statements about what roles each person is to take, what certain words mean, what the purpose of the conversation is, what is to be taken for granted, what should be paid attention to, what should be ignored, and so on. Some of these assumptions may of course be made explicit in the conversation.
A lot of the time, people pick up on each other’s frames and do some sort of implicit/explicit navigating to figure out what to do: negotiate, compromise, agree to disagree, ignore differences, fight about it, etc. Sometimes one person consciously or unconsciously submits completely to the other person’s frame. Sometimes no negotiation seems to happen at all and people end up talking past each other.
Off the cuff, I’d define “frame control” as deceptive manipulation of another person’s frame (or, some agreed-upon “shared frame”) to some selfish end. Often this will include subtly introducing assumptions, deflecting attention, verbally saying one thing while nonverbally saying another, etc.
Hmm, I definitely meant my definition of frame to include (and primarily consist) of inferred things rather than explicit things. I’m not sure whether there are other major differences between our uses of the term, but fwiw when I read your comment I mostly thought “hmm, that’s basically what I meant to convey about what-a-frame-is”.
But, last time I had a convo like this, the person was meaning something subtly different than what I was meaning. Can you say some more words about why your definition of frame feels different from what I said in this post?
The main thing that sticks out at-a-glance is you emphasize “what role people play in a convo or interaction”, which I meant to imply at least somewhat by “what you’re trying to get out of a conversation”.
I would describe “frames” like this: Reality has a huge amount of detail. In order to communicate about reality, we need to choose some abstractions, which means that we focus on some things, and ignore other things. Frame is the choice of what to see and what not to see.
On one hand frames are inevitable—you can never describe the full details of reality. On the other hand, sometimes they are abused, and sometimes they are not.
Abuse is when you choose the abstractions in such a way that your goals become clearly visible, and the other person’s goals become impossible to communicate. Like when you create a dilemma where the only options are “the other person does what I want” or “bad things happen”, and you describe the world as if these are the only possible options; and you refuse to consider the option “the other person does what they want, and I calm down”. Not just rejecting this option, but describing the world in a way where this option does not even exist.
Like the game of chicken, where the only options are “the other person swerves” or “we crash and both die”. But instead of removing your steering wheel physically, you do it on the conversational level, by refusing to admit that there is such a thing as your steering wheel. There is only the other person’s steering wheel, and their choice between life and death.
Non-abusive use of frames is collaborative, where both partners can introduce abstractions, and see how they intersect. Where one player says “well, either you swerve, or not”, but the other player says “but also, either you swerve, or not”, and suddenly there are four options to explore.
Comparing with what you wrote: option 3 is openly denying the other person’s frame; option 2 sounds like kinda accepting but then conveniently forgetting the other person’s frame (hoping that the other person forgets it too); and option 4 is replacing the other person’s frame with a strawman version. Three different strategies for refusing the other person’s perspective; the same goal.
The “manipulative guru” example seems bad/confused. It seems like the culpably bad things about that scenario are only and precisely all of the things that aren’t “frame control”, i.e. all of this is clearly bad:
But all of this is clearly neutral:
This is exactly what you’d expect if someone has been seriously thinking about a topic (and discussing it with other intelligent people) for some time.
Should this “guru” pretend to be less confident than they are?
Whose fault is that, exactly…?
The adjective “manipulatively” here seems like it is not justified by the preceding description.
My objections to this example are similar to my objections to Aella’s post—namely, that it “lumps together obviously outright abusive behaviors with normal, unproblematic things that normal people do every day, and then declares this heterogeneous lump to be A Bad Thing”.
I maintain my overall objections to the entire concept of “frame control”.
FYI, I updated this post somewhat in response to some of your comments here (as well as some other commenters in other venues like FB and my workplace slack). The current set of updates is fairly small (adding a couple sentences and changing wordings). But there’s a higher level problem that I think requires reworking the post significantly. I’m probably just going to write a followup post optimized a bit differently.
In this post I was deliberately trying not to be too opinionated about which things “count as frame control”, “is frame control bad?” or whatnot. But a number of people either misinterpreted what I was saying, or just felt lost about what my thesis was.
A line that was originally in the post, which I removed during an editing pass and then added back in response to your comments, was:
Which (I think?) was precisely the thing you were worried that this whole reification of frame control was pointed at. Part of the point of this post is that I disagreed with Aella’s framing, and don’t want to accidentally create a giant blob of stuff that gets vaguely tarred by “sometimes abusive people use this, so maybe it’s always Real Bad?”.
I changed the title to “Taboo ‘Frame Control’”, hoping to point more clearly in that direction.
I wrote the examples fairly quickly, and deliberately didn’t specify which things I thought were blameworthy in them (aiming to present them as more ‘raw data’ than ‘here’s a takeaway’), but, it does seem like a reasonable inference that if I’m bringing stuff up I maybe think it’s blameworthy, and to be on the lookout for that.
At the end of the day, my understanding is that you don’t really think frames are a useful concept in the first place, so I assume any analysis built on top of frames also won’t seem useful to you. So, I’m not really expecting there to be a version of this post you’d find satisfying, but I do hope to at least avoid the particular failure modes you seem most worried about.
This comment and (the last two paragraphs of) this comment may clarify my view on the matter somewhat.
Well, quite frankly, I think that the version of this post that I’d find most satisfying is one that actually tabooed “frames” and “frame control”, while attempting to analyze what it is that motivates people to talk about such things as these discussions of “frame control” tend to describe (in the spirit of “dissolving questions” by asking what algorithm generates the question, rather than taking the question’s assumptions for granted).
Indeed, I found myself sufficiently impatient to read such a post that I wrote it myself…
I remain unconvinced that there’s anything further worth saying about any of this that wouldn’t be best said by discarding the entire concept of “frame control”, and possibly even “frames”, starting from scratch, and seeing if there remains any motivation to say anything.
So, in that sense, yes, I think your characterization is more or less correct.
Yeah, I do think writing a post that actually-tabooed-frame-control would be good. (The historical reason this post doesn’t do that is in large part because I initially wrote a different post, called “Distinctions in Frame Control”, realized that post didn’t quite have enough of a purpose, clarified my goal at the last minute, and then hastily retrofitted the post to make it work.)
FWIW I did quite appreciate that comment. I may have more to say about it later, but regardless, I thought it was a good exercise I found helpful to think about.
>Whose fault is that, exactly…?
I agree that nothing about the examples you quote is unacceptably bad – all these things are “socially permissible.”
At the same time, your “Whose fault is that, exactly...?” makes it seem like there’s nothing the guru in question could be doing differently. That’s false.
Sure, some people are okay with seeing all social interactions as something where everyone is in it for themselves. However, in close(r) relationship contexts (e.g. friendships, romantic relationships, probably also spiritual mentoring from a guru?), many operate on the assumption that people care about each other and want to preserve each other’s agency and help each other flourish. In that context, it’s perfectly okay to have an expectation that others will (1) help me notice and speak up if something doesn’t quite feel right to me (as opposed to keeping quiet) and (2) help me arrive at informed/balanced views after carefully considering alternatives, as opposed to only presenting the argument on their own terms.
If the guru never says “I care about you as a person,” he’s fine to operate as he does. But once he starts to reassure his followers that he always has their best interest in mind – that’s when he crosses the line into immoral, exploitative behavior.
You can’t have it both ways. If your answer to people getting hurt is always “well, whose fault was that?” – then don’t ever fucking reassure them that you care about them!
In reality, I’m pretty sure “gurus” almost always go to great lengths to convince their followers that they care more about them than almost anyone else does. That’s where things become indefensible.
Well, for one thing, I don’t see any of this “I care about you as a person” stuff in the OP’s description of the scenario. Maybe we can assume that, just on the basis of the term “guru”? I have no strong feelings about this, I suppose.
More importantly, though—what does caring about someone have to do with them “not bothering to argue” with you? Likewise “choosing the terms of the argument”, likewise “ignoring [things] you should have been thinking about”. Caring about someone does not mean taking upon yourself their responsibility to think for themselves!
The intended justification is the previous sentence:
I’m surprised you don’t consider that sort of thing manipulative. Do you not?
I didn’t call attention to this in the grandparent comment, but: note that I used the phrase “culpably bad” (instead of simply “bad”) deliberately.
Of course it’s bad to commit logical fallacies, to equivocate, etc. As a matter of epistemic rationality, these things are clearly mistakes! Likewise, as a pragmatic matter, failing to properly explain assumptions means that you will probably fail to create in your interlocutors a full and robust understanding of your ideas.
But to call these things “manipulative”, you’ve got to establish something more than just “imperfect epistemic rationality”, “sub-optimal pedagogy”, etc. You’ve got to have some sort of intent to mislead or control, perhaps; or some nefarious goal; or some deliberate effort to avoid one’s ideas being challenged; or—something, at any rate. By itself, none of this is “manipulation”!
Now, the closest you get to that is the bit about “they always changed the topic”. That seems like it probably has to be deliberate… doesn’t it? Well, it’s a clearly visible red flag, anyway. But… is this all that’s there?
I suspect that what you’re trying to get at is something like: “having noticed a red flag or two, you paid careful attention to the guru’s words and actions, now with a skeptical mindset; and soon enough it became clear to you that the ‘imperfections of reasoning’ could not have been innocent, the patterns of epistemic irrationality could not have been accidents, the ‘honest mistakes’ were not honest at all; and on the whole, the guy was clearly an operator, not a sincere truth-seeker”.
And that’s common enough (sadly), and certainly very important to learn how to notice. But what identifies these sorts of situations as such is the actual, specific patterns of behavior (like, for instance, “you correct the guru on something and they accept your correction, but then the next day they say the same wrong things to other people, acting as if their conversation with you never happened”).
You can’t get there by gesturing vaguely at high-level, ubiquitous features of someone’s thinking like “they commit logical fallacies sometimes”. And you certainly can’t get there by entirely misleading heuristics like “you ask someone questions about their ideas, and they have answers”!
Related: In Defense of Punch Bug.
A heuristic I have is that identifying your frame and spelling it out as such, with cruxes, is epistemically cooperative and reduces frame control (although like everything else it can be misused). I think a lot of what happens with frame control is that Alice sets the terms of a conversation in ways that stifle Bob’s ability to notice or negotiate on terms.
Yeah. I had a goal with the “Keep your beliefs cruxy and your frames explicit” sequence to eventually suggest people do this for this reason (among others), but hadn’t gotten around to that yet. I guess this new post is maybe building towards a post on that.
It’s also hard because, as you note elsewhere, demanding explicitness can be its own form of invalidation and disempowerment.
Sometimes we find ourselves in the situation of wanting to prevent some bad thing X, which, however, is difficult to reliably identify/track in any given case, or hard to specify precisely, or impossible to detect until it happens (and so bad that we would like to prevent it and not merely punish it after the fact), or otherwise not amenable to simply making, and effectively enforcing, a clear rule against X. So, we instead ban/punish/discourage Y, which is a correlate of X, and is much easier to specify/identify/track; Y is not directly bad, but preventing Y (which we can do much more easily) lets us in effect prevent X.
The possibility of such a solution relies on the existence of a suitable Y, which is (a) sufficiently well correlated with X that the costs we incur to enforce the rule against Y are justified by the preventative effect on X, and (b) not itself so good or desirable that the cure (banning Y) is not worse than the disease (allowing X to exist/continue).
In this case we have, ostensibly, some manipulative, deceptive, and generally nefarious persons, who, if left unchecked, engage in various manipulations and deceptions and so on, causing harm to individuals and the whole community and its goals etc.
We wish to thwart these malefactors. But X (the harmful behaviors and their effects) is very hard to specify precisely or identify reliably. We naturally seek for some correlate Y, which is easier to specify and identify, and which we can ban, punish, and otherwise discourage, thus effectively preventing X.
But by construction, we are dealing with people who have every incentive not to be thwarted; and in particular, they have the incentive to adapt and modulate their behaviors so as to de-correlate them from any suitable (i.e., harmless-to-ban) Y. Indeed the ideal scenario (for the malefactors!) is one where the only features Y of observable behavior which are highly correlated with the bad behaviors X are those which cannot be banned without doing more harm than letting X continue—because they are inherently desirable, and/or exhibited by those upon whom the community bestows, and wishes to bestow, high status.
This, finally, is how we come to learn that, e.g., having answers to questions people ask about your ideas is bad (or a bad sign or “red flag”); likewise being independent-minded and not seeking the approval of others; etc.
Yet this outcome also happens to be beneficial to those who wish to make “status plays” by attacking deservedly high-status members of the community, resulting in a sort of “baptists and bootleggers coalition” between those who want to prevent X (and thus are inconvenienced by the desirability of Y) and those who want to reduce the desirability of Y (and thus its status-bestowing power).
Anyone opposing such measures finds himself in a bind: agreeing that any of the given Y is bad (or at least not all that great, perhaps not so valuable that it can’t be sacrificed) seems both intrinsically terrible (and likely to result in bad consequences for the community and its goals) and also (if he himself exhibits behaviors/qualities Y) likely to reduce his own status. But arguing for the desirability of Y can be tarred as obstruction of the efforts to prevent X—which in fact it is (see the last paragraph of the previous section!), though of course that’s hardly the intent…
Examples: we wish to prevent stabbings, so we ban switchblades (on the theory that only those who plan to stab people will want a switchblade, though switchblades themselves are no more dangerous than any other kind of knife); we wish to prevent money laundering and other fraud, so we prohibit having a bank account under an assumed name (because having bank accounts under fake names makes it easier to do fraud, even though by itself it’s harmless); we wish to prevent reckless and dangerous driving, so we measure drivers’ blood alcohol level if they’re stopped by the cops for any reason (even though we don’t directly care about how drunk a driver is, only how badly he drives, for any reason).
The concept of “appearance of impropriety” is related.
That is, those (a substantial part of) whose high status comes from their exhibiting highly-valued behaviors Y.
Such as, for instance, advocates of strong encryption features in personal computing devices. After all, if you want your phone to be impervious to hacking by law enforcement, that really is evidence that you’re a criminal! And such features genuinely make it harder for well-meaning police to catch real bad guys. Of course, they also make it harder for civil-rights-violating shadowy government agencies to oppress and control honest citizens.
I had a discussion on Facebook about this post, where someone felt my examples seemed pointed at a different definition of frame control than theirs. After some back-and-forth and some confusion on my part, it seemed like their conception of frame control was something more like ‘someone is trying to control you, and they happen to be using frames to do it’, whereas my conception here was more like ‘someone is trying to control your frame.’
I’m not actually sure how different these turn out to be in practice. If someone is controlling your frame, they’re also controlling what thoughts you can most easily think, which is also controlling your actions. But I think there’s something of a difference between “someone’s goal is to change you” vs “someone’s goal is to have a comfortable frame for them”. It’s plausible to me that people can viscerally feel the difference, and the variants of frame control that feel particularly unsettling are the ones where it’s palpable that they’re optimizing to control you.
If it turns out we may need to talk separately about “controlling someone (with frames)” and “controlling someone’s frame”… man, we sure do have a language collision problem ripe for subtle misunderstandings.
Frames are a great frame for Scott’s flavor of charity. The basic difficulty in understanding strange points of view is adopting their natural frames, the rest is object level study performed from within those frames. The basic danger of being locked into a strange point of view is holding a frame strongly and not practicing other frames, which would otherwise reframe your usual object level study.
Fluency in reframing and in spinning up new frames seems to be a basic skill that can take on most of the heavy lifting from taking ideas seriously and changing your mind. Just practice an idea inside its natural frame, as seriously as it affords, but at the end of the day adopt other frames, and occasionally look into the same idea from those other frames. Let new frames grow with object level study, and keep an eye out for their adoption where previously in your experience different frames used to reign, or still do. And so you don’t get locked in, have a productive outlet for curiosity, and can change your worldview at the drop of a hat.
Can you explain what makes this a frame and not a belief? Is there a difference between beliefs and frames in your ontology?
So, to recap: I think the word “frame” is used metaphorically for three different things:
“what parts of reality to pay attention to” (window frame)
“what’s the purpose of the conversation? the context? the goal?” (picture frame)
“what sort of structure are we talking about and what sort of things plug into it” (framework)
For “everything is coordination + cryptography” guy, I’m thinking mostly in terms of “framework” (although frameworks tend to also imply which-parts-of-reality-to-pay-attention-to).
The way they model society routes through a structure wherein coordination problems are salient and where crypto-solutions are considered first (over alternatives like ‘use leadership and charisma’ or ‘share feelings’ or ‘build literally any other kind of tool other than crypto’).
This is different from object level beliefs on “how does crypto work” and is still subtly different from having the object level belief that “crypto is really important”. Like, you can think crypto is really important, without organizing all of your thoughts or your conversations around that fact.
Compare the two:
(1) The difference between “bad frame control” and “good frame control” is that, in the latter, the frame matches physical reality and social reality.
Here, I use “social reality” in the sense of “insights about what types of actions or norms help people flourish.”
(2) The difference between lying and telling the truth is that, when someone doesn’t lie, what they say matches physical reality and social reality.
I feel like there’s a sense in which (1) is true, but it’s missing the point if someone thinks that this is the only difference. If you lie a lot around some subject matter, or if you manipulate someone with what Aella calls frame control, there’s always an amount of friction around the subject matter or around the frame that you introduce. This friction wouldn’t be there if you were going with the truth. The original post points out how frame controllers try to hide that sort of friction or bring down your defenses against it. Those subtleties are what’s bad about the bad kind of frame control. Noticing these subtleties is what it’s all about.
Someone might object as follows:
“Friction” can mean many things. If you try to push people to accomplish extraordinary feats with your weird-seeming startup, you have to motivate them and push against various types of “friction” – motivate your co-workers, make them okay with being seen as weird as long as your idea hasn’t succeeded, etc.
I agree with all that. Good leaders have to craft motivating frames and inspire others with their vision. But I still feel like that’s not the same thing as what happens in (the bad kind of) frame control. The word “control” is a clue about where the difference lies. It’s hard to pin down the exact difference. Maybe it’s something like this:
Good leadership is about offering frames to your followers that create win-win situations (for them and for the world!) by appealing to virtues that they already endorse, deliberately drawing attention to all the places where there’s friction from social conventions or inertia/laziness, but presenting a convincing vision about why it’s worth it to push against that friction.
By contrast, frame control (the bad, sneaky/coercive kind) is about guilting people into thinking it’s their fault if they struggle because of the friction, or trying to not have them notice that there are alternative frames for them.
We should also consider the proportions. Your examples I, V, and VII make up the vast, vast majority of cases, and in a way, VII exists due to the joint effort of I and V.
I think this is non-trivial, and if we somehow managed to put numbers on it, it would turn out that the Is and the Vs collectively do much more harm to society and each other than all the Manipulative Cult Leader Masterminds combined. In fact, I doubt the Supervillain Frame Controllers would even exist if not for the fertile ground of Option Is as their lackeys and Option Vs as their prey.
(I apologize for being, or skirting too close to the edges of being, too political. I accept downvotes as the fair price and promise no begrudgement for it.)
I have an observation that I want more widely appreciated by low-contextualizers (who may be high or low in decoupling as well; they are independent axes): insisting that conversations happen purely in terms of the bet-resolvable portion of reality, without an omniscient being to help out as bet arbiter, can be frame control.
Status quos contain self-validating reductions, and people looking to score Pragmatic Paternalist status points can frame predictable bet outcomes as vindication of complacence with arbitrary, unreasonably and bullyishly exercised, often violent, vastly intrinsic-value-sacrificial power, on the basis of the weirdness and demonstrably inconvenient political ambitiousness of fixing the situation.
They seem to think, out of entitlement to epistemic propriety, that there must be some amount of non-[philosophical-arguments]-based evidence that should discourage a person from trying to resolve vastly objectively evil situations that neither the laws of physics, nor any other [human-will]-independent laws of nature, require or forbid. They are mistaken.
If that sounds too much like an argument for communism, get over it; I love free markets and making Warren Buffett the Chairman of America is no priority of mine.
If it sounds too much like an argument for denying biological realities, get over it; I’m not asking for total equality, I’m just asking for moral competence on behalf of institutions and individuals with respect to biological realities, and I detest censorship of all the typical victims, though I make exception for genuine infohazards.
If you think my standards are too high for humanity, were Benjamin Lay’s also too high? I think his efforts paid off even if our world is still not perfect; I would like to have a comparable effect, were I not occupied with learning statistics so that I can help align AI for this guilty species.
If you think factory farmed animals have things worse than children… Yes. But I am alienated by EA’s relative quietude; you may not see it this way, but so-called lip service is an invitation for privately conducted accountability negotiation, and I value that immensely as a foundation for change.