LessWrong team member / moderator. I’ve been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I’ve been interested in improving my own epistemic standards and helping others to do so as well.
Raemon (Raymond Arnold)
The actual answer is that Eliezer has tried a bunch of different things to lose weight and it’s just pretty hard. (He also did a quite high-effort thing in 2019 which did work; I don’t know how well he’s kept the pounds off since then.)
You can watch a fun video where he discusses it after the 2019 Solstice here.
(I’m not really sure how I feel about this post. It seems like it’s coming from an earnest place, and I kinda expect other people to have this question, but it’s in a genre where picking on individual people feels pretty off, and I definitely don’t want a bunch more questions like this on the site. I settled for downvoting but answering the question.)
and/or exert pressure to fall in line with that frame
This line makes me realize I was missing one subcomponent of frame control. We have
Strong frames
Persistent, Insistent Frames
Manipulating frames (i.e. tricking people into adopting a new frame)
But then there’s “pressure/threaten someone into adopting a frame”. The line between pressure and “merely expressing confidence” might feel blurry in some cases, but the difference is intended to be “there’s an implication that if you don’t adopt the frame, you will be socially punished”.
Yeah, basically agreed that this is what’s going on.
I agree that listening in a collaborative way is a good thing to do when you have a friend/colleague in this situation.
I’m not sure what to do in the context of this post, if the problem comes up organically. The collaborative listening thing seems to work best in a two-person pair, not an internet forum. I guess “wait for it to come up” is fine.
I had a discussion on Facebook about this post, where someone felt my examples pointed at a different definition of frame control than theirs. After some back-and-forth and some confusion on my part, it seemed like their conception of frame control was something more like ‘someone is trying to control you, and they happen to be using frames to do it’, whereas my conception here was more like ‘someone is trying to control your frame.’
I’m not actually sure how different these turn out to be in practice. If someone is controlling your frame, they’re also controlling what thoughts you can most easily think, which is also controlling your actions. But I think there’s something of a difference between “someone’s goal is to change you” vs “someone’s goal is to have a comfortable frame for them”. It’s plausible to me that people can viscerally feel the difference, and the variants of frame control that feel particularly unsettling are the ones where it’s palpable that they’re optimizing to control you.
If it turns out we need to talk separately about “controlling someone (with frames)” and “controlling someone’s frame”… man, we sure do have a language collision problem ripe for subtle misunderstandings.
Note: this was tagged ‘effective altruism’, but on LessWrong the ‘effective altruism’ tag is used to talk about the movement at a meta level, and this post should be classified as ‘world optimization’.
A thing that occurs to me, as I started engaging with some comments here as well as on a FB thread about this:
Coercion/Abuse/Manipulation/Gaslighting* often feel traumatic and triggering, which makes talking about them hard.
One of the particular problems with manipulation is that it’s deliberately hard to talk about, or to find words to explain what’s wrong about it. (If you could easily point to the manipulation, it wouldn’t be very successful manipulation.) Successful manipulators tailor their manipulations towards precisely the areas where their marks don’t have the ability to see clearly, or explain clearly, what happened.
A particularly bad-feeling thing, which I’ve experienced when I’ve felt gaslit, and which other people have experienced from me when they felt gaslit, is: you try to explain what happened and why you’re upset, and people respond by questioning everything you say and nitpicking your phrasing, in a way that sort of demands you be fair while you’re still confused about what exactly went wrong. And it feels really invalidating and alienating at a time when you’re maybe doubting your own sanity, because you were literally manipulated into doubting your own sanity.
(And sometimes when this happens you are the crazy one, but the point here is that it’s still a pretty awful-feeling experience, and from inside it’s not clear whether you’re the crazy one.)
I’ve had a really hard time figuring out how to respond to this when I’m the one asking the questions – often the person-who-feels-gaslit is making some kinda overreactive, unfair claims. I struggle sometimes with how to validate their general sense of self and respect that there is something real they are trying to work through, without necessarily agreeing with all of their frame in the process.
...
So… recapping, this is relevant here because a) this post is about frame control, and trying to draw better distinctions around it, b) the reason frame control is an important concept is largely because of how it relates to coercion, manipulation, abuse, etc., and c) people discussing object-level versions of that are likely to be triggered...
...and one of the things I’m doing with this post is trying to taboo some words, and make some distinctions, and potentially say “okay, this thing that happened maybe doesn’t make sense to call frame control, maybe it makes sense to call it X, maybe it makes sense to call it Y”.
And to a person who is in the middle of discussing something maybe-traumatic that they haven’t quite worked through, having someone argue about what-exactly-to-call-the-experiences-they-had may end up feeling like exactly the sort of pedantic invalidation that feels extra bad.
(I don’t know that this has happened yet, but it seemed like it might happen suddenly)
So, uh, for now, just warning people to keep an eye out for this dynamic.
Meanwhile, I do want to say: even if I’m trying to do some original seeing on ‘what even is frame control’ and trying to figure out precise language for it, I still want to reaffirm that if something happened to you that felt really bad, then I agree that something bad happened, whatever words turn out to be right for describing it and whatever the exact causation turns out to be.
What I think is problematic is that some people are able to make genuine threats to get their way, enforcing compliance with their values and language and preferences and norms.
One of my main points here is that I think we probably should call threatening behavior “threatening” and maybe “coercive” or “abusive” or whatever seems appropriate for the situation, and only use the phrase ‘frame control’ when the most relevant thing is that someone is controlling a frame. (And, maybe, even then try to say a more specific thing about what they’re doing, if you can)
And meanwhile, talk about “frame-whatever” when you’re talking about frame-whatever, whether that be frame-manipulation, or frame-curiosity.
without the other person feeling as though they’ve consented
I actually think the thing that makes frame control a particularly important component of abuse or coercion is that it’s one of the tools that lets an abuser make it ambiguous whether someone consented. Where Alice ends up feeling like they sort of consented, but something feels wrong about it, and they don’t have good words for it, and there’s a bunch of social pressure, and they end up thinking ‘well… I guess I consented so I have to do this now...?’ but feel sick in their stomach about it.
(I think there are also ways to invoke this effect without frame manipulation. I’m not sure if there’s a good name for the generalized effect. ‘Manipulated consent’? ‘Ambiguous consent’?)
The adjective “manipulatively” here seems like it is not justified by the preceding description.
The intended justification is the previous sentence:
Years later looking back, you might notice that they always changed the topic, or used various logical fallacies/equivocations, or took some assumptions for granted without ever explaining them.
I’m surprised you don’t consider that sort of thing manipulative. Do you not?
Yeah, this variant does feel more like explicit frame control. (I think “frame manipulation”, although it feels like it strains a bit against the cluster I’d originally been thinking of when I described it.)
Next I asked it:
It responded with this image:
code:
<svg width="300" height="300" viewBox="0 0 300 300" xmlns="http://www.w3.org/2000/svg“> <!-- Background circle --> <circle cx=”150“ cy=”150” r=”140“ fill=”none” stroke=”black” stroke-width=”2”/> <!-- Body --> <ellipse cx=”150“ cy=”100” rx=”30“ ry=”40” fill=”none” stroke=”black” stroke-width=”2″/> <rect x=”140“ y=”140” width=”20“ height=”60” fill=”none” stroke=”black” stroke-width=”2″/> <line x1=”100“ y1=”140” x2=”200“ y2=”140” stroke=”black” stroke-width=”2″/> <!-- Legs --> <line x1=”150“ y1=”200” x2=”110“ y2=”270” stroke=”black” stroke-width=”2″/> <line x1=”150“ y1=”200” x2=”190“ y2=”270” stroke=”black” stroke-width=”2″/> <line x1=”150“ y1=”200” x2=”90“ y2=”270” stroke=”black” stroke-width=”2″/> <line x1=”150“ y1=”200” x2=”210“ y2=”270” stroke=”black” stroke-width=”2″/> <!-- Arms --> <line x1=”140“ y1=”140” x2=”70“ y2=”140” stroke=”black” stroke-width=”2″/> <line x1=”160“ y1=”140” x2=”230“ y2=”140” stroke=”black” stroke-width=”2″/> <line x1=”140“ y1=”140” x2=”60“ y2=”80” stroke=”black” stroke-width=”2″/> <line x1=”160“ y1=”140” x2=”240“ y2=”80” stroke=”black” stroke-width=”2″/> <!-- Head --> <circle cx=”150“ cy=”100” r=”20“ fill=”none” stroke=”black” stroke-width=”2”/> </svg>
Then I asked if it could do the Vitruvian Man as ASCII art, and it said:
  O
-|-|-
  |
 /|\
 / \
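(Tangent, for anyone who wants to eyeball output like this themselves: here’s a minimal sketch for previewing model-generated SVG locally. The svg_source string is a stand-in; paste in whatever markup the model actually returned.)

import webbrowser
from pathlib import Path

# Stand-in: replace with the SVG markup the model returned.
svg_source = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="50" r="40" stroke="black" fill="none"/>
</svg>"""

# Write the markup to a file and open it in the default browser.
out = Path("model_drawing.svg")
out.write_text(svg_source)
webbrowser.open(out.resolve().as_uri())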
The lecturer talks about how objects move, without reference to the emotions of people around them or what spirits think.
Something I like about this is that “without reference to the emotions of people around them” is actually legitimately a contender for “meaningful frame.” Like, cars move because people decide to drive them, soil gets moved around because humans wanted a nicer landscaping, dams get built because beavers decided to do it.
Eventually Jupiter might get disassembled because a powerful AI decided to. This will not necessarily route through emotions, but “the will and agency of goal-directed beings” is more like “emotions of people around them” than “because simple laws of physics said so”, and it’s interesting how either frame might be more relevant depending on what conversation you’re trying to have or thing you’re trying to figure out.
Here was the final one:
<svg viewBox="0 0 800 600" xmlns="http://www.w3.org/2000/svg“> <!-- Field --> <rect x=”0“ y=”0” width=”100%” height=”100%” fill=”#8BC34A”/> <!-- Sky and sun --> <rect x=”0“ y=”0” width=”100%” height=”40%” fill=”#90CAF9″/> <circle cx=”700“ cy=”100” r=”50″ fill=”#FFEB3B”/> <!-- Mountains --> <polygon points=”100,300 300,100 500,300″ fill=”#BDBDBD”/> <polygon points=”350,400 550,200 750,400″ fill=”#9E9E9E”/> <!-- Castle --> <rect x=”200“ y=”150” width=”200“ height=”200” fill=”#F5F5F5″/> <rect x=”250“ y=”200” width=”100“ height=”100” fill=”#BDBDBD”/> <rect x=”220“ y=”190” width=”60“ height=”60” fill=”#8BC34A”/> <rect x=”320“ y=”190” width=”60“ height=”60” fill=”#8BC34A”/> <rect x=”290“ y=”230” width=”20“ height=”60” fill=”#BDBDBD”/> <polygon points=”200,150 400,150 300,100“ fill=”#F5F5F5”/> <!-- Dragon --> <path d=”M 550 400 Q 600 300 700 300 Q 800 300 750 400 Z” fill=”#F44336″/> <ellipse cx=”600“ cy=”350” rx=”100“ ry=”50” fill=”#E53935″/> <ellipse cx=”660“ cy=”330” rx=”30“ ry=”20” fill=”#F5F5F5″/> <ellipse cx=”540“ cy=”330” rx=”30“ ry=”20” fill=”#F5F5F5″/> <circle cx=”620“ cy=”340” r=”5“ fill=”#000”/> <circle cx=”580“ cy=”340” r=”5“ fill=”#000”/> <path d=”M 600 380 Q 640 400 660 350“ stroke=”#000” stroke-width=”2″ fill=”none”/> <path d=”M 600 380 Q 560 400 540 350“ stroke=”#000” stroke-width=”2″ fill=”none”/> <path d=”M 520 330 Q 580 330 600 300“ stroke=”#000” stroke-width=”2″ fill=”none”/> <path d=”M 700 350 Q 680 320 680 340“ stroke=”#000” stroke-width=”2″ fill=”none”/> <path d=”M 700 350 Q 720 320 720 340“ stroke=”#000” stroke-width=”2″ fill=”none”/> <!-- Knight --> <path d=”M 250 450 L 300 350 L 350 450 L 325 500 L 275 500 Z” fill=”#BDBDBD”/> <path d=”M 325 500 L 325 550“ stroke=”#000” stroke-width=”10″ fill=”none”/> <path d=”M 275 500 L 275 550“ stroke=”#000” stroke-width=”10″ fill=”none”/> <circle cx=”312.5“ cy=”362.5” r=”37.5″ fill=”#8BC34A”/> <rect x=”290“ y=”375” width=”45“ height=”25” fill=”#BDBDBD”/> <line x1=”312.5“ y1=”400” x2=”312.5“ y2=”480” stroke=”#000“ stroke-width=”10”/> <circle cx=”312.5“ cy=”300” r=”50″ fill=”#BDBDBD”/> <line x1=”312.5“ y1=”300” x2=”312.5“ y2=”230” stroke=”#000“ stroke-width=”10”/> <line x1=”312.5“ y1=”250” x2=”350“ y2=”200” stroke=”#000“ stroke-width=”10”/> <line x1=”312.5“ y1=”250” x2=”275“ y2=”200” stroke=”#000“ stroke-width=”10”/> <line x1=”350“ y1=”200” x2=”375“ y2=”175” stroke=”#000“ stroke-width=”10”/> <line x1=”375“ y1=”175” x2=”425“ y2=”175” stroke=”#000“ stroke-width=”10”/> <line x1=”425“ y1=”175” x2=”450“ y2=”200” stroke=”#000“ stroke-width=”10”/> <line x1=”275“ y1=”200” x2=”225“ y2=”175” stroke=”#000“ stroke-width=”10”/> <line x1=”225“ y1=”175” x2=”175“ y2=”175” stroke=”#000“ stroke-width=”10”/> <line x1=”175“ y1=”175” x2=”150“ y2=”200” stroke=”#000“ stroke-width=”10”/> <line x1=”150“ y1=”200” x2=”175“ y2=”225” stroke=”#000“ stroke-width=”10”/> <line x1=”175“ y1=”225” x2=”225“ y2=”225” stroke=”#000“ stroke-width=”10”/> <line x1=”225“ y1=”225” x2=”275“ y2=”250” stroke=”#000“ stroke-width=”10”/> <line x1=”275“ y1=”250” x2=”350“ y2=”250” stroke=”#000“ stroke-width=”10”/> </svg>
I tried again, accidentally using GPT-3.5 this time, which initially gave something really lame, but then I said “more realistic please”, and it gave me:
Note that ASCII art isn’t the only kind of art. I just asked GPT4 and Claude to both make SVGs of a knight fighting a dragon.
Here’s Claude’s attempt:
And GPT4’s:
I asked them both to make it more realistic. Claude responded with the exact same thing with some extra text, GPT4 returned:
I followed up asking it for more muted colors and a simple background, and it returned:
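(If anyone wants to rerun this comparison programmatically rather than through the chat UIs, here’s a rough sketch using the openai and anthropic Python SDKs. I used the chat interfaces, so the model names below are assumptions on my part; substitute whichever models you have access to.)

import anthropic
from openai import OpenAI

PROMPT = "Draw an SVG image of a knight fighting a dragon. Return only the SVG markup."

# GPT-4 via the OpenAI SDK (reads OPENAI_API_KEY from the environment).
openai_client = OpenAI()
gpt_reply = openai_client.chat.completions.create(
    model="gpt-4",  # assumption: any chat-capable model name works here
    messages=[{"role": "user", "content": PROMPT}],
)
print(gpt_reply.choices[0].message.content)

# Claude via the Anthropic SDK (reads ANTHROPIC_API_KEY from the environment).
anthropic_client = anthropic.Anthropic()
claude_reply = anthropic_client.messages.create(
    model="claude-3-opus-20240229",  # assumption: pick whichever Claude model you have access to
    max_tokens=2000,
    messages=[{"role": "user", "content": PROMPT}],
)
print(claude_reply.content[0].text)

(Asking for “more realistic” is then just another user turn appended to the messages list before re-sending.)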
Do you have particular examples of non-profound ideas you think are being underexplored?
I wanna flag the distinction between “deep” and “profound”. They might both be subject to the same bias you articulate here, but I think they have different connotations, and I think important ideas are systematically more likely to be “deep” than they are likely to be “profound.” (i.e. deep ideas have a lot of implications and are entangled with more things than ‘shallow’ ideas. I think profound tends to imply something like ‘changing your conception of something that was fairly important in your worldview.’)
i.e. profound is maybe “deep + contrarian”
This post was oriented around the goal of “be ready to safely train and deploy a powerful AI”. I felt like I could make the case for that fairly straightforwardly, mostly within the paradigm that I expect many AI labs are operating under.
But one of the reasons I think it’s important to have a strong culture of safety/carefulness is the lead-up to strong AI. I think the world is going to be changing rapidly, which means your organization may need to change strategies quickly, and to track the various effects it’s having on society.
Some examples of problems you might need to address:
Accidentally accelerating race dynamics (even if you’re really careful to avoid hyping and demonstrating new capabilities publicly, if it’s clear that you’re working on fairly advanced stuff that’ll likely get released someday it can still contribute to FOMO)
Failing to capitalize on opportunities to reduce race dynamics, which you might not have noticed due to distraction or pressures from within your company
Publishing some research that turned out to be more useful for capabilities than you thought.
Employees leaving and taking insights with them to other, less careful orgs (in theory you can have noncompete clauses that mitigate this, but I’m skeptical about how that works out in practice)
AIs interacting with society, or each other, in ways that destabilize humanity.
I’m working on a longer comment (which will maybe turn out to just be a whole post) about this, but wanted to flag it here for now.
I’ve now tried this out, in a not-too-structured way. (I’ve tried two assistants; not 100% of the time for weeks on end, but for a couple of individual days of work.) It basically seems to be working so far and I’m hoping to try it for longer periods soon. (It’s a bit tricky because I work on the LW team, which has existing structure to work around.)
I’m trying this out with an aim for “thinking-assistant” more than “personal assistant / productivity hack”. I’m hiring people who take notes about what I’m doing, see where I seem to get stuck, talk through problems, and hopefully help improve the quality of my cognition as well as just make me work harder.
Mostly so far it’s having the effect of keeping me in “strategic” mode all the time. I’m trying to have part of their job be to track my big picture and notice when I’m getting tunnel vision.
I don’t know how this’ll hold up over weeks. It also sure is fairly expensive.
Yeah, I didn’t mean to be responding to that point one way or another. It just seemed bad to link to a post that (seems to still?) communicate false things without flagging those false things. (The post still says “it can be as easy as creating a pdf of your post”, which is, by my impression, maybe technically true on rare occasions but basically false in practice?)
Yeah. I had a goal with the “Keep your beliefs cruxy and your frames explicit” sequence to eventually suggest people do this for this reason (among others), but hadn’t gotten around to that yet. I guess this new post is maybe building towards a post on that.