Yeah, I think the reason sexual abuse is wrong is that it carries an unacceptably high risk of traumatizing someone, not that it does so in every case. (Sort of like drunk driving.)
I think this is just one particular subcase of “strong urges are hard not to follow” (other examples: cravings for food one knows is long-term unhealthy; some instances of procrastination (choosing a short-term fun activity over a long-term beneficial one when you don’t endorse that); sexual arousal (separate from romantic feelings); being tired/sleepy when you endorse doing stuff that requires overriding that). It certainly is a notable subcase of that, though. I’ve sometimes described having crushes as having my utility function hijacked (though in a way I usually endorse—I tend to be pretty aligned across versions of myself on this axis).
I do think that if I did this my responses would be more biased than yours because I would not be willing to send the survey to all the people I have contact info for, in part due to concerns kind of like this. But even biased data would still be interesting and useful, probably.
I’m now tempted to run such a survey of my own...
Copying over some thoughts from a text conversation I had about this post, since that’s easier than writing them up properly. Adding section headers for readability; utterances not marked “[friend]: ” are mine.
-------------------
[friend]: I like this! In particular I like the concept that it’s reasonable to have beliefs that you can’t prove on request, because the internet often assumes it’s not
[friend]: (but also yeah, it’s very important to note that if you have those then you shouldn’t expect other people to take them on faith)
I sort of think the second thing is more important in discourses I’m in
1. arguments which don’t acknowledge they’re about private beliefs
…I think in some disagreements there’s a lack of acknowledgement that the kinds of arguments being made are fundamentally not a kind of thing that can convince people in the absence of direct personal experience replicating those arguments?
that these are private-belief kinds of justifications masquerading as public-belief ones
and that it doesn’t make sense to have an argument about it where you think people disagreeing with you are doing something wrong
[friend]: yeah, absolutely
[friend]: I think which issue you run into more depends on specific bubbles
and it makes more sense to mutually acknowledge this
2. terminology request
...this makes me want terms for “private-belief kinds of justifications” vs. “public-belief kinds of justifications”
3. how common are truly public beliefs
also it kind of makes me wonder how common truly public beliefs really are
I kind of think it’s very common for beliefs to rest in part on personal experience that’s not super replicable or transferable by argument
even if the “personal experience” is like, reading papers in which X kind of thing repeatedly turns out to be true, or like, a doctor or nurse seeing a lot of patients who present a certain way and learning to have doomy feelings about some combinations of symptoms
(thinking in part here about some things [nurse friend] has said about her experience, re: the second thing)
4. when to rely on others’ private beliefs; converting private beliefs to ~public ones by demonstrating calibration
which is also now making me think of emergency situations and when you should act on someone else’s private belief they can’t fully justify to you
I guess if you have reason to think they’re in general well calibrated then that’s justified
though it’s still much iffier than coming to agree with a legible argument
also possibly people can convert some kinds of private beliefs into ~public ones by demonstrating being well calibrated?
5. how common are truly public beliefs, part 2
[friend]: I think there’s a fair number of truly public beliefs, where I know something mostly because I looked it up
[friend]: or I guess even more cases where I kind-of-knew something because of illegible cultural osmosis but when I wanted to tell someone about it I looked it up and it turned out to be an easy wikipedia-findable fact
[friend]: also one can hope that if doctors/nurses see a lot of patients who present a certain way and this turns out to be a consistent sign of a specific problem, at some point someone will write this up and make it legible-ish
6. how useful are public vs. private beliefs
[friend]: … although there’s also the problem where you can make something a kind of public-belief by e.g. pointing to a paper about it, but then it turns out that people who know more about it than you have private-beliefs that actually most of the papers in that field are wrong, and this can get really complicated
[friend]: because we have a vague feeling that public-beliefs are more correct, but they aren’t always
This reminds me strongly of the concept of Radical Acceptance, which comes from Dialectical Behavior Therapy, and which I agree is often a necessary part of seeing and engaging with reality as it is. (Perhaps, more specifically, grieving as described here is an example of a way to achieve radical acceptance?)
This reads as a rewrite of (some parts of?) the punch bug post (which I didn’t like at the time) with several years’ more wisdom. I really appreciate the careful precise delineation of the exact things you do and don’t mean; I think this works very well here.
Sometimes kind of! Though I wouldn’t say it’s “bogus” for me exactly, just that there tends to be a tradeoff between time spent planning/reflecting vs. time spent taking concrete actions, and I’m somewhat prone to a bias in favor of the former. But I do think that most of the time when I do this kind of thinking, I find it useful; it just isn’t always the most useful thing I could be doing.
Also sometimes the stuff on my mind that I feel I Must think about is not actually related to the stuff I’m trying to concretely make progress on, but is separately useful to think about. Here too I don’t always think this type of reflection is the most useful thing for me to do right then, but it’s sometimes hard not to.
Hmm, I agree that the thing you describe is a problem, and I agree with some of your diagnosis, but I think your diagnosis focuses too much on a divide between different Kinds Of People. It never names the Kinds Of People explicitly, but it kind of sounds (especially in the comments) like a lot of what you’re talking about is a difference in how much Rationality Skill people have, which I think is not the right distinction? Like, I think I am neither a hyper-analytic programmer (certainly not a programmer) nor any kind of particularly Advanced rationalist, and I think I am not particularly susceptible to this particular problem (I’m certainly susceptible to other problems, just not this one, I think). I think it’s more that people doing the salvage epistemology thing can kind of provide cover for people doing a different thing, where they actually respect and believe the woo traditions they’re investigating, and a lack of clear signposting of beliefs makes this especially hard to navigate.
Sometimes I’ll be distracted by a thought or feeling or event and feel like I can’t move forward with whatever I was doing until I sit down and process it (usually in writing, often in a small Discord channel). Sometimes I will procrastinate on work and the way I will do that will be talking about whatever’s on my mind. In general I tend to have a strong urge to talk about/write down my thoughts and poke at them until they make more sense to me.
(It also happens that I instead avoid doing this with some kinds of things, which isn’t good for me, and that especially happens if I’m extra busy with other stuff, I guess.)
(which isn’t to say that I wouldn’t benefit from doing more of it, or that I don’t do more of it when I have more slack. but I don’t think it disproportionately suffers relative to my other priorities.)
Huh, interesting! I think to some extent the way my mind works forces me to fairly often spend time on #3 even when low on slack, even sometimes at the expense of the other things. So for me your initial reasoning feels more applicable.
Copying over some free-form thoughts from Discord about ways this post feels relevant to my life:
one thing this makes me think of is how my action menu shrunk to like five things in fall 2020
because there was covid and that severely limited the available activities, and then I moved and then, before I had unpacked, found out I’d have to move again soon, so I couldn’t access most of my stuff OR most of my space since it was taken up by boxes, and then also some of the time the air was poison so going outside also wasn’t much of an option
and then I think… the fact that a bunch of options got greyed out by actual circumstances and I got used to living a very limited life… also somehow ~broke my ability to think of new options that could potentially work even with the constraints that existed?
I guess this post is mostly about that kind of psychologically greyed-out options that aren’t driven by real constraints
but it was interesting how the real constraints had this additional effect
and then I moved again and the situation was somewhat better but I was so used to this very limited life that it was hard to actually expand my action menu much even though I theoretically could
I did eventually kind of
but much more after vaccines and reopening
though I’m still catching up, also
my stuff is STILL in boxes to an embarrassingly large extent
- hmm, articulation of thought: one benefit of cleaning my room is expanding my action menu / un-graying-out options
because many actions require or at least benefit from various objects and if the objects are actually accessible that helps
another maybe interesting thing is I absolutely knew at the time [i.e. in fall 2020] this was the case [i.e. that I was having a greyed-out-options problem]
that there were probably things I could do that would be better
but it was so so hard to think of them let alone do them
I read this a few months ago and thought about it out loud in a Discord channel with the intent to turn my thoughts into a nicely structured comment here eventually, and then I never ended up doing that. So instead I’m going to do a lower-effort version of that, where I more or less copy my thoughts from Discord with only light editing, because that seems better than nothing. I’ll put in section headers for readability, also.
this is a really good post imo
and one that’s relevant to me [because I often have low/fluctuating levels of energy/spoons, and am not great at reliability]
I really appreciate the part at the beginning where Duncan patiently explains the value of having social norms at all
it feels very “nerd who has gone through the valley of bad rationality and come out the other side with a better understanding of why that fence was there”
some of the post’s object-level recommendations are things I have learned to do, but I sometimes have to keep reminding myself to do them, and I sometimes go through periods of failing at them harder than usual when my capabilities change
I guess actually maybe nearly all of the things are things I already try to do? just without quite this much theory about it
I like having the theory about it though
I could treat it as a checklist
I think a thing that would be useful for me to do is to come up with a bunch of examples for each point
1. reliability equilibria & why lower-reliability ones are good sometimes
one thing that I think is missing from the post is discussion/acknowledgment of different… reliability equilibria
like, the first example that came to mind is that I just today cancelled a planned coworking session because I am too tired to do it without being miserable
but, it’s already understood between me and the other person that such things will be cancelled sometimes? I guess this probably hasn’t been explicitly established in words before but both of us have cancelled on each other before and yet continued to be willing to set up new plans in the future
and this is good, I think?
if such plans required 99% reliability I would be way less willing to make them
which for some things would be the correct tradeoff! there are plans I should be extremely hesitant to make because they legitimately require really high reliability
but coworking isn’t inherently such a thing; it could be a thing like that for some people but I think that, given two people who find low-certainty plans acceptable, making such plans creates more value than not making them
this makes me think of the recent discussion [in this server] about [sometimes] enjoying not being the most late person to a thing
I think one reason I sometimes enjoy this is that it’s an update on the level of reliability expected here?
in some contexts, lower-expected-reliability is a better fit for me personally. so evidence that I’m not wildly out of norm is really good for me
(e.g. today a coworker missed a meeting with me because she forgot about it and went to dinner. I can imagine contexts in which this would annoy me greatly but in this case it actually made me feel kind of relieved because a workplace in which this is an allowable level of [occasional, infrequent] fuckup is more hospitable to me than one where it isn’t. [since I’m posting this publicly I feel the need to clarify that this is not a type of fuckup I am personally prone to. but I have sometimes slept through an early morning meeting due to not hearing my alarm, when particularly sleep-deprived. not a Very Important one though.])
though there are certainly also contexts where updates in the “this is a lower-reliability-expectations equilibrium than I thought” direction are unpleasant!
like if I have plans with someone and I care about those plans a lot and they seem to be prioritizing the plans less than I am
2. proactive communication
also sometimes it annoys me if someone flakes on something at the last minute when they could have told me earlier
even if I’m fine with the thing being cancelled
though this kind of varies
and this is a thing I sometimes fail at myself
(one of Duncan’s prescriptions is in fact “loop the renegee in as soon as possible”)
this is a thing I’ve tried to get better at—e.g. I’ve mostly ingrained the habit of giving people I’m meeting up-to-date ETAs if I’m running notably late
in the past I would much more often delay telling them this as much as possible because I dreaded the moment when they find out how late I will be
also this part:
“I’m happy to schedule a 50% chance of a lunch on Saturday, if that works for you, but if you need a firm ‘yes’ or ‘no’ then I have to say ‘no.’”
I’ve started doing this^ much more often, though I’m still less systematic about it than this
I mostly do that during time periods when I have unusually low energy / bad mental health, and it sometimes takes a while to notice that this is the case
3. making fewer plans
Perhaps I’m agreeing to things too quickly in general, or not giving myself enough time to rest, or failing to acknowledge that no, it’s not just a bad week, this is just the new normal, at least for this month or this year.
so yeah, this:
“If I can’t be confident that I’ve nailed the problem down, the next step is simply to increase my error bars. Make fewer commitments in general, through a top-down conscious effort, or make each individual commitment looser, giving people more notice that I might bail or flake.”
though there’s a problem where it’s not always feasible for me to make only as many commitments as I can reliably handle
4. more proactive communication, reputation effects, self-prediction & calibration
in which case yeah one of my strategies is to try to inform people of this
it helps in a sense that I am already known to flake on things sometimes?
anyway it might be useful for me to make a more systematic habit of actually estimating how likely I am to succeed at various plans, and telling people those estimates, and scoring and calibrating my predictions over time
I’ve had this thought before but not followed up on it in a systematic way
though I do give people such estimates sometimes when it feels salient and appropriate
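(For concreteness, here’s a rough sketch of what that scoring could look like, in Python. The log format, the example numbers, and the bucketing are just assumptions I’m making for illustration, not anything from the post.)

```python
from collections import defaultdict

# Hypothetical log: (probability I gave for following through, whether I actually did).
predictions = [
    (0.9, True),   # "90% I'll make it to coworking" -> showed up
    (0.5, False),  # "50% chance of lunch Saturday" -> cancelled
    (0.8, True),
    (0.6, True),
    (0.5, True),
]

# Brier score: mean squared error between stated probabilities and outcomes.
# 0 is perfect; always saying 50% scores 0.25.
brier = sum((p - actual) ** 2 for p, actual in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")

# Calibration check: within each stated-probability bucket, how often did the plan happen?
buckets = defaultdict(list)
for p, actual in predictions:
    buckets[round(p, 1)].append(actual)
for p in sorted(buckets):
    outcomes = buckets[p]
    print(f"said {p:.0%}: followed through {sum(outcomes)}/{len(outcomes)} times")
```

(If the “said 60%” bucket keeps coming out at 90%, that would mean I’m lowballing my estimates, and I could adjust what I tell people accordingly.)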
I read about half this post before realizing that this concept is intuitively familiar to me from the process of translating poems/songs. Very often a poem or song will have certain specific bits that are going to be extra important to get exactly right (e.g. the title, or something conceptually load-bearing, or a particularly clever or emotionally impactful line), or unusually hard for some reason (e.g. it’s trying to get across a very specific or finicky or culturally specific concept, or using clever wordplay, or it’s self-referential like “a fourth, a fifth, a minor fall, a major lift”). When considering a new translation, it usually makes sense to start brainstorming from those bits, because every line you write will affect what can go near it: I start with the lines where I’ll be lucky if I can think of one good thing that scans, and then from there try to fill in the other lines (which hopefully have more degrees of freedom) with things that not only scan but also rhyme with the hard parts. (Also because sometimes you won’t think of anything for the hard parts, and then it might not make sense to invest a bunch of time in working on the rest of the thing.)
Strong +1 to Focus Mode; I’m not doing the rest of this but I do find it extremely valuable to be able to temporarily turn off things that are going to pull my attention in twenty different directions unexpectedly. I use it when I’m working and also sometimes when I go on walks where I want to be able to use my phone to take pictures and check maps but not to be distracted by the Internet.
Thanks for sharing this! I’d be interested to see the qualitative data sorted by whether the person was vaxed at the time.
I want to express some strong appreciation for the post including not just some indicators that frame control is occurring but also some indicators that frame control is NOT occurring, and also for trying to reduce the likelihood that this concept will be misused in the future. I also appreciate that the comment section is full of people absorbing the concept and also working to set bounds on it and make it safer. I appreciate the epistemic environment that gives rise to this kind of caution.
One fairly central reaction I had to this post is not so much about the specific phenomenon of frame control but rather about the general observation that it’s quite common for the aspects of an abusive situation that are worst to experience to NOT be the same as the aspects that are most clear-cut bad and easiest to convey objectively to another person.
This seems true; I have heard multiple people with objectively horrifying stories of abuse report that actually they don’t really care about the objectively awful parts that their friends are horrified about, but instead they are really fucked up by some stuff that’s much harder to convey. (Probably in some cases that’s the same general phenomenon described in this post and in other cases it’s some other interpersonal fuckery.)
I have also heard people report that they experienced a situation as abusive while NOT having any clear-cut objectively awful behavior to point to. It makes perfect sense that this would happen in some cases: because the abuser is savvy enough to avoid the specific things people would object to, or because the abuser is actually trying to be good by following the ethical rules they know but is not managing to also be good in less legible matters, or for some other reason.
...It is also my experience that when humans make not-fully-objective reports about the beliefs/behaviors/words of other humans they disagree with and/or have some kind of adversarial relationship with, it is extremely common for such subjective accounts to be distorted in some way. For this reason, when I hear about an accusation of wrongdoing, I usually try to zero in on the objective claims being made, because (assuming I basically trust that the reporter is intending to be truthful) those are much less likely to be distorted or interpreted through a lens I think is unreasonable.
But this means that it’s very hard for me to tell, as an outsider, when illegible wrongdoing has occurred. (I was going to say “illegible harm” but actually accusations of interpersonal wrongdoing are much stronger evidence of harm than of wrongdoing per se; I only need a very basic level of trust in someone’s honesty to conclude they were harmed by a situation they’re describing as abusive.) Indeed this feels kind of epistemically hopeless to ever evaluate from the outside?
I don’t really know what to do with this thought but it felt important to note.
I think the examples are good but I wish there were more examples that aren’t highly controversial in some way, either politically or interpersonally. (The “parental control” example is the one that least pinged my “eek, drama here” sense, though certainly there are many who would disagree with your point there (but it doesn’t feel like a locally live issue).)