Definitely makes sense. A commonly cited example is women in an office workplace; what would be average assertiveness for a male is considered “bitchy”, but they still suffer roughly the same “weak” penalties for non-assertiveness.
With the advice-giving aspect, some situations are likely coming from people not knowing what levers they’re actually pulling. Adam tells David to move his “assertiveness” lever, but there’s no affordance available to David by moving that lever—he would actually have to move an “assertiveness + social skill W” lever which he doesn’t have, but which feels like a single lever called “assertiveness” to Adam. Not all situations are like this; there’s no “don’t be a woman” or “don’t be autistic” lever. Sometimes there’s a solution by moving along a different dimension and sometimes there’s not.
Feel similarly; since Facebook comments are a matter of public record, disputes and complaints on them are fully public and can have high social costs if unaddressed. I would not be worried about it in a small group chat among close friends.
I perceive several different ways something like this happens to me:
1. If I do something that strains my working memory, I’ll have an experience of having a “cache miss”. I’ll reach for something, and it won’t be there; I’ll then attempt to pull it into memory again, but usually this is while trying to “juggle too many balls”, and something else will often slip out. This feels like it requires effort/energy to keep going, and I have a desire to stop and relax and let my brain “fuzz over”. Eventually I’ll get a handle on an abstraction, verbal loop, or image that lets me hold it all at once.
2. If I am attempting to force something creative, I might feel like I’m paying close attention to “where the creative thing should pop up”. This is often accompanied with frustration and anxiety, and I’ll feel like my mind is otherwise more blank than normal as I keep an anxious eye out for the creative idea that should pop up. This is a “nothing is getting past the filter” problem; too much prune and not enough babble for the assigned task. (Not to say that always means you should babble more; maybe you shouldn’t try this task, or shouldn’t do it in the social context that’s causing you to rightfully prune.)
3. Things can just feel generally aversive or boring. I can push through this by rehearsing convincing evidence that I should do the thing—this can temporarily lighten the aversion.
All 3 of these can eventually lead to head pain/fogginess for me.
I think this “difficulty thinking” feeling is a mix of cognitive ability, subject domain, emotional orientation, and probably other stuff. Mechanically having less short-term memory makes #1 more salient in whatever you’re doing. Some people probably have more mechanical “spark” or creative intelligence in certain ways, affecting #2. Having less domain expertise makes #1 and maybe #2 more salient, since you have fewer abstractions and less raw material to work with. Lottery of interests, social climate, and any pre-made aversions like having a bad teacher for a subject will factor into #3. Sleep deprivation worsens #1 and #2, but improves #3 for me (since other distractions are less salient).
I think this phenomenon is INSANELY IMPORTANT; when you see people who are 10x or 100x more productive in an area, I think it’s almost certainly because they’ve gotten past all of the necessary thresholds to not have any fog or mechanical impediment to thinking in that area. There is a large genetic component here, but to focus on things that might be changeable to improve these areas:
Using paper or software when it can be helpful.
Working in areas you find natively interesting or fun. Alternatively, framing an area so that it feels more natively interesting or fun (although that seems really really hard). Finding subareas that are more natively fun to start with and expand from; for instance, when learning about programming, trying out some different languages to see which you enjoy most. It’ll be easier to learn necessary things about one you dislike after you’ve learned a lot in the framework of one you like.
Getting into a social context that gives you consistent recognition for doing your work. This can be a chicken and egg problem in competitive areas.
Eliminating unrelated stressors, setting up a life that makes you happier and more fulfilled; I had worse brain fog about math when in bad relationships.
Eating different food. There are too many potential dietary interventions to list (many of them contradictory); I had a huge improvement from avoiding anything remotely in the “junk food” category and trying to eat things that are in the “whole food” category.
Stimulant drugs for some people; if you have undiagnosed ADHD, try to get diagnosed and medicated.
I really wish I had spent more time in the past working on these meta problems, instead of beating my head against a wall of brain fog.
A note on this, which I definitely don’t mean to apply to the specific situations you discuss (since I don’t know enough about them):
If you give people stronger incentives to lie to you, more people will lie to you. If you give people strong enough incentives, even people who value truth highly will start lying to you. Sometimes they will do this by lying to themselves first, because that’s what is necessary for them to successfully navigate the incentive gradient. This can be changed by their self-awareness and force of will, but some who do that change will find themselves in the unfortunate position of being worse-off for it. I think a lot of people view the necessity of giving such lies as the fault of the person giving the bad incentive gradient; even if they value truth internally, they might lie externally and feel justified in doing so, because they view it as being forced upon them.
An example is a married couple, living together and nominally dedicated to each other for life, when one partner asks the other “Do I look fat in this?”. If there is significant punishment for saying Yes, and not much ability to escape such punishment by breaking up or spending time apart, then it takes an exceedingly strong will to still say “Yes”. And a person with a strong will who does so then suffers for it, perhaps continually for many years.
If you value truth in your relationships, you should not only focus on giving and receiving the truth in one-off situations; you should set up the incentive structures in your life, with the relationships you pick and how you respond to people, to optimally give and receive the truth. If you are constantly punishing people for telling you the truth (even if you don’t feel like you’re punishing them, even if your reactions feel like the only possible ones in the moment), then you should not be surprised when most people are not willing to tell you the truth. You should recognize that, if you’re punishing people for telling you the truth (for instance, by giving lots of very uncomfortable outward displays of high stress), then there is an incentive for people who highly value speaking truth to stay away from you as much as possible.
I think you’re looking for Thurston’s “On proof and progress in mathematics”: https://arxiv.org/abs/math/9404236
I think I may agree with the status version of the anti-hypocrisy flinch. It’s the epistemic version I was really wanting to argue against.
Ok yeah, I think my concern was mostly with the status version—or rather that there’s a general sensor that might combine those things, and the parts of that related to status and social management are really important, so you shouldn’t just turn the sensor off and run things manually.
… That doesn’t seem like treating it as being about epistemics to me. Why is it epistemically relevant? I think it’s more like a naive mix of epistemics and status. Status norms in the back of your head might make the hypocrisy salient and feel relevant. Epistemic discourse norms then naively suggest that you can resolve the contradiction by discussing it.
I was definitely unclear; my perception was that the speaker was claiming “person X has negative attribute Y, (therefore I am more deserving of status than them)” and that, given a certain social frame, who is deserving of more status is an epistemic question. Whereas actually, the person isn’t oriented toward really discussing who is more deserving of status within the frame, but rather is making a move to increase their status at the expense of the other person’s.
I think my sense that “who is deserving of more status within a frame” is an epistemic question might be assigning more structure to status than is actually there for most people.
I will see if I can catch a fresh one in the wild and share it. I recognize your last paragraph as something I’ve experienced before, though, and I endorse the attempt to not let that grow into righteous indignation and annoyance without justification—with that as the archetype, I think that’s indeed a thing to try to improve.
Most examples that come to mind for me have to do with the person projecting identity, knowledge, or an aura of competence that I don’t think is accurate. For instance holding someone else to a social standard that they don’t meet, “I think person X has negative attribute Y” when the speaker has also recently displayed Y in my eyes. I think the anti-hypocrisy instinct I have is accurate in most of those cases: the conversation is not really about epistemics, it’s about social status and alliances, and if I try to treat it as about epistemics (by for instance, naively pointing out the ways the other person has displayed Y) I may lose utility for no good reason.
As you say, there are certainly negative things that hypocrisy can be a signal of, but you recommend that we should just consider those things independently. I think trying to do this sounds really really hard. If we were perfect reasoners this wouldn’t be a problem; the anti-hypocrisy norm should indeed just be the sum of those hidden signals. However, we’re not; if you practice shutting down your automatic anti-hypocrisy norm, and replace it with a self-constructed non-automatic consideration of alternatives, then I think you’ll do worse sometimes.
This has sort of a “valley of bad rationality” feel to me; I imagine trying to have legible, coherent thoughts about alternative considerations while ignoring my gut anti-hypocrisy instinct, and that reliably failing me in social situations where I should’ve just gone with my instinct.
I notice the argument I’m making applies generally to all “override social instinct” suggestions, and I think that you should sometimes try to override your social instincts—but I do think that there are huge valleys of bad rationality near this, so I’d take extreme care about it. My guess is that you should override them much less than you do—or I have a different sense of what “overriding” is.
One hypothesis is that consciousness evolved for the purpose of deception—Robin Hanson’s “The Elephant in the Brain” is a decent read on this, although it does not address the Hard Problem of Consciousness.
If that’s the case, we might circumvent its usefulness by having the right goals, or strong enough detection and norm-punishing behaviors. If we build factories that are closely monitored where faulty machines are destroyed or repaired, and our goal is output instead of survival of individual machines, then the machines being deceptive will not help with that goal.
If somehow the easy and hard versions of consciousness separate (i.e., things which don’t functionally look like the conscious part of human brains end up “having experience” or “having moral weight”), then this might not solve the problem even under the deception hypothesis.
Some reader might be thinking, “This is all nice and dandy, Quaerendo, but I cannot relate to the examples above… my cognition isn’t distorted to that extent.” Well, let me refer you to UTexas CMHC:
Maybe you are being realistic. Just for the sake of argument, what if you’re only 90% realistic and 10% unrealistic? That means you’re worrying 10% “more” than you really have to.
Not intending to be overly negative, but this is not a good argument for anything and also doesn’t answer the hypothetical question of not relating to the examples. It sounds like, “You’re not perfect along this dimension, so you should devote energy to it!” -- which is definitely not the case.
I appreciate the list of distortions; such lists are nice raw material.
For most questions you can’t really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision.
I don’t think this is true; intuition + explicit reasoning may have more of a certain kind of inside view trust (if you model intuition as not having gears that can be trustable), but intuition alone can definitely develop more outside-view/reputational trust. Sometimes explicitly reasoning about the thing makes you clearly worse at it, and you can account for this over time.
Finally, it is the explicit reasoning part which allows you to offset the biases that you know your reasoning to have, at least until you trained your intuition to offset these biases automatically (assuming this is possible at all).
I also don’t think this is as clear cut as you’re making it sound; explicit reasoning is also subject to biases, and intuitions can be the things which offset biases. As a quick and dirty example, even if your explicit reasoning takes the form of mathematical proofs which are verifiable, you can have biases about 1. which ontologies you use as your models to write proofs about, 2. which things you focus on proving, and 3. which proofs you decide to give. You can also have intuitions which push to correct some of these biases. It is not the case that intuition → biased, explicit reasoning → unbiased.
Explicit reflection is indeed a powerful tool, but I think there’s a tendency to confuse legibility with ability; someone can have the capacity to do something (like use an intuition to correct a bias) in a way that is illegible to others or even to themselves. It is hard to transmit such abilities, and without good external proof of their existence or transmissibility we are right to be skeptical and withhold social credit in any given case, lest we be misled or cheated.
If you choose to “care more” about something, and as a result other things get less of your energy, you are socially less liable for the outcome than if you intentionally choose to “care less” about a thing directly. For instance, “I’ve been really busy” is a common and somewhat socially acceptable excuse for not spending time with someone; “I chose to care less about you” is not. So even if your one and only goal was to spend less time on X, it may be more socially acceptable to do that by adding Y as cover.
Social excusability is often reused as internal excusability.
Some reasons this is bad:
It’s false or not-even-wrong (“worthless parody of a human” is not something that I imagine epistemically applies to any human ever.)
It’s mixing epistemics and shoulds—even if you categorized yourself as a misery pit, this does not come close to meaning you should throw yourself under a bus.
Misery pits are a false framework that may be useful for modeling phenomena, but may not be a useful model for people who would tend to identify themselves as misery pits. For instance, if they were likely to think the quoted thought, they’d be committing a lot of bucket errors.
I also dislike this comment because I think it’s too glib.
I think it’s a memetic adaptation type thing. I would claim that attempting to open up the group usage of NVC will also (in a large enough group) open up the usage of “language-that-appears-NVCish-even-if-against-the-stated-philosophy”. I think that this type of language provides cover for power plays (re: the broken link to the fish selling scenario), and that using the language in a way that maintains boundaries requires the group to adapt and be skillful enough at detecting these violations. It is not enough to do so as an individual if your group does not lend support; it may be enough if as an individual you are highly skilled at defending yourself in a way that does not lose face (and practicing NVC might raise that skill level), but it’s harder than in the alternative scenario.
I’m definitely not trying to object to NVC in general, but I’m worried about it as a large social group style. I think the failures of it as a large group style would mostly appear as relatively silent status transfers to the less virtuous.
Also, these arguments are not super specific to NVC and Circling, so should probably be abstracted. I think any large scale group communication change has similar bad potential, and it’s an object level question whether that actually happens. With NVC, I’ve seen dynamics in churches that remind me of it, which is why I raise the worry. I think I would feel queasy and like I was being attacked if someone started using NVC language at me in a public setting in front of others; I definitely feel like I’ve been “fish-sold” before.
It’s entirely possible that there exist large groups with a high enough skill level or different values so that this is not a problem at all, and my experience is just too limited.
This is incorrect and I think only sounds like an argument because of the language you’re choosing; there’s nothing incoherent about 1. preferring evolutionary pressures that look like Moloch to exist so that you end up existing rather than not existing, and 2. wanting to solve Moloch-like problems now that you exist.
Also, there’s nothing incoherent about wanting to solve Moloch-like problems now that you exist regardless of Moloch-like things causing you to come into existence. Our values are not evolution’s values, if that even makes sense.
I’m not an expert, but I think MD5 isn’t the best for this purpose due to collision attacks. If it’s a very small plain-english ASCII message, then collision attacks are probably not a worry (I think?), but it’s probably better to use something like SHA-2 or SHA-3 anyways.
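As a minimal sketch of what that swap looks like in practice (the message text here is a made-up placeholder), Python’s standard `hashlib` exposes both SHA-2 and SHA-3 alongside MD5:

```python
import hashlib

# A hypothetical small plain-English ASCII message.
message = b"meet me at noon"

# MD5 is vulnerable to collision attacks, so prefer SHA-2 or SHA-3.
sha2_digest = hashlib.sha256(message).hexdigest()
sha3_digest = hashlib.sha3_256(message).hexdigest()

# Both produce 256-bit (64 hex character) digests.
print(sha2_digest)
print(sha3_digest)
```

Swapping in the stronger hash is a one-line change, which is part of why there’s little reason to stick with MD5 even when collisions probably don’t matter.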
Yeah, this definitely seems like a bug; permalinks to comments shouldn’t require this. Unfortunately, I don’t see any obvious way to report a bug.
Upfront note: I’ve enjoyed the circling I’ve done.
One reason to be cautious of circling: dropping group punishment norms for certain types of manipulation is extremely harmful. From my experience of circling (which is limited to a CFAR workshop), it provides plausible cover for very powerful status grabs under the aegis of “(just) expressing feelings and experiences”; I think the strongest usual defense against this is actually group disapproval. If someone is able to express such a status grab without receiving overt disapproval, they have essentially succeeded unless everyone in the group truly is superhuman at later correcting for this. If mounting the obvious self-defense against the status grab is taken off the table, then you may just lose painfully unless you can out-do them.
Normalizing circling (or NVC) too much could lead to externalities, where this happens outside of an actual circling context. This could lead to people losing face who normally wouldn’t, along with arms races that turn an X community into a circling-skill community.
If people are allowed to fish sell you (https://www.lesserwrong.com/posts/aFyWFwGWBsP5DZbHF/circling#E9dqjhm8Ca3HkFRMZ), and walking away loses you social status, and other people look on expectantly for your answer as you are fish sold instead of saying “Stop, they don’t want to buy your fish”, then depending on the type of fish and what escape routes to other social circles you have available, you may be in a hellishly difficult situation.
Note that I think this is bad regardless of your personal skill at resisting social pressure. The social incentive landscape changing leads to worse outcomes for everyone, even if you can individually get better outcomes for yourself by better learning to resist social pressure. That better outcome may be moving to a different community instead of being continually downgraded in status, which is a worse outcome than the community never having that bad incentive landscape to begin with.
“Complaining about your trade partners” at the level of making trade decisions is clearly absurd (a type error). “Complaining about your trade partners” at the level of calling them out, suggesting in an annoyed voice they behave differently, looking miffed, and otherwise attempting to impose costs on them (as object level actions inside of an ongoing trade/interaction which you both are agreeing to) is not. These are sometimes the mechanism via which things of value are traded or negotiations are made, and may be preferred by both parties to ceasing the interaction.