Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
People have an annoying tendency to hear the word “rationalism” and think “Spock”, despite direct exhortation against that exact interpretation. But I don’t know of any source directly describing a stance toward emotions which rationalists-as-a-group typically do endorse. The goal of this post is to explain such a stance. It’s roughly the concept of hangriness, but generalized to other emotions.
That means this post is trying to do two things at once:
Illustrate a certain stance toward emotions, which I definitely take and which I think many people around me also often take. (Most of the post will focus on this part.)
Claim that the stance in question is fairly canonical or standard for rationalists-as-a-group, modulo disclaimers about rationalists never agreeing on anything.
Many people will no doubt disagree that the stance I describe is roughly-canonical among rationalists, and that’s a valid and useful thing to argue about in the comments, in proportion to how well you actually know many rationalists.
Central Example: “Hangry”
When we’re hangry, it feels like people around us are doing stupid, inconsiderate, or otherwise bad things. It feels like we’re justifiably angry about those things. But then we eat, and suddenly our previous anger doesn’t feel so justified any more.
When we’re hangry, our anger is importantly wrong, or false in some sense. The feelings are telling our brain that other people are doing stupid, inconsiderate, or otherwise egregious things. And later, on reflection, we will realize that our feelings were largely wrong about that; the feelings were not really justified by the supposed wrongdoings.
But the correct response is not to dismiss or ignore the feelings! Even if the feelings “tell us false things” in some sense, those feelings still result from an important unmet need: we need food! The correct response isn’t to ignore or dismiss the anger, the correct response is to realize that the anger is mostly caused by hunger, and to go eat.
The word “hangry” conveys this whole idea in two syllables. And crucially, the existence of “hangry” as a word normalizes the phenomenon—more on that later.
I consider the word “hangry” to be one of the main ways in which mainstream society has become more sane in the past ~10 years. In a single word, it perfectly captures the stance toward emotions which I want to describe. We just need to generalize hangriness to other emotions.
The Generalized Hangriness Stance
The stance itself involves three main pieces:
Emotions make claims, which can be true or false.
Even false claims convey useful information, just not the information they say they convey.
So, the core move is to ask “Setting aside what this emotion claims, what information does it actually give me?” (which, to be clear, will sometimes-but-not-consistently match what the emotion claims).
Emotions Make Claims, And Their Claims Can Be True Or False
Words have semantics. If someone tells me “there’s a bathroom down the hall around the corner”, then when I walk down the hall and turn the corner, I expect to see a bathroom. A physical bathroom being in that physical spot is the main semantic claim of the words.
Likewise, emotions have semantics; they claim things. Anger might claim to me that it was stupid or inconsiderate for someone to text me repeatedly while I’m trying to work. Excitement might claim to me that an upcoming show will be really fun. Longing might claim to young me “if only I could leave school in the middle of the day to go get ice cream, I wouldn’t feel so trapped”. Satisfaction might claim to me that my code right now is working properly: it’s doing what I wanted.
As with words, those semantic claims can be true or false.
If someone claims to me that there’s a bathroom down the hall around the corner, and then I go down the hall and around the corner and there’s no bathroom, I update that their claim was probably false. (Even more so if it turns out there is no corner, or possibly even no hallway.) If I go down the hall and around the corner and find a bathroom, then the claim was true.
If my anger claims to me that it was stupid or inconsiderate for someone to text me repeatedly while I’m trying to work, but on reflection I realize that I didn’t indicate I was busy and can’t reasonably expect them to guess I was busy, I update that my anger’s claim was probably false. If on reflection I have told the person many times before that texts during work hours are costly to me, then I update that my anger’s claim was probably true.
If my excitement claims to me that an upcoming show will be really fun, and the show turns out to be boring, then the claim was false. If the show turns out to be, say, the annual panto at the Palladium, then the claim was very conclusively true.
If my longing claims to young me “if only I could leave school in the middle of the day to go get ice cream, I wouldn’t feel so trapped”, and upon growing older and having the freedom to go get ice cream in the middle of the day I still feel trapped, I update that my longing’s claim was probably false. In fact I do now have the freedom to get ice cream in the middle of the day, and I generally do not feel trapped, so that’s an update toward my longing’s claim being true.
If my satisfaction claims to me that my code right now is working properly, and it turns out that an LLM simply overwrote my test code to always pass, then my satisfaction’s claim is false. If it turns out that my code is indeed working properly, then my satisfaction’s claim is true.
In general, if you want to know what an emotion is claiming, just imagine that the emotion is a person or cute animal who can talk, and ask what they say to you.
False Claims Still Contain Useful Information (It’s Just Not What They Claim)
Let’s say I feel angry, so I imagine that my anger is a character named Angie and I ask them what’s up. And Angie starts off on a rant about how this shitty software library has terrible documentation and the API just isn’t doing what I expected and I’ve been at this for three fucking hours and goddammit I’m just so tired of this shit.
So, ok, Angie claims to be angry about the shitty software library. Fair enough, most software libraries are in fact hot trash. But c’mon, Angie, usually we’re not this worked up about it. What’s really going on here? And Angie pauses for a moment and is like “Man, I am just so tired.” Perhaps what is really needed is… a break? Perhaps a nap? Perhaps a snack or some salt (both of which often alleviate tiredness)?
In a case like this, my anger is making claims about the quality of a software library. And those claims are… probably somewhat exaggerated in salience, even if not entirely false. But even insofar as the claims themselves are false, they still convey useful information. The anger may be wrong about the quality of the software library, but it still contains useful information: I’m tired. As a rough general rule, strong emotions are strong because some part of me is trying to tell me something it thinks is important… just not necessarily the thing the emotion claims.
“Pretend the emotion is a person or cute animal who can talk” is a pretty great trick. Not just for checking what they say, but for checking what they don’t say. See, lots of people have good enough social instincts to ask “Is that what’s really bothering you?” when someone else is worked up, but it’s a harder skill to pose that question to oneself. Picturing the emotion as a person or animal triggers that external perspective, makes it easier to notice that maybe the emotion is bothered by something other than what it’s saying.
But you can also just ask yourself “What’s really generating this emotion? What can I actually guess from it, setting aside the claims the emotion makes?”.
… and once one starts down that path, very often the answer turns out to be “I’m scared of X and this emotion wants to protect me from X”. Often X is social disapproval of some sort (ranging from a glare to outright ostracism), or something the person has been burned by in the past. And that’s why so many rationalists end up down a rabbit hole of trauma processing, or relational practices meant to make people feel loved and supported, or other borderline woo-ish things. An awful lot of those woo-ish things are optimized to deal with exactly these sorts of emotion generators.
The Generalized Hangriness Stance as Social Tech
Arguably the best thing about the word “hangry” is that its existence normalizes hangriness.
20 years ago, if some definitely-hypothetical person suggested to their hypothetical romantic partner that perhaps the partner was not angry about the thing they were ranting about, but was instead grumpy from being hungry… yeah, uh, that would normally not go over well. The partner would feel like their completely-valid(-feeling) emotions were just being brushed off, with some nonsense about being hungry.
But with the word hangry, it’s a lot easier to say “You seem maybe hangry right now, how about you have something to eat, and if you still feel this way after, then we can talk about it”. That doesn’t always work; it might still feel like being brushed off if someone’s worked up enough and/or sufficiently terrible at understanding their emotions. (Also people might sometimes in fact try to brush off other people’s valid emotions by hypothesizing hangriness, but that’s a trick which only delays things for like 20 minutes if one responds with the obvious test of eating something.) But it works a lot better than trying to convey the same thing before hangriness was normalized as a concept.
Alas, there’s still no word which normalizes this kind of thing more generally.
Telling people in the moment that the things their emotions are telling them seem false, and perhaps their emotions convey some other information… is usually not the right move unless you’re very unusually good at making people feel seen while not telling them they’re being an idiot. Because, yes, someone ranting due to hangriness is being an idiot, and no, directly telling them they’re being an idiot does not help. Either they need to have already bought into the idea that a very large chunk of most people’s emotions claim false things but nonetheless convey useful information, or they’ll need to feel seen before anything else works. And it’s very hard to make such a person feel seen without at least somewhat endorsing whatever idiocy their emotions are claiming.
Among other rationalists, I usually expect that people are on board with the Generalized Hangriness Stance, so it’s usually ok to say something like “Look, I think you feel like X, but I suspect your feeling is in fact coming from Y rather than X. And to be clear I could be wrong here, but I think this should at least be in our hypothesis space. We can look at A, B, C as relevant evidence, and maybe try D, and if that doesn’t work then I’ll update that the feeling probably is coming from Y after all.” Where, to be clear, the explicit “I could be wrong here” is an extremely load-bearing part of what makes this all work. Another good wording I use frequently is “I’m not sure this is true, but here’s a model of what’s going on…” ideally peppered with frequent reminders that this is a model and I’m not asserting that it’s correct.
Point is, this Stance toward emotions isn’t just individually useful. Arguably most of its value is as social tech. When most people in a space are on board with the Generalized Hangriness Stance, it becomes possible-at-all to point out to people that maybe their emotions are claiming stupid things, without that necessarily coming across as an attack on the person (and triggering defensiveness). And it then also becomes possible to help someone figure out what information their emotions actually convey, and help them with what they actually need (like e.g. eating). Some skill is still required, but it’s much more tractable when there’s common knowledge that people are on board with the Generalized Hangriness Stance.
For readers who need the opposite advice: I don’t think the things people get hangry about are random, just disproportionate. If you’re someone who suppresses negative emotions or is too conflict-averse or lives in freeze response, notice what kind of things you get upset about while hangry: there’s a good chance they bother you under normal circumstances too, and you’re just not aware of it.
Similar to how standard advice is don’t grocery shop while hungry, but I wouldn’t buy enough otherwise.
You should probably eat before doing anything about hangry thoughts though.
Unless you’ve observed that you tend to unendorsedly let things slide once you’re fed. In that case, better do something about the problem while you’re hangry.
Linking to the Reverse All Advice post is itself a way to label an action collaborative. Without it, I risk coming off as thinking the original author made a mistake or should have explicitly addressed my point.
Indeed, I myself sometimes need to listen to my emotions very carefully to understand that I am doing something that I shouldn’t as I am quite skillful in ignoring them. Objectifying your own emotions can only be done if you feel and discern them quite well to begin with.
This rhymes with how one treats feature recommendations from users. It is typically the case that a user advising you to make a change does indeed have a problem when using your product that they’re trying to solve, and you should figure out what that problem is, but their account of how to solve it (what ‘improvement’ to make) is usually worth throwing out the window.
See also:
(And, really, the rest of the comments on the post “Incorrect hypotheses point to correct observations”, as well as the post itself. Highly relevant!)
See also: https://en.m.wikipedia.org/wiki/XY_problem
I don’t think this stance is as rare as you think. My partner (who doesn’t care for rationalism in general and has never met a rationalist other than (I guess) me) regularly says things like “[general wrath] oh wait my period is starting, that’s probably why I’m raging, nevermind” and “have you considered that you’re only being depressive about [side project] because [main job] is going badly?”.
I will admit that selecting on “people who are in a relationship with me” is a pretty strong filter. Overall I’m hopeful for this social tech to become more common.
(In fact, now that I think about it, were the “you’re being hysterical dear” comments of old actually sometimes a version of this, as opposed to being—as is often now assumed—abhorrent levels of sexism?)
Agreed, I don’t think it’s actually that rare. The rare part is the common knowledge and normalization, which makes it so much easier to raise as a hypothesis in the heat of the moment.
Trying to suggest that someone else’s bad mood might be caused by their period would be considered horribly sexist by most people. So you can only hope that they might notice it themselves… or very gently and non-specifically point towards the general idea of hangriness and hope that they can connect the dots...
And this is more likely to work if the concept is a frequently used common knowledge.
My stance towards emotions is to treat them as abstract “sensory organs” – because that’s what they are, in a fairly real sense. Much like the inputs coming from the standard sensory organs, you can’t always blindly trust the data coming from them. Something which looks like a cat at a glance may not be a cat, and a context in which anger seems justified may not actually be a context in which anger is justified. So it’s a useful input to take into account, but you also have to have a model of those sensory organs’ flaws and the perceptual illusions they’re prone to.
(Staring at a bright lamp for a while and then looking away would overlay a visual artefact onto your vision that doesn’t correspond to anything in reality, and if someone shines a narrow flashlight in your eye, you might end up under the impression someone threw a flashbang into the room. Similarly, the “emotional” sensory organs can end up reporting completely inaccurate information in response to some stimuli.)
Another frame is to treat emotions as heuristics – again, because that’s largely what they are. And much like other rules of thumb, they’re sometimes inapplicable or produce incorrect results, so one must build a model regarding how and when they work, and be careful regarding trusting them.
The “semantic claims” frame in this post is also very useful, though, and indeed makes some statements about emotions easier to express than in the sensory-organs or heuristics frames. Kudos!
Another example of this pattern that’s entered mainstream awareness is tilt. When I’m playing chess and get tilted, I might think things like “all my opponents are cheating,” “I’m terrible at this game and therefore stupid,” or “I know I’m going to win this time, how could I not win against such a low-rated opponent.” But if I take a step back, notice that I’m tilted, and ask myself what information I’m getting from the feeling of being tilted, I notice that it’s telling me to take a break until I can stop obsessing over the result of the previous game.
Tilt is common, but also easy to fix once you notice the pattern of what it’s telling you and start taking breaks when you experience it. The word “tilt” is another instance of a hangriness-type stance that’s caught on because of its strong practical benefits—having access to the word “tilt” makes it easier to notice.
This strikes a chord with me. Another maybe similar concept that I use internally is “fried”. Don’t know if others have it too, or if it has a different name. The idea is that when I’m drawing, or making music, or writing text, there comes a point where my mind is “fried”. It’s a subtle feeling but I’ve learned to catch it. After that point, continuing working on the same thing is counterproductive, it leads to circles and making the thing worse. So it’s best to stop quickly and switch to something else. Then, if my mind didn’t spend too long in the “fried” state, recovery can be quite quick and I can go back to the thing later in the day.
I call it “bleary” when I want to connote that it’s fried-ness that isn’t from overwork. I have not known that “fried” is what I’m contrasting it to until I read your comment and the words you chose. Thanks!
I have never heard of this usage of “tilt” before. Do you perchance have any links to examples in the wild?
The most likely etymology I’m aware of is via pinball, where a pinball machine would disable its controls and drain the ball if it detected that something was applying too much unexpected physical force to the machine. Anecdotally, the generalization of that to the “losing one’s ability to make controlled plays after winding up in an unusual agitating situation” sort of meaning later made its way into poker, which is where at least one LW-popular personality definitely picked it up. From Zvi’s “Book Review: On The Edge: The Gamblers”:
and:
If you have this same class of question in the future, Wiktionary is often a reasonable place to look for quotations; the page for “tilt” describes this sense as sense 8 (both noun and verb) and gives several examples.
I heard this usage of “tilt” a lot when I used to play League of Legends, but almost never heard it outside of that, so my guess is that it’s gamer slang.
It’s in live usage among Magic players, who may have gotten it from poker — or even from Zvi specifically for that matter.
That is surprising. We often used the word in high school ~10 years ago and I’m not even a native speaker. Example
This thesis on poker players has a section on it:
I think it’s clearer to say your emotions make you claim various potentially irrational things. This is one reason rationalists become particularly scared of their emotions, even though the behaviors your emotions induce might often be adaptive. (After all, they evolved for a reason.)
Emotions can motivate irrational behavior as well as irrational claims, so even people who aren’t as truth-inclined often feel the need to resist their own emotions as well, as in anger management. However, emotions are particularly good at causing you to say untrue things, hence their status as distinguished enemies of rationality.
(Edit: Or maybe our standards for truthful claims are just much higher than our default standards for rational behavior?)
That doesn’t sound quite right to me; my emotions might be claiming various things to me, even as the overall-system-that-is-me recognizes that those claims are incorrect and doesn’t let them change my overall behavior. (But there’s still internal effort being expended on the not-going-along thing.)
There’s two imporant things missing here:
1: You mainly advocated for solving projected negativity. Positive emotions “lie” as well, and they can cause the opposite of hangriness. If you were logically consistent and indeed wished to maximize correct information, then you’d seek to destroy excessively positive emotions as well. And I don’t want to call you dishonest, but I don’t think that most rationalists would destroy a state of agape or happiness just because it’s “wrong”. Furthermore, positive emotions have utility, even if they’re wrong. This community does not seem to realize this yet, but only some ignorance and some delusion is harmful.
2: Emotions and experiences aren’t one-way, but two-way. Your emotions will tell you something about the world, but what you’re told about the world will affect your emotions. This leads to feedback loops. Things like being hangry just make this feedback loop more likely to go in a negative direction. Any valence in your body affects your experience of reality. If your body feels really good, then your experiences will all tend towards being pleasant. The reason I bring this up is that you’re trying to solve an equation which depends on itself, and which is affected by subjective things, and then update your belief about objective reality based on it. A lot of highly intelligent people have depression and struggle to escape this state with logic alone, and this is one of the insights which helped me break out personally.
Point 2 might even imply that it’s incorrect to generalize hangriness as “emotions”. Having a headache will also make all experiences less pleasant, but a headache is not an emotion. If I’m right, then painkillers could potentially improve your mood. This would make for even better social tech—if the person you’re talking to seems annoyed, maybe they’re just too warm or too cold, understimulated or overstimulated, etc.
If you take these ideas further, you can do fun things, like updating your beliefs about the world in order to improve your experiences (I think therapy is about doing this), starting a feedback loop (for the sake of productivity), changing your interpretations of facts in order to turn negative valence into positive valence (“Life is suffering (-)” → “Life is an undeserved gift (+)”), or focusing on positive sensations in your body and strengthening the pathways which allow for this, so that you can notice when you’re hungry more quickly and amplify positive valence at will.
I entirely agree that “positive emotions ‘lie’ as well”; but I think that—often, likely usually, perhaps not “always” (though I’d have to give it some thought to be sure)—such false positive emotions are indeed dangerous and harmful, and ought to be “destroyed” (i.e., corrected).
For example, love for someone who does not deserve your love, treats you poorly, even abuses you—this is harmful, and you would be much better off to recognize the falsity of that emotion, and to bring it in line with reality. Misplaced affection, misplaced nostalgia, “rose-colored glasses”—these too are examples of “false positive emotions”. Satisfaction at a job well done, when in fact the job has been done poorly; pride, when in fact you’ve acted shamefully; anticipation of success, when in fact failure is nearly guaranteed (and the action to be taken is entirely optional); and many other examples… such positive emotions are irrational, in the literal sense of failing to systematically track reality; and they absolutely should be seen as mistakes to be corrected.
I can agree with “often”. I think there may be multiple classes of beliefs connected with emotions. The general rule is probably “Beliefs which result in a wrong map are dangerous”. The example I gave earlier of “Life is an undeserved gift” seems to add value to life (which results in gratitude) without any negative side-effects. Wait, wrong maps can be harmless as long as the territory is never explored. If you mistakenly believe that tigers are harmless, you won’t suffer as long as you never meet a tiger. This implies that belief in god (or belief in no god) won’t have any effects besides the psychological ones, because the belief cannot have a consequence for you (unfalsifiable → something we cannot interact with → harmless)
You can also cheat social reality. If your emotions can get other people to believe that they’re true, you’ve basically won. For instance, if you feel like a victim, and manipulate other people into thinking that you’re the victim, they will put effort into “correcting” reality by compensating you for the damages you’ve suffered. None of this manipulation works on objective reality, though, it’s only social reality in which it can ever be effective.
There’s many reasons to believe that correct knowledge isn’t optimal—religion seems to have added fitness (the natural selection kind) to human socities, humans are naturally biased, and our brains intentionally lie to us (when you feel like you cannot do any more pushups, you’re only about halfway to your actual limit), plus, infohazard are a thing. When rational people figure out that knowing more is better, I think it’s because they know more relative to other people, which gives them an advantage. I don’t think that everyone having more information is necessarily a good thing. I actually think excessive information is the reason why Moloch exists.
Reframing your thoughts such that you don’t step on your own toes is great. I am not quite sure what you are trying to argue for with the religion point. Do you actually endorse this on a practical basis? Have you convinced yourself that you need to make yourself believe untrue things, or that it is good if other people believe untrue things in just the right way that they cancel each other out? Believing in God seems a particularly bad example. Having high alief that you can solve a problem that lots of people haven’t solved before you might be fine for minutes or a few days. But I can’t see how you would get psychological benefits from believing in a god or fate and whatever, without that messing up your epistemics.
I believe that when reality and theory are in conflict, reality is the winner, even when it appears irrational. If religion wasn’t a net positive, it wouldn’t manifest in basically every culture to ever exist. I both believe that there exist false beliefs with positive utility, and that true knowledge about the world can be interpreted in multiple ways, some of which are harmful and some of which are beneficial. Preferences and interpretations seem much more important than knowledge and truth. All animals, and all humans except modern man, have not had a good grasp on knowledge and rationality, but have thrived just fine without them.
I think believing in god could, for instance, make you more resilient to negative events which would otherwise put you in a state of learned helplessness. But the belief that god is real, on its own, probably doesn’t make a large difference. Behaving as if god is real is probably more effective. A lot of people seem to have had positive outcomes from attempting this, and I think that the utility speaks for itself. I don’t personally do either, but only because I’ve combated nihilism and made myself immune to existential problems through other means.
I think that the perspectives from which knowledge and truth appear as the highest values are very neglectful of the other, more subjective and human aspects of life, and that neglecting these other aspects can put you in some very difficult situations (e.g. inescapable through logic and reasoning alone, since it’s basically logic and reasoning which trap you)
If there are things which rational and intelligent people can’t do, which stupid people can do, then the rational and intelligent people are not playing optimally. Most rational people seem unable to resist dangerous incentives (e.g. building an AGI, because ‘otherwise, our competitors would build it and outcompete us!’), but I know many regular, average-IQ people who do not have problems like this. Their subjective preferences keep them from harmful exploitation, and because of the large ratio of likeminded people around them, they’re not put at a disadvantage by these preferences. Does this not seem weird? A group of less intelligent people have avoided a problem which is mathematically unsolvable (except maybe from the perspective of Repeated Games, but in reality you tend to get very few repetitions). Religion might even be one of the things keeping these dilemmas at bay. Chesterton’s fence and all that
Are you aware that transposons are a thing? Also prions?
Memetics is similar enough to biology in this regard that, even just on priors, we should expect the existence of purely parasitic memes, beliefs which propagate without being long-term net positive for the hosts (i.e. humans). And on examination of details, that sure does seem to be the case for an awful lot of memes, especially the ideological variety.
I’m not sure if that proves purely parasitic memes, but I do think that unhelpful memes can manifest unless they’re selected against.
That said, I think it’s a solid idea to judge things by their outcomes (a flawless looking theory is inferior to a stupid theory if it brings about worse outcomes). In the case of ideologies, which I mainly consider to be modern movements rather than traditional cultures, I think we can judge them as bad not because the people involved in them are being irrational and wrong (they are), but because they’re also deeply unhappy and arguably acting in pathological patterns. And in my view of the world, social movements aren’t memes or the results of them, they’re symptoms of bad mental development.
I judge self-reported well-being, and biological indicators of health to be the best metrics we have to judge the success of peoples. Anyone who uses GDP as a metric for improvement in the world will conclude things which are entirely in conflict with my own conclusions. If you ask me, the Amish are doing just fine, whereas the modern American is in poor shape both physically and mentally. But from what I gather, Amish people are much less educated and poor on average.
To complicate “Judge things by their outcomes” further, imagine two people:
Mr. A saves $100 every week; he does not have a lot left over for fun because he plays it safe.
Mr. B runs a $100 deficit every week. He enjoys himself and throws parties every now and then.
From an outside perspective, Mr. A will look like a poor person who can’t afford to enjoy himself, and Mr. B will seem like he’s in a comfortable position. When people look at society and judge how it’s doing, I believe they’re misled by appearances in exactly this manner. Waste can appear as wealth, and frugality can appear as poverty.
I was engaging with this because I thought maybe you were advocating for some kind of doublethink, and I might spell you out of that, but this doesn’t seem to be the case. I am not interested in getting deep into that religion argument (too many different people and different religions). Yes, there are some topics like ethics where most people don’t benefit from reasoning about them explicitly, and even smart people tend to get very confused by them. I remember I was confused for days by my first introduction to ethics.
Some of the things mentioned on the doublethink page do apply here. As for talks about religion, the religions in question are unrelated. “Behaving as if god is real” is just a way of priming one’s subconscious for a certain way of living. If one “has more than one god”, they might attempt to live by contradicting rules, which brings all sorts of negative effects with it. Imagine a person trying to make a serious comedy movie—sticking to either genre would likely be better, not because one is better than the other, but because pure worldviews have less conflict.
Anyway, many (about half) of the claims on the link you sent me are wrong. You can believe that the sky isn’t blue, and you don’t even need to lie to yourself (simply think like this: the color you see is only what is reflected, so the sky is actually every color except blue). You can unlearn things, and while happiness is often a result of ignorance, you could also interpret knowledge in a way that does not invoke unhappiness (acceptance is usually enough). That climbing takes more effort is unrelated—ignorance is not about avoiding effort. That there’s more to life than happiness is also unrelated—your interpretations of things decide how meaningful your life is. The link also seems to imply that biases are wrong—are they really? I think of them as locally right (and as increasingly wrong as you consider a larger scope of life than your own local environment)
As a side note, even if rationality is optimal, our attempts to be rational might work so poorly that not trying can work out better. Rationalism is mostly about overcoming instinctual behaviour, but our instincts have been calibrated by darwinism, so they’re quite dangerous to overwrite. Many smart people hurt themselves in ways that regular people don’t, especially when they’re being logical (Pascal’s wager, for instance). One’s model of the world easily becomes a shackle/self-imposed limitation
I think a decent chunk of rationalists (myself included) are very aware that positive emotions can be lying to you—the notion of metaphorically “wireheading” is in the water supply, as are manic episodes, as is the fact that SBF was taking lots of stimulants which probably caused him to take stupid risks, as are the notions of limerence and NRE.
On the other hand you have the jhana folks who seem to be actively trying to train their emotions to be less correlated with reality...
I’ve seen hangriness-style advice circulating on twitter (via Zvi, so perhaps in the rationalist milieu) and tiktok (not in the rationalist milieu afaict).
My rules of thumb:
If I’m too hungry to tell whether I should eat, I should eat.
If I’m too tired to tell whether I should take a nap, I should take a nap.
If I’m too drunk to tell whether I should take another drink, I should… actually no wait.
When doing introspection on where your emotions come from, I think it’s important to have some sort of radical self-acceptance/courage. As a human, you might dig deep and discover parts of yourself that you might not endorse. (For example, I don’t want to X because it means I’ll lose social status.)
I think this is also another instance where some sort of high decoupling comes in handy. You want to be able to discover the truth of why you’re feeling a certain way, decoupled from judgement of like “oh man if I’m the type of person to feel X/want Y deep down, that means I’m a bad person.”
Great post!
telling the truth is a skill that parts can get better at. Part of the skill is with the part itself and part is on the listener side to not do any gaslighting or weird avoidant stuff back at the part.
It would make sense that you would like a show put on at the LW theaters.
Promoted to curated: while I am a bit uncertain how to feel about the claims of how widespread these attitudes are among rationalists compared to the rest of the population, the explanation of the underlying emotional attitude seems very valuable to me. Indeed, I am probably somewhat of an outlier in the degree to which I find it important to have this attitude present in my social environment, and I particularly appreciate a writeup of it.
I don’t think this claim is correct. I have not noticed this being particularly common among rationalists relative to other similar populations, nor normative.
I think it’s probably unusually common among postrationalists, but those are a very different culture from rationalists, grounded primarily in not sharing any of the main assumptions common to rationalists.
Usually I also take emotions as a channel to surface unconscious preferences (either situational or longer term), which helps with making that preference conscious as well as evaluated, and thus helps with rational decisions.
This was one of the first LessWrong posts I’ve made it all the way through, and I appreciated the journey you took your thoughts through. I like the underlying idea that we can extend deeper social grace when we have a) common terms and b) ready mental models for why someone is “acting out” that don’t beg some question to remove them. That hangry thoughts are ephemeral and easy to resolve lends itself to tolerance. I think that’s what some commenters are latching onto when they describe this as more commonly held: extending grace for extraneous personal circumstances is ingrained pretty deeply in some if not most cultures.
If ‘hangry’ is unique as a category because it’s new, it’s digestible, and it doesn’t require much personal sacrifice on behalf of the person burdened with the choice of leniency, what other categories are there? Are we seeing a removal of them in real time? I think of groups like incels who bond over shared misery but are isolated from influencing norms because of distress behaviors that beg too much of others. I’ve seen some people joke about “being horny on main” as a way to jockey sympathy for the idea that desire is suppressed by fears of vulnerability, and I wonder if these concepts are related insofar as sympathy needs good social branding.
This makes emotions subservient to rationality, but I think a lot of the people who complain about the rationalist approach to emotions instead see rationality as a system to generate compromises between emotions. From the latter perspective, the rationalist approach only really works with infinitesimal emotions.
I’m in agreement with the spirit of your piece written here, but I think the claim that emotions make true/false claims is not true. I think it’s more reasonable to talk in terms of intentionality and to stick to the term ‘information’. That is, an emotion is ‘about’ something. I am not merely angry; my anger is directed at a particular thing. We also express information about our psychological states. We then construct propositions in relation to our emotions. When one says ‘emotions are telling us something’, I think this is best understood metaphorically.
Note the distinction between these three utterances
1. “Arghhhh”
2. “I am angry at Y.”
3. “The cause of my anger is X”
The first expresses an emotion, the emotion generally being clear in the context. The second is a description of our mental state and intentionality, that is to say what our anger is directed towards. The third is a claim about the cause of the anger.
Now, when you say that our emotions may make ‘false claims’ or have ‘false information’, I think you’re really talking about utterance 2. The ‘hangry’ case is in relation to utterance 3. The emotion expressed in utterance 1 is not truth-apt. It is a mere expression.
This may seem pedantic; perhaps it is pedantic. I suppose it depends on how strictly I’m supposed to take the idea that emotions can be ‘wrong’ or ‘right’ in the sense of being false/true.
It’s standard for parents to have these sorts of models about their kids’ emotions, e.g. “She’s cranky because she didn’t get her nap.”
Related: Feeling Rational
I also take this stance, but with a different framework. I acknowledge that emotions & thoughts are very different biological technologies in action. One is likely extremely older than the other in terms of evolutionary development. Given that, from my perspective, my inner monologue is just not very good at translating — so my self-awareness needs to translate the translation. To me, an emotion is never false; it’s always a valid reaction in some way or another. I just know it’s silly to agree with the first interpretation my mind makes of it.
Anger is a reasonable reaction to a certain level of hunger. It’s our generalized boundary alarm. Anger over hunger says, “the boundary of normal function is being crossed, and I’ll have to release stress hormones to keep going if we don’t consume food soon”. Anger is a great emotional choice/trigger for this, because it usually prevents you from deeply focusing & staying still, instead motivating you to move towards a resolution of the feeling.
So in my framework, emotions don’t provide any false information. But my thoughts might. Thankfully that’s what mindfulness/self-awareness is for: being more ‘whole’ and not just living from the tiny cone of awareness that is the inner monologue / surface layer of thought.
I’ve trained myself not to give too much weight to the thoughts that come bundled with certain emotions, usually because those thoughts are stupid or unhelpful, whereas I suspect the emotion itself might not be. A friend of mine (who’s a clinical psychologist) often reminds me that there’s a difference between intellectualising an emotion and actually sitting with it, with the goal of feeling it fully and seeing what it has to offer. I still find that hard to do. I get why people who intellectualise their emotions (myself included) might end up going down the trauma rabbit hole, trying to ‘figure out what happened’ instead of sitting with and feeling the emotions.
This post really resonates because the thoughts accompanying low-valence, high-arousal states like anger are often false narratives. And perhaps the real move is to not mess with that narrative, but to let the emotion shift on its own, as the internal state changes (hunger, tiredness, etc).
Another way to put this is “emotions as sensations vs. emotions as propositional attitudes”. (Under this framing, the thesis of the post would be “emotions are always sensations, but should not always be interpreted as propositional attitudes, because propositional attitudes should not be unstable under short-term shifts in physiological circumstances—which emotions are”.)
you can just get good at this with practice
I appreciate the general thrust of this piece, but I find this aspect concerning, because it fails to acknowledge that emotions (or their analogues) likely evolved long before language or the capability to assert and evaluate claims.
From introspection it seems possible that emotions can be triggered by non-linguistic situations (giant spider jumps on my child → anger), and also it is possible for emotions to not cause logical claims to form… (e.g. “why am I feeling this way?”)
That pre-linguistic/non-logical layer is super important, IMO. The rest of this piece is very useful for the higher linguistic + logical layer.
The kinds of claims that emotions make are often non-verbal and pre-linguistic, in my experience. If a giant spider jumps on your child, you might have some kind of visceral or embodied expectation of your child being harmed if you don’t do something. That’s still an implicit claim, even if it’s not in a linguistic form. You can use various introspective techniques to translate those kinds of implicit claims into language, but it’s also possible to work directly with the levels below language.
Though often translating those pre-verbal claims into language does make the process much easier. Not necessarily because the language would be required, but because doing so forces you to focus your attention on the lower layers of thought in a way that lets you extract the claims they’re making.
This is definitely not the main point or even an important point, but:
It is very unlikely that a giant spider will ever jump on your child. Any spider that is likely to jump on your child (or on anyone or anything) is overwhelmingly likely to be very small.
(Also, any spider that can jump on anything is almost always going to be completely harmless, but of course that generally doesn’t affect your visceral reactions.)
I think this awareness will be really helpful only when talking to family, friends… or to people who are already used to doing some self-analysis when hangry.
Even if “being hangry” becomes much more “normalized”, a random person will still take it as an attack if you tell him that he’s maybe hangry (the same way other cognitive biases are well-known, but telling someone that he’s being misled by confirmation bias will not be received well). People can even start building a defence mechanism to reject the “hangry remark” every time they are confronted with it.
It’s the kind of thing you can’t say several times to the same person. Even the nicest, most patient people you know will quickly get really, really annoyed if you’re telling them regularly that they’re hangry (even if you are right).
And we’ll need to be careful: this can easily be abused as a tactic to silence people during a discussion. Or we can start confusing “being hangry” with “being wrong”.
This seems to point to a strong suspicion of mine (to humbly avoid making bold claims too early) that emotions are fundamentally rooted in physiological sensations and impulses.
Hunger is an instinct and impulse to act towards satisfying an important daily basic need. Food.
A very similar connection, between lust and anger, might show up in the experience or observation of many people. There are few better ways to get someone to hate you than to get in the way of them and their object of sexual arousal. Which might be because, like hunger, lust is also an instinct and impulse to act towards satisfying another important biological imperative (from an evolutionary standpoint). Procreation.
In psychology the thing we’re talking about I believe is called emotional misattribution. It’s when you misinterpret the cause of your emotions.
A similar but still distinct mechanism known to psychology is emotional displacement, where you vent an emotion at a safer target than the one which caused the emotion. That’s what we mean when we say “you’re taking it out on me”. Not a single convenient word, but still very much established in everyday speech.
The difference is with displacement the cause is external and the displacement target is also external.
With hangriness or emotional misattribution, the cause is internal and the convenient target is external.
People seem to be well aware of displacement in operation, but maybe not as aware of the case where you’re taking things out on someone because of something happening in your internal physiology. But still aware to the degree that we have the term hangry, and the knowledge of how someone’s period can lead to vented anger, or how not getting any can lead to the ubiquitous phrase “sexual frustration”.
It may be that some misattributions are more common, like hunger turning to anger, sexual desire turning to general frustration, or normal hormonal cycles turning to angry mood. And therefore it may be that a convenient word or phrase for each one is fitting.
If we were to encapsulate the entire psychological concept of emotional misattribution, you would have to include not just misattributions between physiological states/impulses and emotions, but also emotions and emotions.
So in that case I would propose separating the two.
A term for when you experience an emotion which has physiological causes, leading to the misattribution. And another term for when you mistake one emotion for another, like anger for embarrassment.
In general I find the loose application of the phrase “you’re making it about something else” quite useful. Followed by an explanation like “you think you’re angry but you’re actually just embarrassed”, or “I’m tempted to get angry but I know I’m really just scared”.
The whole topic seems to beg for a clearer and more useful model of how emotions actually work in relation to both the body and the mind. Which I’ve been working on for years. But in a way that’s intuitive and doesn’t rely on reading books on psychology.
More crucially, what’s missing, to my mind, both in academia and popular knowledge, is a useful understanding of what emotions are, and what they’re for. Which I’ve also been working on.
My strong hunch is that this is true for almost any form of communication (internal and external) we receive: it conveys something we can extract value from if we are able to look past the surface (propositional content) of what we immediately infer.
And how difficult it is to remain open to the possibility that my first impression of a signal is “incorrect” (I got it wrong on my first attempt), given how frequently I have used my inference (first impression), and I am still alive (adaptive value of my past choice to not question my first impression)...
The best I can offer is to make it a regular but not constant practice to use, say, 15 to 30 minutes a day to go through some kind of “habitual thought journal,” asking myself if and when some of my automatic inferences might have been wrong, mostly just to play with that possibility, so that those kinds of mental avenues become more readily available in the moment when I need them. It’s important to raise the stakes during that practice, so the more I can make it resemble the real deal (for instance by role-playing situations with a conversational partner), the less “artificial” and more “transferable” this learning becomes.
All in all an excellent primer on the issue and useful extensions!
I think the way of making someone feel seen, without needing to endorse what their emotions are claiming, is to reflect / validate / normalize the emotions themselves, rather than their assumed causes.
“That sounds really frustrating, it makes sense that you’re upset” (as a small random example) I imagine would make someone feel seen, without needing to endorse, or even discuss, the idiocy their emotions may be claiming.
I’m curious what comes up for you in reading that? Please let me know if I’m missing something :)
“That sounds really frustrating, it makes sense that you’re upset” pretty heavily endorses what the upset-ness is claiming. The central examples of hangriness, for instance, are cases where it does not make sense that the person is upset, because the things happening around them do not normally sound all that frustrating (relative to the strength of their upset-ness).
thank you John, that’s helpful & I see what you mean. I could have chosen a better example.
How about “I see that you’re really frustrated right now.” To me that’s just reflecting their emotional state and having them feel seen / heard, without endorsing their claim.
Do you agree? or have another suggestion on how to achieve that goal?
Yeah, that one is much better, from a “not necessarily endorsing” perspective.
It seems people are disagreeing with this—I would love it if you can comment to explain why & help me learn what I may be missing. Thanks in advance for your consideration.
I love this article & the premise!!
Small note on this section: It seems to me that this is attributing causality to your freedom to get ice cream in the middle of the day. If there were something else causing you to not feel trapped—for example, you enjoy your work more than you enjoyed school—couldn’t it still be that your longing’s claim was false?
If you hated your job, and didn’t have better alternatives for income, your ability to get ice cream may not relieve your feeling of being trapped.
Please let me know if I’m missing something :)
Indeed, that was why I sneakily worded it as “so that’s an update toward my longing’s claim being true” rather than “so my longing’s claim was true”.
sneaky indeed, I need to work on my reading comprehension ;) thank you!
When you mention “I could be wrong” as being the major load-bearing part of the response to a hangry person—especially out of a sense of maintaining emotional stability—it demonstrates that epistemic charity and humility are not only intellectual virtues, but emotional and empathic ones as well. I find that this would be the trickiest part: the balance between making someone feel heard while also providing useful feedback about what might actually help them—trying to find the middle ground between appeasement and callous criticism.
Huh. Tried this on my social media cravings.
Couldn’t visualize them as an animal, but managed <a stream of energy between me and my laptop screen>. Managed to make the stream talk in my mind.
This behaved like a “talking lens” laid over my perception. As if the craving itself was live-reacting to objects on my screen while I clicked and scrolled.
Informative via making the involved needs concrete.
Good post! This is definitely the approach I use for these things, and it’s one of the most frequently-useful tools in my toolkit.
Who? Am I supposed to have heard of these people? I like what you’re saying about emotions, but tying it to these rushalist people is confusing and makes it more awkward to share this article with people who I think should read it.
If you want a post explaining the same concepts to a different audience, then go write a post explaining the same concepts to a different audience. I am well aware of the tradeoffs I chose here. I wrote the post for a specific purpose, and the tradeoffs chosen were correct for that purpose.
Come on, make your critiques in the straightforward way, and use normal words to express them. I think this being kind of socially focused is a valid critique, but you are coating it in sneering language that feels obnoxious.