Think of it as vaguely like I-am-juggling versus you-are-juggling.
Here, I can see how they would overlap to a reasonable degree, but I don’t think this carries over easily to emotions. Emotions at least feel like such a weird, distinct thing that any statement along the lines of “I’m happy” fails to do them justice. Therefore I can’t see it carrying over to “She’s happy”; their intersection wouldn’t be robust enough to avoid falsely triggering on actually unrelated things. That is, “She’s happy” ≈ “I’m happy” ≉ experiencing happiness.
Facial cues (as one example; it makes sense that there would be other signals too, like a higher-pitched voice when enjoying oneself, etc.) eliminate this problem because, instead of something introspective being the link, a more objective mental state, like “He’s sad”, will be the learned link.
This might sound like I’m being unnecessarily picky, but imo these associations need to be very exact, or else humans would be reward-hacking all day: it’s reasonable to assume that the activations from thinking “She’s happy” are very similar to those from internally trying to convince oneself that “She’s happy”, even while ‘knowing’ the truth. But if both resulted in big feelings of internal happiness, we would have a lot more psychopaths.
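To make the worry concrete, here’s a minimal toy sketch (the vectors, the threshold, and the “reward circuit” are all made up for illustration, not a claim about actual brain circuitry): if reward is keyed to an internal activation pattern, and honest inference and self-persuasion produce nearly the same pattern, both get rewarded.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activation pattern for genuinely concluding “She’s happy”.
honest = rng.normal(size=64)

# Trying to convince yourself “She’s happy”: assumed here to reuse almost
# the same representation, differing only by a little noise.
self_convincing = honest + 0.05 * rng.normal(size=64)

# A completely unrelated thought, for comparison.
unrelated = rng.normal(size=64)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy “reward circuit”: fires whenever activations resemble the template.
def reward(activation, template=honest, threshold=0.9):
    return cosine(activation, template) > threshold

print(reward(honest))           # True
print(reward(self_convincing))  # True  <- the reward-hacking worry
print(reward(unrelated))        # False (with overwhelming probability)
```

The point of the toy: if the trigger is just a similarity threshold on internal activations, nothing distinguishes honest inference from self-persuasion, which is why I’d want the association anchored to something more external, like the facial cues above.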
Regarding micro-expressions specifically, it’s definitely not a hill I want to die on; it kind of just popped into my mind as I was writing about facial cues, and by micro I really mean ‘micro micro’, e.g. smiles that aren’t perfectly symmetrical for a quarter of a second, something I at least can’t really pick up on. What is their evolutionary advantage if they don’t at least offer some kind of subconscious effect on conspecifics? But yeah, if you can’t consciously pick up on it, linking the two is pointless or even bad.
I read the linked post only roughly, and since I’ve read neither so far, I probably can’t relate too well to it. It seems reasonable (or honestly, obvious) though that it’s a mix rather than either of those extreme statements.
these associations need to be very exact, or else humans would be reward-hacking all day: it’s reasonable to assume that the activations from thinking “She’s happy” are very similar to those from internally trying to convince oneself that “She’s happy”, even while ‘knowing’ the truth. But if both resulted in big feelings of internal happiness, we would have a lot more psychopaths.
I don’t think things work that way. There are a lot of constraints on your thoughts. Copying from here:
1. Thought Generator generates a thought: The Thought Generator settles on a “thought”, out of the high-dimensional space of every thought you can possibly think at that moment. Note that this space of possibilities, while vast, is constrained by current sensory input, past sensory input, and everything else in your learned world-model. For example, if you’re sitting at a desk in Boston, it’s generally not possible for you to think that you’re scuba-diving off the coast of Madagascar. Likewise, it’s generally not possible for you to imagine a static spinning spherical octagon. But you can make a plan, or whistle a tune, or recall a memory, or reflect on the meaning of life, etc.
If I want to think that Sally is happy, but I know she’s not happy, I basically can’t, at least not directly. Indirectly, yeah sure, motivated reasoning obviously exists (I talk about how it works here), and people certainly do try to convince themselves that their friends are happy when they’re not, and sometimes (but not always) they are even successful.
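To illustrate what I mean by “constrained”, here’s a deliberately silly toy sketch; the thoughts, beliefs, and blocking rule are all invented for illustration, not how the brain actually does it:

```python
# Toy model of a thought generator constrained by the world-model.
world_model = {
    "Sally is happy": False,   # confident belief: she's not happy
    "it is raining": True,
}

candidates = ["Sally is happy", "it is raining", "I should call Sally"]

def can_settle_on(thought):
    """A thought contradicting a confident belief can't be directly thought."""
    return world_model.get(thought, True) is not False

for t in candidates:
    print(t, "->", "thinkable" if can_settle_on(t) else "blocked by world-model")

# Motivated reasoning works only indirectly: first the world-model itself
# gets nudged (rightly or wrongly), and only then does the thought open up.
world_model["Sally is happy"] = True
print(can_settle_on("Sally is happy"))  # True
```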
I don’t think there’s (the right kind of) overlap between the thought “I wish to believe that Sally is happy” and the thought “Sally is happy”, but I can’t explain why I believe that, because it gets into gory details of brain algorithms that I don’t want to talk about publicly, sorry.
Emotions…feel like such a weird, distinct thing that any statement along the lines of “I’m happy” fails to do them justice. Therefore I can’t see it carrying over to “She’s happy”; their intersection wouldn’t be robust enough to avoid falsely triggering on actually unrelated things. That is, “She’s happy” ≈ “I’m happy” ≉ experiencing happiness
I agree that emotional feelings are hard to articulate. But I don’t see how that’s relevant. Visual things are also hard to articulate, but we can learn a robust two-way association between [certain patterns in shapes and textures and motions] and [a certain specific kind of battery compartment that I’ve never tried to describe in English words]. By the same token, we can learn a robust two-way association between [certain interoceptive feelings] and [certain outward signs and contexts associated with those feelings]. And this association can get learned in one direction [interoceptive model → outward sign] from first-person experience, and later queried in the opposite direction [outward sign → interoceptive model] in a third-person context.
(Or sorry if I’m misunderstanding your point.)
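To make the two-way association concrete, here’s a minimal sketch using a classic bidirectional associative memory (Hebbian outer products). The “feeling” and “outward sign” vectors are random stand-ins for learned representations, so treat this as an illustration of the learn-one-direction / query-the-other idea, not as a model of the actual circuitry:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_pattern(n):
    """Random ±1 pattern, standing in for a learned representation."""
    return rng.choice([-1, 1], size=n)

# Hypothetical pairs: (interoceptive feeling, outward sign), e.g.
# (felt-happiness, smiling-face features). Purely illustrative vectors.
pairs = [(rand_pattern(100), rand_pattern(100)) for _ in range(3)]

# Hebbian outer-product learning, "trained" in one direction only.
W = sum(np.outer(y, x) for x, y in pairs)

feeling, sign = pairs[0]

# Forward query: interoceptive feeling -> outward sign.
recalled_sign = np.sign(W @ feeling)
print(np.array_equal(recalled_sign, sign))        # True

# Reverse query: outward sign -> interoceptive feeling, using the same
# weights transposed; no separate reverse training pass is needed.
recalled_feeling = np.sign(W.T @ sign)
print(np.array_equal(recalled_feeling, feeling))  # True
```

The design point is just that one set of weights, trained from first-person pairings, supports the reverse (third-person) query for free.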
What is their evolutionary advantage if they don’t at least offer some kind of subconscious effect on conspecifics?
Again, my answer is “none”. We do lots of things that don’t have any evolutionary advantage. What’s the evolutionary advantage of getting cancer? What’s the evolutionary advantage of slipping and falling? Nothing. They’re incidental side-effects of things that evolved for other reasons.
but I can’t explain why I believe that, because it gets into gory details of brain algorithms that I don’t want to talk about publicly, sorry.
Somewhat random, but I think I want to learn more about this field in general. From what I can tell, you didn’t learn about it in a normal academic setting (like doing a neuroscience B.Sc.) either; any tips for good resources?
Thanks again for engaging :)