I’m also skeptical that those beliefs are better described as being about other people’s internal states than about their social behavior.
Hmm. Continuing with the schadenfreude example, let’s say Alice stole my kettle and I would feel good if she burned her fingers on it. (Serves her right!) My introspection says, if Alice is alone when she burns her fingers, I’m still happy—that still counts. If I never see her again after that, that still counts. Heck, if she becomes a hermit and never sees another human again, that still counts. And therefore, that thought of Alice burning her fingers is pleasing in a way that is tightly connected to how I believe Alice feels, and disconnected from how I believe Alice is behaving socially, I think.
You mention “I imagine Alice acting happy, smiling and uncaring”. But the following two things feel very different to me:
“I imagine that Alice is acting happy, smiling and uncaring, and this is straightforwardly related to how she really feels”, versus
“I imagine that Alice is acting happy, smiling and uncaring, but on the inside she’s miserable, and she’s hiding how she really feels”.
What do you think?
I’m saying that introspectively it doesn’t feel like that is implemented via empathy (the same part of my world model that predicts my own emotions), but rather via a different part of my model (one dedicated to modeling other people).
I don’t update much on that because I think almost all of the discourse and intuitions and literature surrounding the word “empathy” are not talking about the same thing that I want to talk about. Thus I tend to avoid the word “empathy” altogether where possible. I’ve been using other terms like “empathetic simulation” or “little glimpse of empathy”. I talk about that a bit in Section 13.5.2 here. More specifically, I’m guessing that it doesn’t “feel like empathy” when you imagine Alice burning her fingers on the kettle she stole from me, because that thought feels good, whereas empathizing with Alice would be unpleasant. Here, my model says “yes, the thought feels good, and if that’s not what you think of as ‘empathy’, then the thing you think of as ‘empathy’ is not what I’m talking about”.
When we think of emotion concepts / categories, the valence / arousal / etc. associated with them are central properties. E.g. righteous indignation has to have positive valence and high arousal, otherwise we would call it something else (and think of it as something else). So if you think a thought that involves lots of the same cortical neurons as you get in typical righteous indignation, but those neurons trigger negative valence and low arousal in the brainstem (because of the empathy-detector intervening, or whatever), it wouldn’t feel anything like righteous indignation introspectively. Or something like that.
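If it helps, here is a toy sketch of that last claim. Everything in it (the feature names, the numbers, the thresholds) is invented for illustration, not anything from an actual model of the brain; it just shows how a thought with the same cortical pattern can stop reading as righteous indignation when the valence/arousal signals no longer match that concept’s central properties.

```python
# Toy sketch: an emotion concept is recognized only when BOTH the cortical
# activation pattern AND the valence/arousal signals match that concept's
# central properties. All features, values, and thresholds are made up.

from dataclasses import dataclass

@dataclass
class EmotionConcept:
    name: str
    cortical_pattern: frozenset  # hypothetical features of the cortical state
    valence: float               # central valence, -1 (bad) .. +1 (good)
    arousal: float               # central arousal, 0 (calm) .. 1 (activated)

CONCEPTS = [
    EmotionConcept("righteous indignation",
                   frozenset({"norm_violation", "target_person", "self_as_judge"}),
                   valence=+0.6, arousal=0.9),
    EmotionConcept("pity",
                   frozenset({"norm_violation", "target_person", "other_suffering"}),
                   valence=-0.5, arousal=0.3),
]

def classify(features: frozenset, valence: float, arousal: float) -> str:
    """Pick the concept whose cortical pattern overlaps most, but only accept
    it if the brainstem signals are close to that concept's central values."""
    best = max(CONCEPTS, key=lambda c: len(c.cortical_pattern & features))
    if abs(best.valence - valence) < 0.5 and abs(best.arousal - arousal) < 0.4:
        return best.name
    return "unrecognized (same cortical pattern, wrong valence/arousal)"

indignation_pattern = frozenset({"norm_violation", "target_person", "self_as_judge"})

# Typical case: the pattern plus positive valence and high arousal.
print(classify(indignation_pattern, valence=+0.7, arousal=0.8))
# -> righteous indignation

# Same cortical pattern, but something (say, an empathy-detector) flips the
# brainstem signals to negative valence and low arousal: introspectively it
# no longer reads as righteous indignation at all.
print(classify(indignation_pattern, valence=-0.6, arousal=0.2))
# -> unrecognized (same cortical pattern, wrong valence/arousal)
```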