Thanks for the comment! I didn’t think too hard about terminology and am open to brainstorming.
I’m concerned that the word “modeling” misses one of the important points. “Model” suggests “predictive model”; I think it’s possible (at least in principle, and probably in practice) to “model” a person in a way that is wholly disconnected from your suite of visceral reactions, just like you can “model” how a car engine works.
Instead, I would start with what you said, “when we observe facts relating to someone else that, if related to us, would make us feel a certain way”, but then add “…while actually activating those same ‘feelings’ in our own head”. Well, at least that would be closer. And I used the word “empathy” to convey that second part, I think.
I guess what you call “involuntary other-modeling” is what I call “a little glimpse of empathy”, and what you call “relatee-wise generalization” is what I’d call “the main (or only?) reason why the ‘little glimpse of empathy’ occurs”. But sorry if I’m misunderstanding.
> I guess what you call “involuntary other-modeling” is what I call “a little glimpse of empathy”, and what you call “relatee-wise generalization” is what I’d call “the main (or only?) reason why the ‘little glimpse of empathy’ occurs”. But sorry if I’m misunderstanding.
Ok excellent, this is a succinct version of what I was getting from your original post, and is what my comment was trying to confirm. Thank you.
> “relatee-wise generalization” is what I’d call “the main (or only?) reason why the ‘little glimpse of empathy’ occurs”
Right, and to me this seems like an important distinct claim. I think I understood from your original post that these were somewhat separate claims, but I guess my response is to advocate making that distinction as clear as possible, perhaps by coining some extra term(s), because I think different evidence is required to support them, and different conclusions follow from them.
(I suppose I should point out that the second claim, depending on the degree of ‘main (or only?)’, seems a lot bolder, i.e. I require more convincing. Like, there might be substantial hardcoded circuitry which puts this stuff in, rather than it falling out of relatee-wise generalisation. But then again I can viscerally feel empathy for a hypothetical, or for obviously-non-kin animals, or whatnot, so this could be right.)
> Like, there might be substantial hardcoded circuitry which puts this stuff in, rather than it falling out of relatee-wise generalisation.

Thanks. I think this is tied up with learning-from-scratch. “Relatee-wise generalisation” is compatible with learning-from-scratch, and I can’t currently see any other option that’s compatible with learning-from-scratch. Can you? I’m not sure what you mean by “hardcoded circuitry”.
Then someone might say: “Yeah but if we throw out learning-from-scratch, then look at all these other possible ways that social instincts might work!” But I’m currently strongly disinclined to throw out learning-from-scratch, because I have a lot of other reasons for believing it.
So the premise of this post is something like “Is there any plausible explanation for social instincts that’s compatible with Posts #2–#7, and especially with the learning-from-scratch discussion in Post #2?” (That’s the “symbol grounding” thing of Section 13.2.2, see also the post title.) If yes, then I’d be willing to bet that that explanation for social instincts is the correct one, and I would want to prioritize fleshing it out and testing it. If no, then oops, guess I better throw out Posts #2–#7!!
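To make the “relatee-wise generalisation is compatible with learning-from-scratch” claim concrete, here is a minimal toy sketch. This is my own illustration, not anything from the post, and every name in it is hypothetical; the assumption it encodes is just that valences are learned purely from first-person experience and attached to features, so they fire involuntarily whenever the same features are perceived, even when they apply to someone else.

```python
# Toy sketch of "relatee-wise generalisation" (hypothetical illustration):
# the agent starts with no social valences at all ("learning from scratch")
# and acquires per-feature feelings only from its own experience.
learned_valence = {}  # feature -> learned feeling; starts empty

def learn_from_own_experience(features, feeling):
    """Spread credit for a first-person feeling across the active features."""
    for f in features:
        old = learned_valence.get(f, 0.0)
        # simple running average toward the experienced feeling
        learned_valence[f] = old + 0.5 * (feeling - old)

def involuntary_reaction(features):
    """Valence evoked by a perceived situation, whoever it applies to."""
    known = [learned_valence[f] for f in features if f in learned_valence]
    return sum(known) / len(known) if known else 0.0

# First-person learning: stubbing one's own toe feels bad.
learn_from_own_experience({"toe", "impact", "pain-cry"}, feeling=-1.0)

# Later the agent merely *observes* someone else stub their toe. The same
# features are active, so the same learned valence fires involuntarily --
# a "little glimpse of empathy" with no hardcoded social circuitry.
print(involuntary_reaction({"toe", "impact", "pain-cry"}))  # → -0.5
```

Under this toy assumption, the “glimpse of empathy” is nothing but generalisation over shared features; any “hardcoded circuitry” alternative would instead have to build nonzero entries into `learned_valence` before any experience, which is exactly what learning-from-scratch rules out.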