What if sympathy depends on anthropomorphizing?

steven0461 (comment under “Preference For (Many) Future Worlds”):

In what sense would I want to translate these preferences? Why wouldn’t I just discard the preferences, and use the mind that came up with them to generate entirely new preferences in the light of its new, improved world-model? If I’m asking myself, as if for the first time, the question, “if there are going to be a lot of me-like things, how many me-like things with how good lives would be how valuable?”, then the answer my brain gives is that it wants to use empathy and population ethics-type reasoning to answer that question, and that it feels no need to ever refer to “unique next experience” thinking. Is it making a mistake?

Yvain (Behaviorism: Beware Anthropomorphizing Humans):

Although the witticism that behaviorism scrupulously avoids anthropomorphizing humans was intended as a jab at the theory, I think it touches on something pretty important. Just as normal anthropomorphism ("it only snows in winter because the snow prefers cold weather") acts as a curiosity-stopper and discourages technical explanation of the behavior, so using mental language to explain the human mind equally halts the discussion without further investigation.

Eliezer (Sympathetic Minds):

You may recall from my previous writing on “empathic inference” the idea that brains are so complex that the only way to simulate them is by forcing a similar brain to behave similarly. A brain is so complex that if a human tried to understand brains the way that we understand e.g. gravity or a car—observing the whole, observing the parts, building up a theory from scratch—then we would be unable to invent good hypotheses in our mere mortal lifetimes. The only possible way you can hit on an “Aha!” that describes a system as incredibly complex as an Other Mind, is if you happen to run across something amazingly similar to the Other Mind—namely your own brain—which you can actually force to behave similarly and use as a hypothesis, yielding predictions.

So that is what I would call “empathy”.

And then “sympathy” is something else on top of this—to smile when you see someone else smile, to hurt when you see someone else hurt. It goes beyond the realm of prediction into the realm of reinforcement.

So, what if the more we understand something, the less we tend to anthropomorphize it, and the less we empathize/sympathize with it? See this post for some possible examples of this. Or consider Yvain's blue-minimizing robot. At first we might empathize or even sympathize with its apparent goal of minimizing blue, at least until we understand that it's just a dumb program. We still sympathize with the predicament of the human-level side module inside that robot, but maybe only until we can understand it as something other than a "human-level intelligence"? Should we keep carrying forward behaviorism's program of de-anthropomorphizing humans, knowing that it might (or probably will) reduce our level of empathy/sympathy towards others?
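
To make concrete what "just a dumb program" means here, below is a minimal sketch (mine, not Yvain's) of the kind of control loop his post describes: scan the camera, fire the laser at anything blue. The Camera, Laser, Pixel, and looks_blue names are hypothetical placeholders; the point is only that no goal of "minimizing blue" is represented anywhere in the program.

```python
# A minimal sketch of a "blue-minimizing" robot's control loop.
# Nothing in it represents a goal like "minimize blue"; the apparent
# goal-directedness is entirely in the eye of the observer.
# Camera, Laser, Pixel, and looks_blue are hypothetical stand-ins,
# not anything from Yvain's post.

from dataclasses import dataclass
import random


@dataclass
class Pixel:
    x: int
    y: int
    r: int
    g: int
    b: int


class Camera:
    def frame(self) -> list[Pixel]:
        # Stand-in for reading a camera frame: a row of random pixels.
        return [
            Pixel(x, 0, random.randint(0, 255), random.randint(0, 255),
                  random.randint(0, 255))
            for x in range(10)
        ]


class Laser:
    def fire_at(self, x: int, y: int) -> None:
        print(f"zap ({x}, {y})")


def looks_blue(p: Pixel) -> bool:
    # Crude threshold: the blue channel is high and dominates.
    return p.b > 150 and p.b > p.r and p.b > p.g


def run(camera: Camera, laser: Laser, steps: int = 3) -> None:
    # The robot's entire "mind": scan, threshold, fire. No goals,
    # no world-model, no preferences to empathize with.
    for _ in range(steps):
        for p in camera.frame():
            if looks_blue(p):
                laser.fire_at(p.x, p.y)


if __name__ == "__main__":
    run(Camera(), Laser())
```

Once the whole program fits on one screen, the pull to attribute a preference for "less blue" to it largely evaporates, which is the shift in empathy the question above is pointing at.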