I don’t think a philosophy of mind is necessary for this, no, although I can see why it might seem like it is if you’ve already assumed that philosophy is necessary to understand the world.
It’s enough simply to be able to model other minds in the world to know how to show them compassion, and even without any modeling, compassion can still be enacted, even if it isn’t recognized as compassionate behavior. This modeling need not rise to the level of philosophy to get the job done.
I do not think I communicated my point properly. Let me try again:
Showing compassion is not free. It has a cost. To show compassion for someone you might need to take action to help them, or refrain from taking some action that might harm them.
How much effort do you spend on showing compassion for a human being?
How much effort do you spend on showing compassion for an earthworm?
How much effort do you spend on showing compassion for a plant?
How much effort do you spend on showing compassion for an NPC in a video game?
I don’t know about you, but I am willing to expend a decent amount of effort on compassion for a human. Less for an earthworm, but still some, because I suspect the earthworm can experience joy and suffering. Even less for a plant, because I suspect it probably cannot experience anything. And I put close to zero effort into compassion for an NPC in a video game, because I am fairly convinced that the NPC cannot experience anything (if I show the NPC compassion, it is for my own sake, not theirs).
But I might be wrong. If a philosophical argument could convince me that any of these things experience more or less than I thought they did, I would adjust my priorities accordingly.
I think it’s a mistake in many cases to let philosophy override what you care about. That’s letting S2 do S1’s job.
I’m not saying no one should ever be convinced to care about something, only that a logical argument, even if it is part of the convincing, should not be the whole of it.