If you want a plausible theory of how natural selection could produce sincere altruism, look at it from a game-theoretic perspective. People who could credibly signal altruism and trustworthiness would reap huge evolutionary gains, because they could attract trading partners more easily. One of the most effective ways to signal that you possess a trait is to actually possess it, and so one of the most effective ways to signal that you are altruistic and trustworthy is to actually be altruistic and trustworthy. It’s therefore plausible that humans evolved to be genuinely nice, trustworthy, and altruistic because the evolutionary gains from getting trade partners to trust them outweighed the evolutionary losses from sacrificing for others.
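The payoff logic can be sketched as a toy model. (All of the numbers here, the trade gain, sacrifice cost, and detection probability, are illustrative assumptions of mine, not empirical values; the point is only that genuine altruism can beat faked altruism whenever fakery gets caught often enough.)

```python
# Toy model: agents either genuinely possess altruism or merely fake it.
# Genuine altruists pay a real sacrifice cost but are reliably trusted;
# fakers keep the cost but are sometimes detected and lose the trade.

def expected_payoff(genuine, trade_gain=10.0, sacrifice_cost=3.0,
                    detection_prob=0.5):
    """Expected fitness payoff per interaction (illustrative units)."""
    if genuine:
        # always trusted, but pays the cost of actual sacrifice
        return trade_gain - sacrifice_cost
    # a faker avoids the sacrifice but is detected (and shunned)
    # with probability detection_prob
    return (1 - detection_prob) * trade_gain

print(expected_payoff(True))   # 7.0: genuine altruist
print(expected_payoff(False))  # 5.0: faker
```

With these numbers the genuine altruist comes out ahead; lower the detection probability enough and faking wins instead, which is why the argument turns on how hard sincerity is to counterfeit.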
Altruism—at least in biology—normally means taking an inclusive fitness hit for the sake of others—e.g. see the definition of Trivers (1971), which reads:
Altruistic behavior can be defined as behavior that benefits another organism, not closely related, while being apparently detrimental to the organism performing the behavior, benefit and detriment being defined in terms of contribution to inclusive fitness.
Proposing that altruism benefits the donor just means that you aren’t talking about genuine altruism at all, but “fake” altruism—i.e. genetic selfishness going under a fancy name. Such “fake” altruism is easy to explain. The puzzle in biology concerns genuine altruism.
The way you have written this makes it sound like you think utilitarians are cynically pretending to believe in utilitarianism to look good to others, but don’t really believe it in their heart of hearts. I don’t think this is true in most cases; utilitarians are usually sincere, and most failures to live up to their beliefs can be explained by akrasia.
So: I am most interested in explaining behaviour. In this case, I think virtue signalling is pretty clearly the best fit. You are talking about conscious motives. These are challenging to investigate experimentally. You can ask people—but self-reporting is notoriously unreliable. Speculations about conscious motives are less interesting to me.
Altruism—at least in biology—normally means taking an inclusive fitness hit for the sake of others—e.g. see the definition of Trivers (1971).
I thought it fairly obvious I was not using the biological definition of altruism. I was using the ethical definition of altruism—taking a self-interest hit for the sake of others’ self-interest. It’s quite possible for something to increase your inclusive fitness while harming your self-interest; an unplanned pregnancy, for instance.
Proposing that altruism benefits the donor just means that you aren’t talking about genuine altruism at all, but “fake” altruism—i.e. genetic selfishness going under a fancy name.
I wasn’t proposing that altruism benefited the donor. I was proposing that it benefited the donor’s genes. That doesn’t make it “fake altruism,” however, because self-interest and genetic interest are not the same thing. Self-interest refers to the things a person cares about and wants to accomplish—e.g. happiness, pleasure, achievement, love, fun; it doesn’t have anything to do with genes.
Essentially, what you have argued is:
Genuinely caring about other people might cause you to behave in ways that make your genes replicate more frequently.
Therefore, you don’t really care about other people, you care about your genes.
If I understand your argument correctly, it seems like you are committing some kind of reverse anthropomorphism. Instead of ascribing human goals and feelings to nonsentient objects, you are ascribing the metaphorical evolutionary “goals” of nonsentient objects (genes) to the human mind. That isn’t right. Humans don’t consciously or unconsciously act to directly increase their IGF (inclusive genetic fitness); they simply engage in behaviors for their own sake that happened to increase IGF in the ancestral environment.
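The distinction can be made concrete with a toy sketch (the foods and numbers are invented for illustration): an agent whose decision rule optimizes a proximate goal, here sweetness, happens to pick the fitness-enhancing option in one environment but not in another. The agent never consults fitness at all.

```python
# Toy illustration: the agent's actual decision rule maximizes sweetness,
# a proximate goal. In the ancestral environment sweetness tracked calories,
# so the same rule also raised fitness; in a modern environment with a
# zero-calorie sweetener it no longer does.

foods_ancestral = {"berry": {"sweetness": 5, "calories": 50},
                   "leaf":  {"sweetness": 1, "calories": 5}}
foods_modern    = {"diet_soda": {"sweetness": 9, "calories": 0},
                   "leaf":      {"sweetness": 1, "calories": 5}}

def choose(foods):
    # pick the sweetest option; calories (fitness) never enter the rule
    return max(foods, key=lambda f: foods[f]["sweetness"])

print(choose(foods_ancestral))  # berry (also the high-calorie choice)
print(choose(foods_modern))     # diet_soda (sweetest, but no fitness value)
```

The same applies to caring about others: the proximate goal (others’ welfare) is what the mind actually pursues, whatever its evolutionary history.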
Altruism—at least in biology—normally means taking an inclusive fitness hit for the sake of others—e.g. see the definition of Trivers (1971).
I thought it fairly obvious I was not using the biological definition of altruism. I was using the ethical definition of altruism—taking a self-interest hit for the sake of others’ self-interest. It’s quite possible for something to increase your inclusive fitness while harming your self-interest; an unplanned pregnancy, for instance.
So: I am talking about science, while you are talking about moral philosophy. Now that we have got that out the way, there should be no misunderstanding—though in the rest of your post you seem keen to manufacture one.
So: I am talking about science, while you are talking about moral philosophy.
I was talking about both. My basic point was that humans evolved to care about morality and moral philosophy in the first place because doing so made them very trustworthy, which enhanced their IGF by making it easier to obtain allies.
My original reply was a request for you to clarify whether you meant that utilitarians are cynically pretending to care about utilitarianism in order to signal niceness, or whether you meant that humans evolved to care about niceness directly and care about utilitarianism because it is exceptionally nice (a “niceness superstimulus” in your words). I wasn’t sure which you meant. It’s important to make this clear when discussing signalling because otherwise you risk accusing people of being cynical manipulators when you don’t really mean to.