Humans are not really free moral agents when deciding on a balance between collectivism & individuality. Given their particular mechanism for reproducing, and given a particular society with a particular distribution of their genetic relatedness to the people they typically encounter, you can compute how much an average person in that society can be expected to value their own life, vs. the lives of others. An individual human can reason about it, and choose a different valuation; but they’re then trying to act in a way inconsistent with their psychology, and a way that will lower their own genetic fitness.
I’m dubious of any moral claim, such as valuing vegetarianism or celibacy, that requires people to act contrary to their natures in a way that decreases their fitness. (Please do not respond with pointers to literature on how vegetarianism is healthy for you. For the sake of argument, presume you are a hunter-gatherer in a cold climate.) You can sit down and reason out that eating other creatures who have feelings and thoughts is bad, but when you then conclude that lions are evil for eating zebras, you’ve gone wrong somehow. I don’t buy the argument that the rules are different for humans because they can reflect on the rules.
(Also, please don’t respond by saying that genetic fitness shouldn’t matter to thoughtful modern people. Genetic fitness matters, because a moral system must be evolutionarily stable.)
A designer of a posthuman society is free, in a way that a human is not, to choose that balance. The problem presented by this post is that the parameter they would adjust to choose that balance is also one of the key parameters to adjust when choosing an exploration/exploitation tradeoff. The former is an ethical issue; the latter is an optimization issue. How much weight do you give to ethics vs. goal attainment? Does it make any sense for, let’s say, a singleton AI, to hold an ethical viewpoint that causes it to act less optimally?
I’m dubious of any moral claim, such as valuing vegetarianism or celibacy, that requires people to act contrary to their natures in a way that decreases their fitness.

Genetic fitness matters, because a moral system must be evolutionarily stable.

This implies that tragedies of the commons are acceptable under such a moral system.
(Under the “tragedy of the commons” I mean a situation where “selfish” members of a population take advantage of a communally-maintained resource while not contributing to it, thus gaining free energy to outreproduce the “good citizens”, thereby increasing the frequency of the genes for cheating and reducing the frequency of the genes for maintenance of the common resource. Any population consisting only of “good citizens” is evolutionarily unstable because it is vulnerable to an invasion of “selfish” mutations.)
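The invasion dynamic described in this parenthetical can be sketched as a toy discrete replicator model. Everything below (the payoff values, the positivity shift, the starting fraction) is an illustrative assumption, not something from the thread:

```python
# Toy replicator-dynamics sketch of the commons scenario above.
# All parameters (benefit, cost, shift, starting fraction) are
# illustrative assumptions, not drawn from the discussion.

def step(coop_frac, benefit=2.0, cost=1.0):
    """Advance one generation of discrete replicator dynamics.

    Everyone draws on a commons whose value scales with the fraction of
    cooperators maintaining it; only cooperators pay the maintenance
    cost, so defectors always have strictly higher fitness.
    """
    commons = benefit * coop_frac       # value of the shared resource
    f_coop = commons - cost             # cooperators pay to maintain it
    f_defect = commons                  # defectors free-ride
    mean = coop_frac * f_coop + (1.0 - coop_frac) * f_defect
    # Replicator update: each strategy grows in proportion to fitness.
    # Shift payoffs so fitness stays positive (a common normalization).
    shift = cost + 1.0
    return coop_frac * (f_coop + shift) / (mean + shift)

frac = 0.99  # start with almost all "good citizens"
for _ in range(200):
    frac = step(frac)

print(round(frac, 3))  # → 0.0: the cooperators have been invaded away
```

Because defectors out-earn cooperators at every mixed frequency, the all-cooperator population is not evolutionarily stable: any defecting “mutation” grows until the commons, and the cooperators, are gone.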
You have it exactly backwards. I made the statement because a moral system that encourages tragedies of the commons is not evolutionarily stable and hence not acceptable.
If you still think it does, please provide an explanation this time.
Thanks for the clarification. I think I have a somewhat clearer idea what you’re getting at now, but the distinction you are attempting to draw between ethics and goal attainment still seems wrongheaded to me. As the designer of a posthuman society, your ethics determine your goals; I don’t see how the two are supposed to come into conflict in the way you suggest. (Maybe the demands of evolution will place constraints on the psychologies of feasible post-humans, but that’s a rather different point.)
Nitpick:

An individual human can reason about it, and choose a different valuation; but they’re then trying to act in a way inconsistent with their psychology

As it’s currently stated, I don’t think this claim makes any sense. If they can do it, then it’s consistent with their psychology.
Why must a moral system be evolutionarily stable?
Yes, and furthermore, what does Phil mean by “evolutionarily stable”? I’m not asking for the definition of an evolutionarily stable state but rather an explanation for what Phil means by it in this context.
Because it won’t last if it isn’t. If you propose a moral system knowing that the inevitable consequence of people adopting it is that they will be exploited by defectors and the system will collapse, leaving an immoral and low-utility society of defectors, that’s not moral.
This only follows if you’re a consequentialist.
I think Phil may be saying that persistent moral systems must be evolutionarily stable, though that raises the question of why the moral system needs to be persistent. One might argue that a species that can’t support its existence in a moral way should accept its own extinction (that is, the individual members of the species should accept the extinction of the whole species), along with the moral system that led to that conclusion.
This does help clarify things. Unfortunately, Conchis is right, you’re committing the naturalistic fallacy.
I think we can safely put the naturalistic fallacy in the “out-of-date philosophical claptrap” dustbin.
Tim, what do you mean by this?
That G.E. Moore’s account of goodness is over 100 years old, and is too confused to be worth bothering with.
I wasn’t attempting a broad defense of G.E. Moore’s account of goodness. I was just trying to point out what I considered mistaken moral reasoning without wasting too many words (I’m a slow writer).
I interpret your comments to have the following connotations, “It isn’t worth referencing the naturalistic fallacy like you did.” Well, I’m sorry, but I believe that referencing the naturalistic fallacy (though it would have been more precise to reference Appeals to Nature) communicated something important to a lot of people. I don’t know a more efficient way to do that.
For the record, I’m not voting your comments down. But my guess is that they wouldn’t be voted down if you explained (or even linked to an explanation of) your criticism.
Those connotations roughly capture my intention. Claiming that someone is invoking a fallacy is a kind of put-down. However, if the claimed fallacy is just someone’s opinion (about what they think the word “good” ought to refer to), it doesn’t work too well.
I am unimpressed by Moore’s claims. Labelling your intellectual opponents’ thinking as fallacious when it is not is an underhand debating tactic that gets no respect from me. Moore wasn’t even on the better side of the argument. He opposed naturalism and reductionism. It should be no mystery why I think his views sucked—it’s because they did.
Tim, I wish our exchange could be a bit more amiable, but you caused me to read up on some stuff that may be changing the way I think. For this I thank you.
I’ve already acknowledged that “Appeal to Nature” is a more precise concept and that is what I might be inclined to reference in similar situations in the future. I’m even willing to question that practice. If you have time to provide some preferred concepts/vocabulary, that would be great.
Do you agree that improving one’s genetic fitness should be a terminal value for people? Do you agree that Phil seemed to imply that?
Phil claimed that genetic fitness mattered for ethics—which it probably does. For example, the Shakers believed that everyone should be celibate—and now there aren’t any of them around any more.
There would still be Shakers around if they had been able to keep up the practice of adopting children indefinitely. According to Wikipedia, that only stopped working when adoption became the province of the state. Wikipedia also says that there are still four Shakers today and people may join them if they like.
People can choose their own values. Inclusive genetic fitness seems like a reasonable-enough maximand to me—because it is mine—see:
http://alife.co.uk/essays/nietzscheanism/
(Emphases mine). Please settle on a set of applicable subjects. I’m with you that moral claims apply to “people”. I don’t think many here will claim that humans exhaust that space in theory, or that lions inhabit it currently.