Could you spell out in even more tedious detail what you mean by the following?
The human, trying to balance collective ethics vs. individual ethics, is really just trying to discover a balance point that is already determined by their sexual diploidy.
The obvious interpretation of this sentence seems to commit the naturalistic fallacy; is there another meaning that I’m missing?
As near as I can tell from your link, this Naturalistic Fallacy means disagreeing with G. E. Moore’s position that “good” cannot be defined in natural terms. It seems to be a powerful debating trick to convince people that disagreeing with you is a fallacy.
Further, Phil’s statement does not even define “good”, it describes how people define “good”. It is not a fallacy to describe a behavior that commits a fallacy.
I wonder, would you have realized these issues yourself, if you had tried to explain how the fallacy applies to the statement? Or would it have helped me to realize that you meant something else, and there is indeed a problem here?
Apologies, I should have made it clearer that I was referring to the naturalistic fallacy in its casual sense, which denies the validity of drawing moral conclusions directly from natural facts. (I assumed that this usage was common enough that I didn’t need to spell it out; that assumption was clearly false.)
Pace your second paragraph, it seemed to me that Phil was trying to do this, and others seem to have interpreted the post in this way too. But I admit that part of the vagueness in my phrasing was due to the fact that I was (and still am) having trouble figuring out exactly what Phil is trying to say.
Ah, it might have helped to instead call it the more specific Appeal to Nature, one of several usages discussed in the article you referenced. Even so, I don’t think Phil was drawing moral conclusions directly from natural facts. He was saying, given these natural facts, this behavior works, and it is the fact that it works that causes people to call it moral. The position seems to be Consequentialism as an explanation for other people’s behavior.
As I interpreted the article, Phil is saying that, however much we frame our moral thinking and discussion in terms of abstract ethics, ultimately our conclusions are determined by natural facts about us and our society; that is, we somehow decide the moral thing is what works, even if that is not our explicit reasoning. This sets up the question: if we are in a position to determine the natural facts that force our ethics, is there in fact a higher ethical principle such that we should fix our nature to fit that principle? And does this conflict with other goals?
You are right, it would have been better to cite Appeal to Nature. But I insist that Phil did commit this fallacy. Quoted from his longer comment in this thread:
I’m dubious of any moral claim, such as valuing vegetarianism or celibacy, that requires people to act contrary to their natures in a way that decreases their fitness.
What if he had said:
Moral claims, such as valuing vegetarianism or celibacy, that require people to change their usual behavior in a way that decreases their fitness should not be accepted without a compelling reason that addresses the loss to the person so constrained.
Would that have committed the fallacy? Would it still support his point? Should we, because Phil committed this fallacy in answering a question about his article, discard the whole article?
Also note, Phil does not seem to have any problem with a person denying their nature to increase their fitness.
I’m not sure I understand your altered Phil quote either: “What if he had said...” If I do understand it, we still disagree. It’d be helpful if you answered your own questions. Here are a few for you.
Do you believe it is morally good to take actions which increase our fitness?
Do you think it is sometimes/always bad to take actions which decrease our fitness?
Do you take fitness to have terminal or intrinsic moral value?
My answers: “Not necessarily,” “Sometimes,” and “No.”
I’ll try to answer your questions tomorrow, though I may have to ask for clarification.
I intrinsically value my experience of life, and to the extent that it causes others to have life experiences that they similarly value, I find that my fitness has instrumental value. (Though I tend to value memetic fitness over genetic fitness.)
People instinctively have values that promote genetic fitness (though most don’t value genetic fitness itself). One should consider if a loss of genetic fitness reflects a loss to one of these values.
The modified quote does not Appeal to Nature (or if it does, Appealing to Nature is not always wrong). That a behavioral restriction reduces fitness is a reasonable red flag that it may be reducing the person’s actual utility, and I don’t think it is controversial that you should not do so arbitrarily. The compelling reason may be that the loss of fitness has nothing to do with anything the person values, but that the restriction promotes something else that really is valued. But it is not wrong to want an explicit reason for changing one’s behavior. Every improvement is a change, but not every change is an improvement.
I think that both the modified and original quote are really a side point to the issue Phil was discussing. What might be more pertinent is that a moral system, whether it is good or not, that causes its followers to decrease their fitness will be “punished” in that it will become less common than moral systems that promote fitness. This nicely supports the idea that, if we identified a good moral system, we could promote it by fixing certain parameters so that being moral does increase fitness.
And no, we should not dismiss an article because its author made a mistake in answering a question about it. If no one is able to address an objection to a critical part of the article, then we should consider dismissing it.
I find the response of the LW community schizophrenic. If someone writes a post advocating moral realism, they get jumped on for being religious. Yet when I wrote this post asking whether there is some evolution-free moral code that should influence the choices of organism-designers, I got jumped on for even presenting as one possibility that moral realism might be false.
Claiming that the “naturalistic fallacy” is a fallacy is, I think, identical to defending moral realism. You can’t even define the naturalistic fallacy without presuming moral realism.
Your post was poorly received because it tackled a confusing topic, and failed to bring clarity. All of the supposed counter-arguments are just confabulations. I expect Less Wrongers to jump on any attempt to analyze morality in abstract terms, regardless of its conclusion, because there’s an extensive body of philosophical literature showing that such attempts produce only concentrated confusion.
Also, you shouldn’t expect different individuals within a community to all advocate consistent positions. If they did, that would mean that either the question was an easy one and not worthy of further discussion, or the community was broken and suffering from groupthink.
I agree with jimrandomh. One should expect different people to have different opinions. Furthermore, we’re most likely to respond to things that we find obviously wrong—thus, where the community is not in consensus, expect to be ‘jumped on’ no matter which position you advocate.
-a moral realist of sorts
Humans are not really free moral agents when deciding on a balance between collectivism & individuality. Given their particular mechanism for reproducing, and given a particular society with a particular distribution of their genetic relatedness to the people they typically encounter, you can compute how much an average person in that society can be expected to value their own life, vs. the lives of others. An individual human can reason about it, and choose a different valuation; but they’re then trying to act in a way inconsistent with their psychology, and a way that will lower their own genetic fitness.
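The computation gestured at here is presumably kin-selection arithmetic in the spirit of Hamilton’s rule. A minimal sketch, with illustrative numbers that are not derived from any real society:

```python
def favors_altruism(relatedness, benefit, cost):
    """Hamilton's rule: selection favors an allele for an altruistic act
    when r * b > c, i.e. when the fitness benefit to the recipient,
    discounted by genetic relatedness, exceeds the actor's fitness cost."""
    return relatedness * benefit > cost

# In a diploid, sexually reproducing species, r = 0.5 for full siblings
# and 0.125 for first cousins. An act costing the actor 1 unit of fitness
# and giving 3 units to a sibling is favored; the same act aimed at a
# first cousin is not.
print(favors_altruism(0.5, 3.0, 1.0))    # True
print(favors_altruism(0.125, 3.0, 1.0))  # False
```

On this picture, the "balance point" between self and others shifts with the typical relatedness of the people one encounters, which is the sense in which the balance is already determined by the species’ reproductive mechanism.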
I’m dubious of any moral claim, such as valuing vegetarianism or celibacy, that requires people to act contrary to their natures in a way that decreases their fitness. (Please do not respond with pointers to literature on how vegetarianism is healthy for you. For the sake of argument, presume you are a hunter-gatherer in a cold climate.) You can sit down and reason out that eating other creatures who have feelings and thoughts is bad, but when you then conclude that lions are evil for eating zebras, you’ve gone wrong somehow. I don’t buy the argument that the rules are different for humans because they can reflect on the rules.
(Also, please don’t respond by saying that genetic fitness shouldn’t matter to thoughtful modern people. Genetic fitness matters, because a moral system must be evolutionarily stable.)
A designer of a posthuman society is free, in a way that a human is not, to choose that balance. The problem presented by this post is that the parameter they would adjust to choose that balance is also one of the key parameters to adjust to choose an exploration/exploitation tradeoff. The former is an ethical issue; the latter is an optimization issue. How much weight do you give to ethics vs. goal attainment? Does it make any sense for, let’s say, a singleton AI to hold an ethical viewpoint that causes it to act less optimally?
I’m dubious of any moral claim, such as valuing vegetarianism or celibacy, that requires people to act contrary to their natures in a way that decreases their fitness.
Genetic fitness matters, because a moral system must be evolutionarily stable.
This implies that tragedies of the commons are acceptable under such a moral system.
(By the “tragedy of the commons” I mean a situation where “selfish” members of a population take advantage of a communally-maintained resource while not contributing to it, thus gaining free energy to outreproduce the “good citizens”, thereby increasing the frequency of the genes for cheating and reducing the frequency of the genes for maintenance of the common resource. Any population consisting only of “good citizens” is evolutionarily unstable because it is vulnerable to an invasion of “selfish” mutations.)
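The invasion dynamic described in this parenthetical can be sketched as a toy replicator model; the payoff numbers below are arbitrary illustrations, not part of the original argument:

```python
def defector_share_next(p, base=2.0, b=3.0, c=1.0):
    """One generation of a toy commons game under replicator dynamics.
    Everyone receives the communal benefit b scaled by the fraction of
    maintainers; maintainers alone pay the upkeep cost c. Offspring
    numbers are proportional to fitness, so the cheating type (which
    avoids c) gains frequency every generation."""
    m = 1.0 - p                        # fraction of "good citizens"
    w_defector = base + b * m          # uses the commons for free
    w_maintainer = base + b * m - c    # pays for its upkeep
    mean_w = p * w_defector + m * w_maintainer
    return p * w_defector / mean_w

p = 0.01  # a rare "selfish" mutant appears in an all-maintainer population
for _ in range(200):
    p = defector_share_next(p)
# p is now essentially 1.0: the all-maintainer population was not
# evolutionarily stable, and the commons collapses with it.
```

The per-generation advantage is exactly the avoided cost c, so the cheater frequency grows monotonically regardless of how small the initial mutation frequency is.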
This implies that tragedies of the commons are acceptable under such a moral system.
You have it exactly backwards. I made the statement because a moral system that encourages tragedies of the commons is not evolutionarily stable and hence not acceptable.
If you still think it does, please provide an explanation this time.
Thanks for the clarification. I think I have a somewhat clearer idea what you’re getting at now, but the distinction you are attempting to draw between ethics and goal attainment still seems wrongheaded to me. As the designer of a posthuman society, your ethics determine your goals; I don’t see how the two are supposed to come into conflict in the way you suggest. (Maybe the demands of evolution will place constraints on the psychologies of feasible post-humans, but that’s a rather different point.)
Nitpick:
An individual human can reason about it, and choose a different valuation; but they’re then trying to act in a way inconsistent with their psychology
As it’s currently stated, I don’t think this claim makes any sense. If they can do it, then it’s consistent with their psychology.
Why must a moral system be evolutionarily stable?
Yes, and furthermore, what does Phil mean by “evolutionarily stable”? I’m not asking for the definition of an evolutionarily stable state but rather an explanation for what Phil means by it in this context.
Because it won’t last if it isn’t. If you propose a moral system, knowing that the inevitable consequence of people adopting this system is that they will be exploited by defectors and the system will collapse, leaving an immoral and low-utility society of defectors, that’s not moral.
This only follows if you’re a consequentialist.
If you propose a moral system, knowing that the inevitable consequence of people adopting this system is that they will be exploited by defectors and the system will collapse, leaving an immoral and low-utility society of defectors, that’s not moral.
I think Phil may be saying that persistent moral systems must be evolutionarily stable, though that raises the question why the moral system needs to be persistent. One might argue that a species that can’t support its existence in a moral way should accept its own extinction (that is, the individual members of the species should accept the extinction of the whole species), along with the moral system that led to that conclusion.
This does help clarify things. Unfortunately, Conchis is right, you’re committing the naturalistic fallacy.
I think we can safely put the naturalistic fallacy in the “out-of-date philosophical claptrap” dustbin.
Tim, what do you mean by this?
That G.E. Moore’s account of goodness is over 100 years old, and is too confused to be worth bothering with.
I wasn’t attempting a broad defense of G.E. Moore’s account of goodness. I was just trying to point out what I considered mistaken moral reasoning without wasting too many words (I’m a slow writer).
I interpret your comments to have the following connotations, “It isn’t worth referencing the naturalistic fallacy like you did.” Well, I’m sorry, but I believe that referencing the naturalistic fallacy (though it would have been more precise to reference Appeals to Nature) communicated something important to a lot of people. I don’t know a more efficient way to do that.
For the record, I’m not voting your comments down. But my guess is that they wouldn’t be voted down if you explained your criticism (or even linked to an explanation of it).
Those connotations roughly capture my intention. Claiming that someone is invoking a fallacy is a kind of put-down. However, if the claimed fallacy is just someone’s opinion (about what they think the word “good” ought to refer to) it doesn’t work too well.
I am unimpressed by Moore’s claims. Labelling your intellectual opponents’ thinking as fallacious when it is not is an underhand debating tactic that gets no respect from me. Moore wasn’t even on the better side of the argument. He opposed naturalism and reductionism. It should be no mystery why I think his views sucked—it’s because they did.
Tim, I wish our exchange could be a bit more amiable, but you caused me to read up on some stuff that may be changing the way I think. For this I thank you.
I’ve already acknowledged that “Appeal to Nature” is a more precise concept and that is what I might be inclined to reference in similar situations in the future. I’m even willing to question that practice. If you have time to provide some preferred concepts/vocabulary, that would be great.
Do you agree that improving one’s genetic fitness should be a terminal value for people?
Do you agree that Phil seemed to imply that?
Phil claimed that genetic fitness mattered for ethics—which it probably does. For example, the Shakers believed that everyone should be celibate—and now there aren’t any of them around any more.
There would still be Shakers around if they had been able to keep up the practice of adopting children indefinitely. According to Wikipedia, that only stopped working when adoption became the province of the state. Wikipedia also says that there are still four Shakers today and people may join them if they like.
People can choose their own values. Inclusive genetic fitness seems like a reasonable-enough maximand to me—because it is mine—see:
http://alife.co.uk/essays/nietzscheanism/
I’m dubious of any moral claim, such as valuing vegetarianism or celibacy, that requires people to act contrary to their natures in a way that decreases their fitness. [...] but when you then conclude that lions are evil for eating zebras, you’ve gone wrong somehow. I don’t buy the argument that the rules are different for humans because they can reflect on the rules.
(Emphases mine). Please settle on a set of applicable subjects. I’m with you that moral claims apply to “people”. I don’t think many here will claim that humans exhaust that space in theory, or that lions inhabit it currently.
I rewrote it to spell it out in tedious detail.
The “tedious” detail makes your point clearer.
Though your linked resources for Exploration and Exploitation don’t help much. One keeps giving me page load errors, and the other appears to be a symposium description and schedule. We have had some discussion of the concept here on LW. And maybe it would help to add an inline explanation that it is the issue of trading off between strategies that are known to work well, and trying other strategies to find out how well they work.
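The suggested inline explanation can be made concrete with a minimal epsilon-greedy sketch, a standard bandit heuristic for this tradeoff; the payoff numbers below are invented for illustration:

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Pick a strategy: usually exploit the best-looking one so far,
    but with probability epsilon explore a random one instead."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))                    # explore
    return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit

def update(estimates, counts, arm, reward):
    """Incrementally average the observed rewards for the chosen strategy."""
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# Two strategies with unknown true payoffs 0.3 and 0.7.
true_payoffs = [0.3, 0.7]
estimates, counts = [0.0, 0.0], [0, 0]
for _ in range(5000):
    arm = epsilon_greedy(estimates)
    update(estimates, counts, arm, random.gauss(true_payoffs[arm], 0.1))
# A small, steady amount of exploration is enough to discover that the
# second strategy is better, after which it is exploited almost always.
```

With epsilon set to zero the agent can lock onto whichever strategy happened to look good first; with epsilon set too high it wastes most of its actions on known-inferior strategies. That is the tradeoff the post’s balance parameter controls.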