Evolutionary ethics aims to help people understand why we value the things we do. It doesn’t have the ability to say anything about what we ought to value.
Evolutionary ethics provides a solution to the “ought-from-is” problem — in a cold uncaring universe governed by physical laws, where does the preference ordering/utility function of human values come from? That is a question about humans, and evolutionary ethics is the name of the scientific field that studies and answers it.
In order to decide “what we ought to value”, you need to create a preference ordering on moral systems, to show that one is better than another. You can’t use a moral system to do that — any moral system (that isn’t actually internally inconsistent) automatically prefers itself to all other moral systems, so using a moral system to select a moral system is just a circular argument. The same logic applies to any moral system you plug into such an argument.
So to discuss “what we ought to value” you need to judge moral systems and their consequences using something that is both vaguer and more practical than a moral system. Such as psychology, or sociology, or political expedience, or some combination of these. All of which occur in the context of human nature and human moral intuitions and instincts — which is exactly what evolutionary ethics studies and provides a theoretical framework to explain.
Thanks for explaining.
I think this is tempting but ultimately misguided, because the choice of a ‘more practical and vague’ system by which to judge moral systems is just a second-order moral system in itself, one which happens to be practical and vague. This is metanormative regress.
The only coherent solution to the “ought-from-is” problem I’ve come across is normative eliminativism: ‘ought’ statements are either false or a special type of descriptive statement.
I encourage you to look into evolutionary ethics (and evolutionary psychology in general): I think it provides both a single, well-defined (though vague) ethical foundation and an answer to the “ought-from-is” problem. It’s a branch of science, rather than philosophy, so we are able to do better than just agreeing to disagree.
I’ve looked into these things, and as far as I can tell, all such fields or theories either do not attempt to solve the is-ought problem (as e.g. evo psych does not), or attempt to do so but (absolutely unsurprisingly) completely fail.
What am I missing? What’s the answer?
Humans are living, evolved agents. They thus each individually have a set of goals they attempt to optimize: a preference ordering on possible outcomes. Evolution predicts that, inside the distribution the creature evolved in, this preference ordering will be nearly as well aligned to the creature’s evolutionary fitness as is computationally feasible for the creature.
This is the first step in ought-from-is: it gives us a preference ordering which, if approximately coherent (i.e. not significantly Dutch-bookable — something evolution seems likely to encourage), implies an approximate utility function — a separate one for each human (or other animal). As in “this is what I want (for good evolutionary reasons)”. So, using agent fundamentals terminology, the answer to the ought-from-is question “where does the preference ordering on states of the world come from?” is “every evolved intelligent animal is going to have a set of evolved and learned behaviors that can be thought of as encoding a preference ordering (albeit one that may not be completely coherent, to the extent that it only approximately fulfills the criteria for the coherence theorems).” [It even gives us a scale on the utility function, something a preference ordering alone doesn’t give us, in terms of the approximate effect on the evolutionary fitness of the organism — which ought to correlate fairly well with the effort the organism is willing to put into optimizing the outcome. This solves things like the utility monster problem.]
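To make the ordering-to-utility step concrete, here is a minimal toy sketch (my own illustration, not part of the argument above): for a finite set of outcomes, a complete and transitive preference ordering can always be summarized by an ordinal utility function, whereas an intransitive one cannot. The outcomes and preferences are made up purely for demonstration.

```python
# Toy illustration: a complete, transitive preference ordering over finitely
# many outcomes can be summarized by an (ordinal) utility function.
# Outcomes and preferences here are invented purely for demonstration.
from itertools import permutations

outcomes = ["meat", "berries", "rest"]

# "a beats b" pairs: a coherent (transitive) ordering, meat > berries > rest
prefers = {("meat", "berries"), ("meat", "rest"), ("berries", "rest")}

def is_transitive(prefers, outcomes):
    """a > b and b > c must imply a > c; otherwise the agent can be money-pumped."""
    return not any(
        (a, b) in prefers and (b, c) in prefers and (a, c) not in prefers
        for a, b, c in permutations(outcomes, 3)
    )

def ordinal_utility(prefers, outcomes):
    """Score each outcome by how many alternatives it beats."""
    return {o: sum((o, x) in prefers for x in outcomes) for o in outcomes}

if is_transitive(prefers, outcomes):
    print(ordinal_utility(prefers, outcomes))
    # {'meat': 2, 'berries': 1, 'rest': 0}; any order-preserving numbers would do
```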
So far, that’s just Darwinism, or arguably the subfield Evolutionary Psychology, since it’s about the evolution of behavior. And so far the preference ordering “ought” is “what I want” rather than an ethical system, so arguably doesn’t yet deserve the term “ought” — I want to have a billion dollars, but saying that I thus “ought” to have a billion dollars is a bit of a stretch linguistically. Arguably so far we’ve only solved “want-from-is”.
Evolutionary Ethics goes on to explain why humans, as intelligent social animals, have evolved a set of moral instincts that lets them form a set of conventions for compromises between the preference orderings of all the individual members of a tribe or other society of humans, in order to reduce intra-group conflicts by forming a “social compact” (to modify Hobbes’ terminology slightly). For example, the human sense of fairness encourages sharing of food from successful hunting or gathering expeditions, our habit of forming friendships produces alliances, and so forth. The results of this are not exactly a single coherent preference ordering on all outcomes for the society in question, let alone a utility function, more a set of heuristics on how the preference orderings of individual tribal members should be reconciled (‘should’ is here being used in the sense that, if you don’t do this and other members of the society find out, there are likely to be consequences). In general, members of the society are free to optimize whatever their own individual preferences are, unless this significantly decreases the well-being (evolutionary fitness) of other members of the society. My business is mine, until it intrudes on someone else: but then we need to compromise.
So now we have a single socially agreed “ought” per society — albeit one fuzzier and with rather more internal structure than people generally encode into utility functions: it’s a preference ordering produced by a process whose inputs are many preference orderings (and might thus be less coherent). This moral system will be shaped both by humans’ evolved moral instincts (which are mostly shared across members of our species, albeit less so by sociopaths), as is predicted by evolutionary ethics, and also by sociological, historical and political processes.
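As a toy illustration of that kind of reconciliation process (my sketch, with invented data; the real social process is far messier than any single voting rule), here is a Borda-style aggregation of several individual preference orderings into one society-level ranking. Majority pairwise preferences can also cycle (Condorcet’s paradox), which is one reason the aggregate is less coherent than the individual orderings feeding into it.

```python
# Hypothetical sketch: combining individual preference orderings into one
# society-level ranking with a Borda-style count. The rankings are invented
# data purely for illustration.
from collections import Counter

# Each member ranks three options, best first.
rankings = [
    ["share_food", "keep_own", "raid_neighbours"],
    ["keep_own", "share_food", "raid_neighbours"],
    ["share_food", "raid_neighbours", "keep_own"],
]

def borda(rankings):
    scores = Counter()
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position  # top choice earns the most points
    return scores.most_common()

print(borda(rankings))
# [('share_food', 5), ('keep_own', 3), ('raid_neighbours', 1)]
```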
So, in philosophical terminology:
moral realism: no (However, human evolved moral instincts do tend to provide some simple consistent moral patterns across human societies, as long as you qualify all your moral statements with the rider “For humans, …”. So one could argue for a sort of ‘semi-realism’ for some simple moral statements, like “incest is bad” — that has a pretty clear evolutionary basis, and is pretty close to socially universal.)
moral relativism: yes — per society, and for some basic patterns/elements for the entire human species, but with no guarantees that these would apply to a very different intelligent social species (though there might well be commonalities for good evolutionary reasons — anything with sexual reproduction and deleterious recessives is likely to evolve an incest taboo; the rough calculation below illustrates why).
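To put numbers on that last claim, here is a rough back-of-the-envelope calculation (mine, using the standard population-genetics formula for inbreeding, not anything stated in the thread): the chance of an offspring being homozygous for a rare deleterious recessive allele rises sharply under close inbreeding.

```python
# Standard population-genetics estimate: P(homozygous for a recessive allele)
# = q**2 + F * q * (1 - q), where q is the allele frequency in the population
# and F is the inbreeding coefficient (F = 1/4 for offspring of full siblings).
# The allele frequency below is an arbitrary illustrative value.

def homozygote_risk(q, F=0.0):
    return q**2 + F * q * (1 - q)

q = 0.01  # a rare deleterious recessive
print(homozygote_risk(q))          # 0.0001   (random mating)
print(homozygote_risk(q, F=0.25))  # ~0.00258 (full-sibling mating), ~26x higher
```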
Given Said Achmiz’s comment already has 11 upvotes and 2 agreement points, should I write a post explaining all this? I had thought it all rather obvious to anyone who looks into evolutionary ethics and thinks a bit about what this means for moral philosophy (as quite a number of moral philosophers have done), but perhaps not.
This comment really does help me understand what you’re saying better. If you write a post expanding it, I would encourage you to address the following related points:
Can you have some members of a society who don’t share some of the consistent moral patterns which evolved, or do you claim that every member reliably holds these morals?
Can someone decide what they ought to value using this system? How?
Is it wrong if someone simply doesn’t care about what society values? Why?
How can we tell that your story tells us what we ought to value rather than simply explaining why we value the things we do?
Do you make a clear distinction between normative ethics and descriptive ethics? What is it?
Thanks, I’ll keep that in mind when deciding what to cover in the post when I write it.
Briefly for now, just to continue the discussion a bit:
Can you have some members of a society who don’t share some of the consistent moral patterns which evolved, or do you claim that every member reliably holds these morals?
The former: sociopaths, for example, are genetically predisposed to be less moral, and it has often been suggested that this is an adapted form of social opportunism — in game-theoretic terms a different strategy, perhaps one with a stable equilibrium frequency, rather than simply a genetic disease. They may still get punished or shunned as a result, if their morality differs in a way that other members of the society disapprove of.
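For what a “stable equilibrium frequency” means here, a minimal sketch (my illustration, not the commenter’s model) is the classic hawk-dove game, in which an exploitative strategy neither takes over nor dies out but settles at a stable mixed frequency:

```python
# Hawk-dove toy model: "hawk" (exploit/escalate) vs "dove" (share/back down).
# With resource value V and fight cost C > V, the evolutionarily stable
# frequency of hawks is p* = V / C. Parameter values here are arbitrary.
V, C = 2.0, 6.0

def hawk_payoff(p):
    # Expected payoff to a hawk when a fraction p of the population are hawks.
    return p * (V - C) / 2 + (1 - p) * V

def dove_payoff(p):
    return (1 - p) * V / 2

p_star = V / C
print(p_star)                                    # 0.333... of the population
print(hawk_payoff(p_star), dove_payoff(p_star))  # equal payoffs at equilibrium
```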
Can someone decide what they ought to value using this system? How?
How a person wants to make decisions is up to them. Most people make these decisions in a way that is influenced by their own moral instincts, social pressures, their circumstances and upbringing, their personality, expedience, and so forth. Generally, acting contrary to your instincts and impulses is challenging and stressful, so it probably only makes sense to go against them when there’s a clear rational need — for example, when you’re rationally aware that they are maladaptive or antisocial in modern society.
Is it wrong if someone simply doesn’t care about what society values? Why?
In the context of their society of humans, yes, it is considered wrong (in almost all societies). Note that this is a morally relative statement, not a morally realist one. However, simply not caring at all is pretty atypical under human moral intuitions, and from an evolutionary ethics point of view it is generally also pretty maladaptive (unless, say, you have absolute power): it’s behavior that will often get you imprisoned, exiled or killed. So as relative statements go, this is a pretty strong one.
How can we tell that your story tells us what we ought to value rather than simply explaining why we value the things we do?
The point of evolutionary ethics is that there is no meaningful, uniquely defined, separate sense of “ought” much stronger than “according to most common moral systems for this particular social species, or most similar species”. So the best you can do is explain why we, or most societies of a certain type, or most societies of a certain species, believe that that’s something you “ought” to do. This approach isn’t a form of moral realism.
Do you make a clear distinction between normative ethics and descriptive ethics? What is it?
Normative ethics describes my opinion about what I think people should do. Descriptive ethics describes what many people think people should do. In a society that has a social compact, the latter carries a lot more weight. However, I’m perfectly happy to discuss ethical system design: if we altered the ethics of our (or some other) society in a certain way, then the effects on the society would be this or that, which would or wouldn’t tend to increase or decrease things like human flourishing (which is itself explained by evolutionary psychology). That sounds a lot like normative ethics, but there’s a key difference: the discussion is based on a (hopefully mutually agreed) assessment of the relative merits of the predicted consequences, not “because I said so” or “because I heard God say so”.
I’m afraid that what you’ve written here seems… confused, and riddled with gaps in reasoning, unjustified leaps, etc. I do encourage you to expand this into a post, though. In that case I will hold off on writing any detailed critical reply, since the full post will be a better place for it.
Fair enough — then I’ll add that to my list of posts to write.