This is an assertion, not an argument. Why is morality about rules, not consequences?
I don’t actually understand what people mean when they say in principle it’s the rules which matter, not the balance of the good and bad consequences which occur. If consequences were unimportant, why have the rules that we have? Surely you agree that proscriptions against rape, murder, theft, torture, arson, etc all have the common thread of not causing undue suffering to another person?
I can understand (and in most cases accept) the argument that human beings are too flawed to figure out and understand the consequences. Therefore, in most cases we should stick to tried and tested rules which have reduced suffering and created peaceful societies in the past and shut down the cognitive processes which say, “But maybe I could murder the leader and seize power just this once if the whole group will benefit....”
But I can’t see how the point of morality is rules. If that’s the case, why are the rules not completely random? Why is morality not fashion?
By the way, 10 people is probably too low a number for me to sacrifice myself, especially given that I can just donate a large portion of my income to save thousands of lives. But if in some bizarre world, the only way for me to save X people was to be subjected to rape (I’m female, BTW), for sufficiently high values of X I should damn well hope I’ll step up. (And I’m not proud that 1 or 2 or 10 probably wouldn’t do it for me. I’m selfish, and I am what I am, but this is not my ideal self.)
The one who offered this sadistic choice is of course evil, because ze could have fed the starving kids without raping me, thus creating the maximum well-being in zir capacity. (Knock-on effects of encouraging sadistic rapists should be factored into the consequential calculation, but I have no problem treating a hypothetical as pure and simple.)
I’m not actually knocking rules here; I think we run on corrupted hardware and in our personal lives we should follow rules quite strictly. I’m just saying that the rules should be (and are) derived from the consequences of those actions.
To play the devil’s advocate (I am not a deontologist myself): the converse question, i.e. why we care about the consequences we care about, is about as legitimate as yours. It is not entirely unimaginable for a person to have a strong instinctive aversion to murder while caring much less (or not at all) about its consequences. Many people indeed reveal such preferences by voting for inaction in the Trolley Problem or by subscribing to Rand’s Objectivism. You seem to think that those people are in error, having actually derived their deontological preferences from harm minimisation and then forgotten that the rules aren’t primary. But isn’t it at least possible that their preferences are genuine?
It’s hard for me to say when and whether other people are in error, especially moral error. I don’t deny that it’s possible people have a strong aversion to murder while not caring about the consequences. In fact, in terms of genetic fitness, going out of your way to avoid being the one who personally stabs the other guy while not caring much whether he gets stabbed would have helped you avoid both punishment and risk.
But from my observations, most people are upset when others suffer and die. This tells me most of us do care, though it doesn’t tell me how much. I don’t actually rail against people who care less than I do; as a consequentialist one of the problems I need to solve is incentivizing people to help even if they only care a little bit.
Caring is like activation energy in a chemical reaction; it has to get to a certain point before help is forthcoming. We can try to raise people’s levels of caring, which is usually exhausting and almost always temporary, or we can make helping easier and more effective, and watch what happens then. If it becomes more forthcoming, we can believe that consequences and cost-benefit balances do matter to some degree.
This was a circuitous answer, I know. My reply to you is basically, “Yes, it’s possible, but people don’t behave as if they literally care nothing for consequences to other people’s well being.”
I can’t but agree with all you have written, but I have the feeling that we are now discussing a question slightly different from the original one: how the point of morality can be rules. People indeed don’t behave as if they literally care nothing for consequences to other people’s well-being, but many people behave as if, in certain situations, the consequences are less important than the rules. Often it is possible to persuade them to accept the consequentialist viewpoint by abstract argument (more often than it is possible to convert a consequentialist to deontology by abstract argument), but that only shows consequentialism is more consistent with abstract thinking. And there are situations, like the Trolley Problem, where even many self-identified consequentialists choose to prefer rules over consequences, even if it necessitates heavy rationalisation and/or fighting the hypothetical.
It seems natural to conclude that for many people, although the rules aren’t the whole point of morality, they are certainly one of the points and stand independently of another point, which is the consequences. Perhaps it isn’t a helpful answer if you want to understand, on the level of gut feelings, how the rules can trump solid consequentialist reasoning even in the absence of uncertainty and bias, especially if your own deontologist intuitions are very weak. But at least it should be clear that the answer to the question you asked in your topmost comment (“[if the point of morality is rules] why are the rules not completely random?”) has something to do with our evolved intuitions. And even if you disagree with that, I hope you agree that whatever the answer is, it would not change much if in the conditional we replaced “rules” with “consequences”.
I agree with you there. But even though people seem to care about both rules and consequences, as separate categories in their mental conceptions of morality, it does seem as if the rules have a recurring pattern of bringing about or preventing certain particular consequences. Our evolved instincts make us prone to following certain rules, and they make us prone to desiring certain outcomes. Many of us think the rules should trump the desired outcomes—but the rules themselves line up with desired outcomes most of the time. Moral dilemmas are just descriptions of those rare situations when following the rule won’t lead to the desired outcome.
Compared to what? Or corrupted from what more functional state?
Hm, I used the local vernacular instead of explaining myself more clearly. You make a valid point.
How about this: Our brain was not created in one shot. New adaptations were layered over more primitive ones. The neocortex and various other recent adaptations, which arose around the time the genus Homo came into being, are most likely what give me the thing I call “consciousness.” The cluster of recently adapted conscious modules makes up the voice in my head which narrates my thoughts. I restrict my definition of “I” to this “conscious software.” This conscious “I” has absorbed various values which augment the limited natural empathy and altruism that was beneficial to my ancestors. Obviously, “I” only care about “me.”
But the voice which narrates my thoughts does not always determine the actions my body performs. More ancient urges like sex, survival, and self-interest most often prevail when I try to break too far out of my programming by trying too hard to follow my verbal values to their fullest extent.
But these ancient functions don’t exactly get a say when I’m thinking my thoughts and determining my values. So, from the perspective of my conscious, far-mode modules, which have certain values like “I should treat people equally,” “I should be honest,” and “My values should be self-consistent and complete,” older modules are often trying to thwart me.
This relates to moral dilemmas because when the I in my brain is trying to honestly and accurately calculate what the best course of action would be, selfishness and power-grabbing instincts can sneak in and wordlessly steer my decisions so the “best” course of action “coincidentally” ends up with me somehow getting a lot of money and power.
This is what I meant when I used the shorthand.
Thanks for the explanation. Do you intend terms like ‘software’ and ‘hardware’ and ‘programming’ to be metaphorical?
If some primitive impulse overrides your conscious deliberation, why do we call that an ‘action’ at all? We don’t think of reflexes as actions, for example, at least not in any sense to which responsibility is relevant.
Yeah. I borrowed my vocabulary for discussing this kind of thing from a community dominated by programmers, and I myself am a pretty math-y kind of person. :)
In the end, I feel responsible for the actions of my body caused by selfish impulses, even if I don’t verbally approve of them. And society holds me responsible, too. Regardless of whether it’s fair, I have to work in a world where I’m expected to control my brain.
Besides, I am smarter than my brain, after all. There are limits to how much I can exert conscious control over ancient motivations—but as far as I’m concerned, it’s totally fair to criticize me for not doing my absolute best to reach that limit.
For example, the brain is a creature of habit, and because I haven’t started my independent life yet, I’m in the perfect position to adopt habits that will improve the world optimally. I can plan ahead of time to only spend up to a certain dollar amount on myself and my friends/family (based on happiness research, knowledge of my own needs, etc) and throw any and all surplus income into an “optimal philanthropy” bucket which must be donated. My monkey brain will just think of that money as “unavailable” and donate out of habit, allowing me to maximize my impact while minimizing difficulty for myself. (Thinking of meat as “just unavailable” is how I and most other vegetarians organize our diets without stress.)
I know I can do this, the science backs me up; if I do not, and succumb to selfish impulses anyway, that’s definitely my fault. I have the opportunity to plan ahead and manipulate my brain; if my values are to be self-consistent, I must take it.
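The precommitment described above can be sketched as a simple rule. This is purely illustrative; the function name and the dollar figures are hypothetical, and the personal cap would in reality come from the happiness research and self-knowledge mentioned above:

```python
def split_income(income, personal_cap):
    """Split a paycheck under the precommitment rule: spend at most
    the personal cap, and treat everything above it as 'unavailable'
    money that goes straight into the donation bucket."""
    personal = min(income, personal_cap)
    donation = max(0.0, income - personal_cap)
    return personal, donation

# Hypothetical month: $3000 income against a $2000 personal cap.
personal, donation = split_income(3000.0, 2000.0)
# personal == 2000.0, donation == 1000.0
```

The point of fixing the cap in advance is that the split becomes a habit rather than a repeated decision, so the surplus never registers as spendable.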
Thanks, by the way, for indulging my question and elaborating on something tangential to your point.
Your remark that you are smarter than your brain is similar to the ‘corrupted hardware’ claim insofar as both seem to me to be in tension with the software/hardware metaphor: if your brain is your hardware, and your rational deliberation and reflection is software, then it doesn’t make sense to say that the brain isn’t as smart as you (the software) are. It wouldn’t make sense to say of hardware that it doesn’t [sufficiently] perform the functions of software. Hardware and software do different things.
So it has to be that you have two different sets of software: a native software that your brain is running all the time, which is selfish and uncontrolled, alongside an acquired software which is rational and with which you self-identify. If the brain is corrupted, it’s not in its distinctive functions, but just in the fact that it runs this native software that you can’t entirely control and can’t get rid of.
But that still seems off to me. We can’t really call the brain ‘corrupted hardware’ because we have no idea what non-corrupted hardware would even look like. At the moment, general intelligence is only possible on one kind of hardware: ours. So as far as we know, the hacked-together mess that is the human brain is actually what general intelligence requires. Likewise, the non-rational software apparently doesn’t stand in relation to the rational software as an alien competitor. The non-rational stuff and the rational stuff seem to be joined everywhere, and it’s not at all clear that the rational stuff even works without the rest of it.
Well, when metaphors break, I say just toss ’em. It’s not exactly like the distinction between hardware and software; your new metaphor makes a little bit more sense in terms of what we’re discussing now, but in the end, the brain is only completely like the brain.
We could think of it this way: the brain is like a computer with an awful user interface, which forces us to constantly run a whole lot of programs which we don’t necessarily want and can’t actually read or control. It also has a little bit of processing power left for us to install other applications. The only thing we actually like about our computer is the applications we chose to put in, even though not having the computer at all would mean we had no way to run them.
I was not being 100% serious when I said I was smarter than my brain; it was sort of intended to illustrate the weird tension I have: all that I am is contained in my brain, but not all of my brain is who I am.
This hacked-together brain results in some general intelligence, but it’s highly unlikely that it’s optimized for general intelligence, or that we couldn’t, even in theory, imagine a better substrate for it. In short, “corrupted hardware” means “my physical brain is not optimized for the things my conscious mind values.”
Point taken, and you’re probably right about the optimization thing. Thanks for taking the time to explain.
You’re welcome! :) Thank you for forcing me to think more precisely about this.
Wiki: Corrupted hardware
I think my questions (idle though they may be) stand.
My understanding of the work of Haidt is that much of morality is pattern matching on behavior and not just outcomes, and that’s what I would expect to see in evolved social creatures.