If consequences were unimportant, why have the rules that we have? Surely you agree that proscriptions against rape, murder, theft, torture, arson, etc. all share the common thread of forbidding the infliction of undue suffering on another person?
To play the devil’s advocate (I am not a deontologist myself): the converse question, i.e. why care about the consequences we care about, is about as legitimate as yours. It is not entirely unimaginable for a person to have a strong instinctive aversion towards murder while caring much less (or not at all) about its consequences. Many people indeed reveal such preferences by voting for inaction in the Trolley Problem or by subscribing to Rand’s Objectivism. You seem to think that those people are in error, having actually derived their deontological preferences from harm minimisation and then forgotten that the rules aren’t primary—but isn’t it at least possible that their preferences are genuine?
It’s hard for me to say when and whether other people are in error, especially moral error. I don’t deny that it’s possible people have a strong aversion to murder while not caring about the consequences. In fact, in terms of genetic fitness, going out of your way to avoid being the one who personally stabs the other guy while not caring much whether he gets stabbed would have helped you avoid both punishment and risk.
But from my observations, most people are upset when others suffer and die. This tells me most of us do care, though it doesn’t tell me how much. I don’t actually rail against people who care less than I do; as a consequentialist, one of the problems I need to solve is incentivizing people to help even if they only care a little bit.
Caring is like activation energy in a chemical reaction; it has to get to a certain point before help is forthcoming. We can try to raise people’s levels of caring, which is usually exhausting and almost always temporary, or we can make helping easier and more effective, and watch what happens then. If it becomes more forthcoming, we can believe that consequences and cost-benefit balances do matter to some degree.
This was a circuitous answer, I know. My reply to you is basically, “Yes, it’s possible, but people don’t behave as if they literally care nothing for the consequences to other people’s well-being.”
I can’t but agree with all you have written, but I have the feeling that we are now discussing a question slightly different from the original one: “how is the point of morality rules?” People indeed don’t behave as if they literally care nothing for the consequences to other people’s well-being, but many people behave as if, in certain situations, the consequences are less important than the rules. Often it is possible to persuade them to accept the consequentialist viewpoint by abstract argument—more often than it is possible to convert a consequentialist to deontology by abstract argument—but that only shows that consequentialism is more consistent with abstract thinking. And there are situations, like the Trolley Problem, where even many self-identified consequentialists prefer rules over consequences, even if it necessitates heavy rationalisation and/or fighting the hypothetical.
It seems natural to conclude that for many people, although the rules aren’t the whole point of morality, they are certainly one of its points and stand independently of another point, which is the consequences. Perhaps that isn’t a helpful answer if you want to understand, on the level of gut feelings, how the rules can trump solid consequentialist reasoning even in the absence of uncertainty and bias, when your own deontologist intuitions are very weak. But at least it should be clear that the answer to the question you asked in your topmost comment,
[if the point of morality is rules] why are the rules not completely random?
has something to do with our evolved intuitions. And even if you disagree with that, I hope you agree that whatever the answer is, it would not change much if in the conditional we replace “rules” by “consequences”.
It seems natural to conclude that for many people, although the rules aren’t the whole point of morality, they are certainly one of its points and stand independently of another point, which is the consequences.
I agree with you there. But even though people seem to care about both rules and consequences, as separate categories in their mental conceptions of morality, it does seem as if the rules have a recurring pattern of bringing about or preventing certain particular consequences. Our evolved instincts make us prone to following certain rules, and they make us prone to desiring certain outcomes. Many of us think the rules should trump the desired outcomes—but the rules themselves line up with desired outcomes most of the time. Moral dilemmas are just descriptions of those rare situations when following the rule won’t lead to the desired outcome.