The standard of rationality required to make your view correct judges all humans as “irrational”. As a result, what you say is technically true but practically false.
Huh? The sanctity argument is based on all (or at least many important) humans being irrational. My argument is that it’s an OK heuristic to discourage trades where irrationality reigns, but rational agents don’t need it.
Again, this statement is only true under a standard of “rationality” so high that no humans meet it.
Similarly:
I won’t take away strategic options from presumed-competent actors.
If the actors in question are human, then the presumption of competence is incorrect, by the standard of “competence” required to resist the pressures in question.
Interesting. Do you extend this to all consequentialist philosophies? They’re probably technically correct, but deontology is better for humans due to imperfect rationality?
The problem is that the sacrosanct topics (and deontological mandates) are devised by exactly the same incompetents who can’t implement trading (and consequentialist moral decisions).
Not only I, but no less than Nick Bostrom, takes the view that deontology as a means of establishing boundary conditions for consequentialism is the correct approach to large-scale ethical considerations. (You can read about this in his paper Infinite Ethics [PDF link] [note that an earlier version of this paper was titled “Infinitarian Challenges to Aggregative Ethics”].)
An alternative way to come to essentially the same point—“consequentialist ethics is technically correct but ‘deontology’ is better for imperfect agents”—is rule consequentialism (and this is what makes up a large part of my own current views on ethics).
Note, by the way, that deontology is not the only available ‘crutch’, so to speak; there is also virtue ethics (which is, to a first approximation, the most natural and efficient way for human minds to implement any kind of moral rule, be it consequentialist or deontological).
(And all these are compatible: one may be act-consequentialist / world-consequentialist in principle, rule-consequentialist in theory, deontologist in overall implementation of theory, and virtue-ethicist in detailed, everyday practice. These are not contradictions, but simply the way in which the goal—ideal consequences—is achieved.)
The problem is that the sacrosanct topics (and deontological mandates) are devised by exactly the same incompetents who can’t implement trading (and consequentialist moral decisions).
Indeed not; trading, and act-consequentialist decisions in general, are implemented by individuals, whereas deontological mandates are devised by egregores (or, less poetically: they emerge via cultural—and, on a much larger scale, biological—evolution).
(In fact, and equally interestingly, this is true even on an individual level: you may devise a deontological mandate for yourself, at leisure, after consideration, drawing on all your faculties, and update it—in moments of sober reflection, following great life events, for instance—as you gain wisdom; while, on the other hand, if you make every decision, evaluate every trade, on an act-consequentialist basis, then you must be making such decisions constantly, with only the faculties available to you in the moment… and even in your best moments, your faculties are less than the sum total of that which you can bring to bear over time; how long, then, until you make a bad decision? Most people make them daily… can you sustain perfect decision-making for even a single day, much less a lifetime?)
Awesome, thank you. I think I have the crux now, and can successfully ITT the sanctity argument, or at least one aspect of it. It’s about recognizing what complexity of model one can productively follow.
One (important) caveat: what you say is true denotatively, but perhaps misleading connotatively. Remember that the degree of complexity-of-model that would need to be constructed (and comprehended) in order to apply act consequentialism directly, is not merely “large” but computationally intractable, even given all resources available in the observable universe.
And then, of course, to each step of simplification, we apply the more “mundane” practical considerations, such as boundedness with respect to time and available cognitive resources, and human cognitive biases and other frailties, and so on. In this way we proceed down the chain from theory to practice, as I outlined in the grandparent comment.
Sure, agreed. Consequentialism in a limited agent (which is all of us) looks a lot like deontology, with the significant distinction that the rules are internal, not external. Each agent can (and must) pick the specific rules it thinks best implement its preferred consequences within its constraints of knowledge and decision-making.
This distinction is illusory.

First, picking rules that implement your preferred consequences is hard. Is it entirely out of the question that one might defer selection of consequentialist rules to trusted authorities, or to processes that seem likely to have generated good rules? I think it is not; it seems quite reasonable to me.
But more importantly: however ‘external’ you consider any ethical rule to be, you are still the one who decides to follow it. If you think that the deontological rules that you must follow came from God himself, handed down to Moses on Mount Sinai, that is still a judgment that you have made. If you conclude that Kant was right, and the categorical imperative is the root of all morality, you are still the one who has come to that conclusion. However much of your rule-making you surrender to any system—however external, however authority-based—you are still the one who chose that surrender.
It may feel different, introspectively. It may feel like finding rules that are true, instead of selecting rules that are useful. But the decision, ultimately, is still yours—for there is no one else who could make it.
This may be technically true in a sense, but I disagree with the connotation. If you live in an English-speaking country, there’s a sense in which you “can” “decide” to speak only Swahili instead of English, but it would be more sensible to say that that decision has already been made for you by your society. Likewise for moral rules.
It’s not obvious to me that this is true in any significant way. Specifically, I am skeptical about the “likewise for moral rules” part of your argument; can you expand on that? How, exactly, is it likewise?
After all, if I was born and raised in an English-speaking country, then I learned English effortlessly. Learning Swahili, on the other hand, takes considerable effort, and for some people it may not even be feasible (not everyone’s good at learning foreign languages, especially without immersion). Meanwhile, selecting different moral rules requires nothing remotely approaching that much effort. Furthermore, speaking Swahili to someone who doesn’t understand it (i.e., basically everyone you ever interact with, in an English-speaking country) is tremendously counterproductive and harmful to your interests, whereas following a different set of moral rules… can be harmful, but in practice it’s often totally invisible to most of the people you interact with on a daily basis (if anything, it can be less obtrusive and less detectable by third parties than merely following the moral rules you were raised with, if you do the latter more faithfully than most members of your community!).
But perhaps an even more important point is that even if what you say is true, it’s no less true for rule-consequentialist moral rules than for deontological moral rules or virtue-ethical moral rules. Your objection, even if we accept it, does not make the distinction Dagon raised any more real.
Ok, let’s take kidney sales as a specific. Whether it’s “each agent must decide whether to buy or sell a kidney today” or “each agent must decide whether to accept rules that allow buying or selling a kidney, and then must decide if that rule should apply to this specific situation”, the agent must decide, right?
Of course—but if the rule is not formulated so as to make it nigh-trivial to determine whether it applies to any given situation, then it’s not a very good rule, is it?
And then all the considerations I’ve already outlined in my previous comments apply.
We’re talking about humans though, not rational agents.