You can call it “trader mindset” if you like, but it feels a lot more like “agent mindset” to me. Every decision, including trades and non-trade actions, is made in order to increase the probability of some future world-state. Cutting off some avenues of optimization (for a rational, well-informed agent) is just plain incorrect.
Hell, whether it’s a trade or “extortion” is irrelevant—if paying makes for a better future-universe, I’m going to do that. I’ll continue to work to reduce the ability to set up such annoying situations (much like I’ll continue to try to reduce kidney disease), and to provide more options for those people for whom all choices are unpleasant (cheaper artificial kidneys, fewer rent-seeking predators). But I won’t take away strategic options from presumed-competent actors.
I totally accept arguments that most people aren’t rational, well-informed agents and therefore other non-rational agents (us) can somehow protect them from bad decisions by calling some topics off-limits. But that’s not what it seems you’re saying.
Conflating the trader mindset with the agent mindset—and making a profit on this transaction with producing preferred world-states—is exactly the sort of thing I’m claiming the trader mindset does, erroneously.
How is consideration of decisions about trade not part of agent mindset? “Making a profit” isn’t a special thing, it’s just one more possible future world-state that one might prefer. So yes, I think trading is part of agency. Where’s the error?
“Making a profit” privileges your unit of account as something intrinsically valuable, instead of considering the desirability of outcomes directly. This is sometimes a good approximation (and indispensable for running a business), but it is not actually an attempt to directly discern the world-state features you can change that you care about. This is what I mean by collapsing the map-territory distinction.
Ah, I see. I’m so deeply in the consequentialist/market view of the world that I mentally translate “making a profit” as not necessarily monetary, but just “improving my perceived state of the world”. I also say that I profit by going to bed on time in order to feel good the next day, and that I profit by donating money to a charity that I believe improves the world more than I otherwise could with that money. “Profit” is just shorthand for “result in a better world-state”, and every action is trading the un-taken decision for the taken one.
In the narrower sense, “making a monetary profit” can absolutely be a bad decision. One doesn’t need to categorize things as sacred to make good decisions.
The thing I want you to notice here is that using “profit” as the default term for this makes profiting from a single transaction (e.g. arbitrage) the central case of acting to produce desired world-states. I expect that simply reordering material reality to suit your preferences (e.g. tidying your room), or improving the capacity of aligned systems (e.g. learning to communicate better with your friends) will occur to you less often as things you might want to focus on, than it would if you treated profit more explicitly as a special case of beneficial actions.
The standard of rationality required to make your view correct judges all humans as “irrational”. As a result, what you say is technically true but practically false.
Huh? The sanctity argument is based on all (or at least many important) humans being irrational. My argument is that it’s an OK heuristic to discourage trades where irrationality reigns, but rational agents don’t need it.
Again, this statement is only true under a standard of “rationality” so high that no humans meet it.
Similarly:
I won’t take away strategic options from presumed-competent actors.
If the actors in question are human, then the presumption of competence is incorrect, by the standard of “competence” required to resist the pressures in question.
Interesting. Do you extend this to all consequentialist philosophies? They’re probably technically correct, but deontology is better for humans due to imperfect rationality?
The problem is that the sacrosanct topics (and deontological mandates) are devised by exactly the same incompetents who can’t implement trading (and consequentialist moral decisions).
Not only I, but no less than Nick Bostrom, take the view that deontology as a means of establishing boundary conditions for consequentialism is the correct approach to large-scale ethical considerations. (You can read about this in his paper Infinite Ethics [PDF link] [note that an earlier version of this paper was titled “Infinitarian Challenges to Aggregative Ethics”].)
An alternative way to come to essentially the same point—“consequentialist ethics is technically correct but ‘deontology’ is better for imperfect agents”—is rule consequentialism (and this is what makes up a large part of my own current views on ethics).
Note, by the way, that deontology is not the only available ‘crutch’, so to speak; there is also virtue ethics (which is, to a first approximation, the most natural and efficient way for human minds to implement any kind of moral rule, be it consequentialist or deontological).
(And all these are compatible: one may be act-consequentialist / world-consequentialist in principle, rule-consequentialist in theory, deontologist in overall implementation of theory, and virtue-ethicist in detailed, everyday practice. These are not contradictions, but simply the way in which the goal—ideal consequences—is achieved.)
The problem is that the sacrosanct topics (and deontological mandates) are devised by exactly the same incompetents who can’t implement trading (and consequentialist moral decisions).
Indeed not; trading, and act-consequentialist decisions in general, are implemented by individuals, whereas deontological mandates are devised by egregores (or, less poetically: they emerge via cultural—and, on a much larger scale, biological—evolution).
(In fact, and equally interestingly, this is true even on an individual level: you may devise a deontological mandate for yourself, at leisure, after consideration, drawing on all your faculties, and update it—in moments of sober reflection, following great life events, for instance—as you gain wisdom; while, on the other hand, if you make every decision, evaluate every trade, on an act-consequentialist basis, then you must be making such decisions constantly, with only the faculties available to you in the moment… and even in your best moments, your faculties are less than the sum total of that which you can bring to bear over time; how long, then, until you make a bad decision? Most people make them daily… can you sustain perfect decision-making for even a single day, much less a lifetime?)
Awesome, thank you. I think I have the crux now, and can successfully ITT the sanctity argument, or at least one aspect of it. It’s about recognizing what complexity of model one can productively follow.
One (important) caveat: what you say is true denotatively, but perhaps misleading connotatively. Remember that the degree of complexity-of-model that would need to be constructed (and comprehended) in order to apply act consequentialism directly, is not merely “large” but computationally intractable, even given all resources available in the observable universe.
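To put a rough number on “computationally intractable” (a purely illustrative back-of-the-envelope, under the toy assumption that the relevant features of the world are independent yes/no facts): a world-model with $n$ binary features already has $2^n$ distinct world-states to compare, and

$$2^{300} \approx 2 \times 10^{90} \;>\; 10^{80} \approx \text{atoms in the observable universe},$$

so direct enumeration and evaluation of outcomes is hopeless long before the model approaches anything like real-world richness.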
And then, of course, to each step of simplification, we apply the more “mundane” practical considerations, such as boundedness with respect to time and available cognitive resources, and human cognitive biases and other frailties, and so on. In this way we proceed down the chain from theory to practice, as I outlined in the grandparent comment.
Sure, agreed. Consequentialism in a limited agent (which is all of us) looks a lot like deontology, with a significant distinction: the rules are internal, not external. Each agent can (and must) pick the specific rules it thinks best implement its preferred consequences, within its constraints of knowledge and decision-making.
This distinction is illusory.
First, picking rules that implement your preferred consequences is hard. Is it entirely out of the question that one might defer selection of consequentialist rules to trusted authorities, or to processes that seem likely to have generated good rules? I think it is not; it seems quite reasonable to me.
But more importantly: however ‘external’ you consider any ethical rule to be, you are still the one who decides to follow it. If you think that the deontological rules that you must follow came from God himself, handed down to Moses on Mount Sinai, that is still a judgment that you have made. If you conclude that Kant was right, and the categorical imperative is the root of all morality, you are still the one who has come to that conclusion. However much of your rule-making you surrender to any system—however external, however authority-based—you are still the one who chose that surrender.
It may feel different, introspectively. It may feel like finding rules that are true, instead of selecting rules that are useful. But the decision, ultimately, is still yours—for there is no one else who could make it.
This may be technically true in a sense, but I disagree with the connotation. If you live in an English-speaking country, there’s a sense in which you “can” “decide” to speak only Swahili instead of English, but it would be more sensible to say that that decision has already been made for you by your society. Likewise for moral rules.
It’s not obvious to me that this is true in any significant way. Specifically, I am skeptical about the “likewise for moral rules” part of your argument; can you expand on that? How, exactly, is it likewise?
After all, if I was born and raised in an English-speaking country, then I learned English effortlessly. Learning Swahili, on the other hand, takes considerable effort, and for some people it may not even be feasible (not everyone’s good at learning foreign languages, especially without immersion). Meanwhile, selecting different moral rules requires nothing remotely approaching that much effort. Furthermore, speaking Swahili to someone who doesn’t understand it (i.e., basically everyone you ever interact with, in an English-speaking country) is tremendously counterproductive and harmful to your interests, whereas following a different set of moral rules… can be harmful, but in practice it’s often totally invisible to most of the people you interact with on a daily basis (if anything, it can be less obtrusive and less detectable by third parties than merely following the moral rules you were raised with, if you do the latter more faithfully than most members of your community!).
But perhaps an even more important point is that even if what you say is true, it’s no less true for rule-consequentialist moral rules than for deontological moral rules or virtue-ethical moral rules. Your objection, even if we accept it, does not make the distinction Dagon raised any more real.
Ok, let’s take kidney sales as a specific example. Whether it’s “each agent must decide whether to buy or sell a kidney today” or “each agent must decide whether to accept rules that allow buying or selling a kidney, and then must decide if that rule should apply to this specific situation”, the agent must decide, right?
Of course—but if the rule is not formulated so as to make it nigh-trivial to determine whether it applies to any given situation, then it’s not a very good rule, is it?
And then all the considerations I’ve already outlined in my previous comments apply.
We’re talking about humans though, not rational agents.