Deontology and Virtue Ethics are reducible to counting non-obvious consequences of your actions. If you choose to lie, people are more likely to disbelieve you—so there’s a reason to follow a “no lying” rule that a naive consequentialist misses.
I don’t believe this is a reduction. A deontologist will not lie even when he has built up an immense base of trust and would “win” a whole lot from the lie. He just won’t do it, because to him it’s completely unethical.
Furthermore, the consequentialist might reason the other way around. A deontologist who won't lie might decide that he can use Exact Words or You Didn't Ask to engage in some necessary deception. A long-term consequentialist will note that actually doing so gets you a reputation as a Manipulative Bastard, which segues right into "Virtue Ethics as Timeless Decision Theory, or ethics under repeated games".
(By the way, what is actually the distinction between timeless decision theory and a decision model under which all scenarios are treated as repeated even before they happen the first time?)
EDIT: What I actually do, myself, is to sometimes lie using the Moral Equivalent of the Truth: a lie designed not to poison other people’s decision-making. Lying outright about having an errand to do instead of sleeping in (insert other minor vices here...) is more-or-less ok, but using Exact Words about making a contract and becoming a magical girl… is evil.
(Yes, that was a Madoka Magica reference.)
EDIT EDIT: Which definitely does seem consequentialist, in the limit, but it includes consequentialist reasoning about how my actions affect other people's decision-making, which then brings in Timelessness and virtue-reasoning.
A deontologist will not lie even when he has built up an immense base of trust and would “win” a whole lot from the lie.
If you make a deontologist out of whole cloth with non-contradicting rules, sure. But an actual human using deontological thinking is reducible to consequentialism plus large penalties for rule-breaking. At some point the deontologist has to choose between two kinds of rule-breaking (say, between "always tell the truth" and "do not kill people, or through inaction allow people to die"), and the way to do that is by figuring out which rule is more important, which sounds an awful lot like consequentialism. (I suppose you could make rules for which rules to follow when, but that way lies making way too many rules.)
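To make the "consequentialism plus large penalties" reduction concrete, here is a toy sketch (my illustration, not a standard model; the payoff numbers and action names are made up). The point is that once every available action breaks some rule, the agent is forced back onto comparing scores:

```python
# A "deontologist" modeled as a consequentialist whose utility function
# subtracts a large penalty for each rule an action breaks.

RULE_PENALTY = 1000  # large enough to dominate ordinary payoffs

def choose(actions):
    """Pick the action maximizing payoff minus rule-breaking penalties."""
    def score(action):
        return action["payoff"] - RULE_PENALTY * action["rules_broken"]
    return max(actions, key=score)

# When every option breaks at least one rule, the penalties cancel out
# and the agent ends up weighing rules by ordinary consequences.
actions = [
    {"name": "lie to save a life", "payoff": 100, "rules_broken": 1},
    {"name": "tell truth, let someone die", "payoff": 0, "rules_broken": 1},
]
print(choose(actions)["name"])  # "lie to save a life"
```

As long as rule-keeping options exist, the huge penalty makes the agent behave like a strict deontologist; the consequentialist machinery only becomes visible in genuine rule-conflict cases.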
(By the way, what is actually the distinction between timeless decision theory and a decision model under which all scenarios are treated as repeated even before they happen the first time?)
I believe that TDT is a formalization of the intuition that if you make a certain choice in certain circumstances, then everything that makes decisions in a similar enough manner is going to make the same choice in similar enough circumstances. That's a complicated sentence; let's see if I can do better:
Your current decision isn't an isolated event. You make that decision for certain reasons: the kind of person you are, the situation you are in, the kind of logic you use, and maybe some other things that I'm missing at the moment. This decision-making process is logically linked to all other decisions that are similar enough to the one you're currently making, even when there is no ordinary causal channel between them. So if you want to get the right answer for certain classes of problems (say, playing the prisoner's dilemma against a copy of yourself, or Newcomb's problem), you need a decision theory that explicitly takes this link into account. TDT is one such formal decision theory.
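The prisoner's-dilemma-against-a-copy case can be sketched in a few lines (a toy illustration of the intuition, not TDT itself; the payoff matrix is the standard one). Because both players run the same decision procedure, their moves are guaranteed to match, so the only reachable outcomes are the diagonal ones:

```python
# One-shot prisoner's dilemma against an exact copy of yourself.
# Standard payoffs: mutual cooperation (3,3), mutual defection (1,1),
# unilateral defection (5,0) / (0,5).

PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play_against_copy(decision_procedure):
    """Both players are copies, so both run the same procedure."""
    my_move = decision_procedure()
    copy_move = decision_procedure()  # the copy computes the same answer
    return PAYOFF[(my_move, copy_move)]

# Treating the copy's move as an independent fixed fact suggests defecting,
# but the moves are logically linked: only (C,C) or (D,D) can occur.
print(play_against_copy(lambda: "D"))  # (1, 1)
print(play_against_copy(lambda: "C"))  # (3, 3)
```

A theory that conditions on "whatever I decide, my copy decides too" picks cooperation here, which is the kind of case TDT is built to get right.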
I mean, at some point the deontologist has to choose between two kinds of rule-breaking (say, between “always tell the truth” and “do not kill people, or through inaction allow people to die”), and the way to do that is by figuring out which rule is more important, which sounds an awful lot like consequentialism
Sorta agreed. But note that rewriting some conflicting rules into consequentialist values automatically produces the instrumental goal of “avoid getting into situations where the rules would conflict”, whereas the original deontologist might or might not have that as one of their rules.