A deontologist will not lie even when he has built up an immense base of trust and would “win” a whole lot from the lie.
If you make a deontologist out of whole cloth with non-contradicting rules, sure. But an actual human using deontological thinking is reducible to consequentialism plus large penalties for rule-breaking. I mean, at some point the deontologist has to choose between two kinds of rule-breaking (say, between “always tell the truth” and “do not kill people, or through inaction allow people to die”), and the way to do that is by figuring out which rule is more important, which sounds an awful lot like consequentialism. (I suppose you could make rules for which rules to follow when, but that way lies making way too many rules.)
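To make the reduction concrete, here is a minimal sketch of “consequentialism plus large penalties for rule-breaking.” The rule names, penalty weights, and the `choose` helper are all hypothetical, picked just to illustrate the point:

```python
# Hypothetical penalty weights for breaking each rule.
RULE_PENALTIES = {
    "lied": 1_000,               # "always tell the truth"
    "let_someone_die": 10_000,   # "do not kill, or through inaction allow death"
}

def utility(outcome_value, violated_rules):
    """Consequentialist value of an outcome, minus large penalties
    for whatever rules the action breaks."""
    return outcome_value - sum(RULE_PENALTIES[r] for r in violated_rules)

def choose(actions):
    """Pick the highest-utility action. When every available action
    breaks some rule, this amounts to deciding which rule is 'more
    important' by comparing penalties -- i.e. consequentialism."""
    return max(actions, key=lambda a: utility(a["value"], a["violates"]))

# A forced choice where both options break a rule:
actions = [
    {"name": "lie", "value": 0, "violates": ["lied"]},
    {"name": "tell_truth", "value": 0, "violates": ["let_someone_die"]},
]
print(choose(actions)["name"])  # lying carries the smaller penalty, so the agent lies
```

The conflict between rules never has to be resolved by a meta-rule; it falls out of the penalty comparison automatically.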
(By the way, what is actually the distinction between timeless decision theory and a decision model under which all scenarios are treated as repeated even before they happen the first time?)
I believe that TDT is a formalization of the intuition that if you make a certain choice in certain circumstances, then everything that makes decisions in a similar enough manner is going to make the same choice in similar enough circumstances. That’s a complicated sentence; let’s see if I can do better:
Your current decision isn’t an isolated event. You make that decision for certain reasons—the kind of person you are, the situation you are in, the kind of logic you use, and maybe some other things that I’m missing at the moment. This decision-making process has causal influence on all other decisions that are similar enough to the decision you’re currently making. So if you want to get the right answer for certain classes of problems—say, playing the prisoner’s dilemma against a copy of you, or Newcomb’s problem—then you need a decision theory that explicitly takes this link into account. TDT is one such formal decision theory.
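The copy-of-you case can be made concrete with a toy sketch. This is not the formal TDT machinery, just an illustration of the one load-bearing idea: against an exact copy, your move and the copy’s move are produced by the same decision procedure, so only the matching outcomes are reachable.

```python
# Standard prisoner's dilemma payoffs, indexed by (my_move, their_move).
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_move():
    """Causal-style reasoning: treat the copy's move as independent of
    mine. Whatever they play, D pays at least as much as C (dominance),
    so defect."""
    if all(PAYOFF[("D", them)] >= PAYOFF[("C", them)] for them in "CD"):
        return "D"
    return "C"

def tdt_move():
    """TDT-style reasoning: the copy runs my exact decision procedure,
    so the only reachable outcomes are (C, C) and (D, D). Pick the
    better diagonal."""
    return max("CD", key=lambda me: PAYOFF[(me, me)])

print(cdt_move())  # "D" -- dominance, ignoring the dependence
print(tdt_move())  # "C" -- cooperation, once the mirroring is modeled
```

The two functions disagree precisely because one models the dependence between the decision processes and the other doesn’t, which is the class of problems the paragraph above is about.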
I mean, at some point the deontologist has to choose between two kinds of rule-breaking (say, between “always tell the truth” and “do not kill people, or through inaction allow people to die”), and the way to do that is by figuring out which rule is more important, which sounds an awful lot like consequentialism
Sorta agreed. But note that rewriting some conflicting rules into consequentialist values automatically produces the instrumental goal of “avoid getting into situations where the rules would conflict”, whereas the original deontologist might or might not have that as one of their rules.