Let’s suppose that you believe you can kill someone with your bare hands (KWBH) with non-zero probability. Then I can come to you and say: “I cursed you to kill someone with your bare hands tomorrow. Pay me to lift the curse.” You are willing to pay me an arbitrary amount of money, because my showing up and talking about the curse is evidence in favor of the curse’s existence, so refusing to pay leaves a nonzero probability of KWBH. Proof of arbitrariness: suppose there is a difference in expected utility between two possible policies, one of which involves KWBH and the other doesn’t. You will choose the other policy, no matter how large the utility gap between them. That means you are willing to sacrifice an arbitrary amount of utility, which is equivalent to a willingness to spend an arbitrary amount of money.
Let’s suppose that you believe the probability of you KWBH is zero. That means you are willing to bet an arbitrary amount of money against my $1 on the condition “you will hit a person’s head with your bare hands at maximum strength for an hour and not kill them”, because you believe it’s a sure win. You hit someone in the head at maximum strength for an hour, the person dies, I get the money. The next turn depends on how you update on zero-probability events. If you don’t update, I can simply repeat the bet. If you update in some logical-induction-like manner, I can then threaten you with the curse. PROFIT
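To make the first branch concrete, here is a minimal sketch (Python; the price, the credence, and the infinite-disutility rule are all invented for illustration) of the choice the “cursed” agent faces: once KWBH carries unboundedly negative value, any nonzero credence in the curse makes paying dominate refusing, at any price.

```python
# Minimal sketch: an agent that assigns unboundedly negative value to
# any outcome involving KWBH (killing with bare hands).

KWBH_DISUTILITY = float("-inf")

def expected_utility(p_kwbh: float, utility_otherwise: float) -> float:
    """Expected utility of a policy whose probability of KWBH is p_kwbh."""
    if p_kwbh > 0:
        return KWBH_DISUTILITY  # any nonzero chance of -inf swamps the rest
    return utility_otherwise

price = 10**9    # arbitrary ransom demanded to "lift the curse"
p_curse = 1e-12  # tiny but nonzero credence that the curse is real

# On the agent's model, paying lifts the curse (no KWBH), while
# refusing leaves a tiny chance of KWBH and hence -inf expected utility.
pay = expected_utility(0.0, -price)
refuse = expected_utility(p_curse, 0.0)

assert pay > refuse  # paying beats refusing at ANY price, so the mugging works
```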
Answer inspired by this post.
The problem with this scenario is that the number of people who have a deontological rule “never kill anyone with your bare hands” is zero. There are people who have a rule that can be informally described as “never kill people with your bare hands”, and which in most situations works like that rule, but that’s different.
If anything, most people’s rules are closer to “never kill anyone with your bare hands, except for small probabilities in a Pascal’s Mugging scenario”. If you asked them what their rules were, they’d never describe it that way, of course. Normies don’t bother being precise enough to exclude low probability scenarios.
Isn’t this just pascal’s mugging? I don’t see why deontologists are more susceptible to it than consequentialists.
Deontology often implies accepting an infinite payoff or cost for following or breaking a rule. Consequentialists generally can, and should, recognize the concept of limits and the comparability of different very large or very small values.
Consequentialists can avoid pascal’s mugging by having bounded utility functions. If you add in a deontological side-constraint implemented as “rule out every action that has a nonzero possibility of violating the constraint” then that trivially rules out every action because zero is not a probability. So obviously that’s not how you’d implement it. I’m not sure how to implement it but a first-pass attempt would be to rule out actions that have, say, a >1% chance of violating the constraint. Second-pass attempt is to rule out actions that increase your credence in eventual constraint-violation-by-you by more than 1%. I do have a gut feeling that these will turn out to be problematic somehow, so I’m excited to be discussing this!
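For concreteness, here is a rough sketch of the difference between those two filters (Python; `p_violation` and `p_eventual_violation` are invented placeholders for the agent’s credences, not a worked-out proposal):

```python
THRESHOLD = 0.01  # the >1% cutoff discussed above

def first_pass_allowed(action, p_violation) -> bool:
    """First pass: rule out actions with a >1% chance of directly
    violating the constraint. p_violation(action) stands in for the
    agent's credence that taking `action` violates it."""
    return p_violation(action) <= THRESHOLD

def second_pass_allowed(action, p_eventual_violation,
                        current_credence: float) -> bool:
    """Second pass: rule out actions that raise the agent's credence
    in *eventual* constraint-violation-by-itself by more than 1%.
    p_eventual_violation(action) stands in for that credence
    conditional on taking `action`."""
    return p_eventual_violation(action) - current_credence <= THRESHOLD
```

Note that the second filter can permit an immediately risky action that barely moves the long-run credence, and forbid a harmless-looking action that predictably starts a slippery slope, which is one place the anticipated problems might show up.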
I can see two ways.

First, boring: assign bounded utilities over everything and a very large disutility to violating the constraint, such that a >1% chance of violating the constraint isn’t worth it.

Second: throw away most of the utilitarian framework and design the agent to work under rules in a limited environment; if the agent ever leaves the environment, it throws an exception and waits for your guidance.

The first is unexploitable because it’s simply a utility maximizer. The second is presumably unexploitable, because we (presumably) designed an exception for every possibility of being exploited.
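A minimal illustration of both designs (Python; the utility bound, the penalty, and the environment check are invented for the example):

```python
# First design: bounded utilities plus a large but FINITE penalty for
# violating the constraint, chosen so a >1% violation chance never pays.
U_MAX = 100.0             # all ordinary utilities lie in [-U_MAX, U_MAX]
VIOLATION_PENALTY = -1e6  # very large finite disutility, not -infinity

def bounded_eu(p_violation: float, base_utility: float) -> float:
    base = max(-U_MAX, min(U_MAX, base_utility))  # clamp to the bound
    return p_violation * VIOLATION_PENALTY + (1 - p_violation) * base

# A 2% violation chance loses to a safe do-nothing action, but a 10^-12
# chance does not swamp everything the way an infinite penalty would:
assert bounded_eu(0.02, U_MAX) < bounded_eu(0.0, 0.0)
assert bounded_eu(1e-12, U_MAX) > bounded_eu(0.0, 0.0)

# Second design: a rule-follower for a limited environment that throws
# and waits for guidance whenever it sees a state it was not built for.
class OutOfEnvironmentError(Exception):
    pass

def rule_based_act(state, rules: dict):
    if state not in rules:  # placeholder check for "left the environment"
        raise OutOfEnvironmentError("unknown state: waiting for guidance")
    return rules[state]     # otherwise follow the fixed rule
```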
Is a consequentialist who has artificially bounded their utility function still truly a consequentialist? Likewise, if you make a deontological ruleset complicated and probabilistic enough, it starts to look a lot like a utility function.
There may still be modeling and self-image differences—the deontologist considers their choices to be terminally valuable, and the consequentialist considers these as ONLY instrumental to the utility of future experiences.
Weirdly, the consequentialist DOES typically assign utility to the imagined universe-state that their experiences support, and it’s unclear why that’s all that different from the value of the experience of choosing correctly.
A consequentialist with an unbounded utility function is broken, due to pascal’s mugging-related problems. At least that’s my opinion. See Tiny Probabilities of Vast Utilities: A Problem for Longtermism? - EA Forum (effectivealtruism.org)
I agree that any deontologist can be represented as a consequentialist, by making the utility function complicated enough. I also agree that certain very sophisticated and complicated deontologists can probably be represented as consequentialists with not-too-complex utility functions.
Not sure if we are disagreeing about anything.
Well, it depends on how exactly you design the deontological mind. The case you described seems to be equivalent to “assign infinite negative value to KWBH”, from which Pascal’s mugging follows.
See my reply to Dagon.