No! When we are explicitly talking about emulating one ethical system in another, a successful conversion is not a tautological failure just because it succeeds.
2- A fairly common deontological rule is “Don’t murder an innocent, no matter how great the benefit.” Take the following scenario:
This is not a counter-example. It doesn’t even seem to be an especially difficult scenario. I’m confused.
-A has the choice to kill 1 innocent to stop B killing 2 innocents, when B’s own motive is to prevent the death of 4 innocents. B has no idea about A, for simplicity’s sake.
Ok. So when A is replaced with ConsequentialistA, ConsequentialistA will have a utility function which happens to systematically rank world-histories in which ConsequentialistA executes the decision “intentionally kill innocent” at time T as lower than all world-histories in which ConsequentialistA does not execute that decision (but which are identical up until time T).
Your conversion would have “Killing innocents intentionally” as an evil, and thus A would be obliged to kill the innocent.
No, that would be a silly conversion. If A is a deontological agent that adheres to the rule “never kill innocents intentionally”, then ConsequentialistA will always rate world-histories descending from this decision point in which it kills innocents lower than those in which it doesn’t. It doesn’t kill the innocent to stop B.
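To make that concrete, here is a minimal sketch of the sort of contrived utility function that does the job. Everything in it is illustrative: the representation of world-histories as lists of (time, agent, action) records, the action label intentionally_kill_innocent, and the function name are all stand-ins I am inventing for the sketch, not anything specified above.

```python
# Purely illustrative sketch: world-histories as lists of (time, agent, action)
# records. Reproduces the deontological rule "never kill innocents intentionally"
# inside a consequentialist ranking over whole world-histories.

def utility(history, self_name="ConsequentialistA"):
    """Rate any world-history in which ConsequentialistA itself executes the
    decision 'intentionally_kill_innocent' strictly lower than any history in
    which it does not. Nothing any other agent (including B) does contributes
    to the score."""
    violated = any(agent == self_name and action == "intentionally_kill_innocent"
                   for (_time, agent, action) in history)
    return -1.0 if violated else 0.0

# The history where ConsequentialistA kills the one innocent to stop B is rated
# below the history where it refrains, no matter what B goes on to do.
history_kill = [(0, "ConsequentialistA", "intentionally_kill_innocent"),
                (1, "B", "refrain")]
history_refrain = [(0, "ConsequentialistA", "refrain"),
                   (1, "B", "intentionally_kill_innocent"),
                   (2, "B", "intentionally_kill_innocent")]
assert utility(history_kill) < utility(history_refrain)
```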
I get the impression that you are assuming ConsequentialistA is trying to rank world-histories as if B’s decision matters. It doesn’t. In fact, the only aspects of the world-histories that ConsequentialistA cares about at all are which decision ConsequentialistA itself makes at a given time and with what information it has available. Decisions are things that occur within physics, so when evaluating world-histories according to some utility function a VNM-consequentialist can take that detail into account. In this case it takes no other detail into account, and even among such details those later in time are rated as infinitesimal in significance compared to earlier decisions.
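One way to cash out “later in time are rated as infinitesimal in significance compared to earlier decisions” is a lexicographic ordering over ConsequentialistA’s own decisions, earliest first. Again a sketch only, under the same illustrative (time, agent, action) representation as above; the helper name and encoding are my own assumptions.

```python
# Same illustrative representation: world-histories as lists of
# (time, agent, action) records.

def ranking_key(history, self_name="ConsequentialistA"):
    """Lexicographic key over ConsequentialistA's own decisions, earliest
    first (1 = complied with the rule at that decision point, 0 = violated it).
    Under tuple comparison an earlier violation dominates everything that
    happens later, so later decisions are, in effect, infinitesimally
    significant compared to earlier ones."""
    own = sorted((time, action) for (time, agent, action) in history
                 if agent == self_name)
    return tuple(0 if action == "intentionally_kill_innocent" else 1
                 for (_time, action) in own)

# Usage (assuming both candidate histories contain the same decision points
# for ConsequentialistA): the preferred history is the one with the larger key.
# preferred = max(history_a, history_b, key=ranking_key)
```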
You have no doubt noticed that the utility function alluded to above seems contrived to the point of utter ridiculousness. This is true. It is also inevitable. From the perspective of a typical consequentialist ethic we should expect a typical deontological value system to be utterly insane to the point of being outright evil. A pure and naive consequentialist, on encountering his first deontologist, may well say “What the F@#%? Are you telling me that of all the things that ever exist or occur in the whole universe, across all of space and time, the only consequence that matters to you is what your decision is in this instant? Are you for real? Is your creator trolling me?” We’re just considering that viewpoint in the form of the utility function it would take to make it happen.
Alright, conceded.