On this view, an action is morally good if it ends up producing positive outcomes, regardless of the intentions behind it. For instance, a terrorist who accidentally foils an otherwise catastrophic terrorist plot would have performed a very ‘morally good’ action.
This seems intuitively strange to many people; it certainly does to me. Instead, ‘expected value’ seems a better basis both for making decisions and for judging the decisions made by others.
If the actual outcome of your action was positive, it was a good action. Buying the winning lottery ticket, as per your example, was a good action. Buying a losing lottery ticket was a bad action. Since we care about just the consequences of the action, the goodness of an action can only be evaluated after the consequences have been observed—at some point after the action was taken (I think this is enforced by the direction of causality, but maybe not).
So we don’t know if an action is good or not until it’s in the past. But we can only choose future actions! What’s a consequentialist to do? (Equivalently, since we don’t know whether a lottery ticket is a winner or a loser until the draw, how can we choose to buy the winning ticket and choose not to buy the losing ticket?) Well, we make the best choice under uncertainty that we can, which is to use expected values. The probability-literate person is making the best choice under uncertainty they can; the lottery player is not.
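To make “use expected values” concrete, here is a minimal sketch in Python. All the numbers (ticket price, jackpot, odds) are made up for illustration, not real lottery figures:

```python
# Expected value of a decision = sum over outcomes of (probability * value).
# All numbers here are invented for illustration, not real lottery odds.
ticket_price = 2.00
jackpot = 100_000_000
p_win = 1 / 300_000_000

ev_buy = p_win * jackpot - ticket_price  # about 1/3 - 2, i.e. roughly -1.67
ev_skip = 0.0

# The best choice under uncertainty: pick the option with the highest EV.
choice = max([("buy", ev_buy), ("skip", ev_skip)], key=lambda c: c[1])[0]
print(choice)  # the expected-value chooser skips the ticket
```

Even though buying the ticket *might* turn out to be the good action, the expected-value calculation says skipping is the better choice given what can be known before the draw.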
The next step is to say that we want as many good things to happen as possible, so “calculating expected values” is a correct way of making decisions (one that can sometimes produce bad actions, but less often than the alternatives) and “wishful thinking” is an incorrect way of making decisions.
So the probability-literate used a correct decision procedure to come to a bad action, and the lottery player used an incorrect decision procedure to come to a good action.
The last step is to say that judging past actions changes nothing about the consequences of that action, but judging decision procedures does change something about future consequences (via changing which actions get taken). Here is the value in judging a person’s decision procedures. The terrorist used a very morally wrong decision procedure to come up with a very morally good action: the act is good and the decision procedure is bad, and if we judge the terrorist by their decision procedure we influence future actions.
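The contrast between the two procedures can be simulated. This is a toy sketch with invented raffle numbers (`TICKET`, `PRIZE`, `P_WIN` are assumptions, not anything from the discussion): the always-buy “wishful” procedure occasionally produces good actions (individual wins), while the expected-value procedure never buys at all.

```python
import random

# Toy raffle (made-up numbers): a ticket costs 2 and pays 500
# with probability 1/1000, so EV of buying = 0.001 * 500 - 2 = -1.5.
TICKET, PRIZE, P_WIN = 2, 500, 1 / 1000
TRIALS = 100_000

rng = random.Random(42)
wins = sum(rng.random() < P_WIN for _ in range(TRIALS))

ev_total = 0                                  # EV procedure: never buys
wish_total = PRIZE * wins - TICKET * TRIALS   # wishful procedure: always buys

# Each win is a "good action from a bad procedure", but on average the
# wishful thinker pays about 1.5 per trial for them, so in a run this
# long the wishful total is almost surely well below zero.
print(wins, ev_total, wish_total)
```

Judging individual outcomes would praise the wishful thinker on their winning days; judging the procedure correctly predicts who does better over time.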
--
I think it’s very important for consequentialists to always remember that an action’s moral worth is evaluated on its consequences, and not on the decision theory that produced it. This means that despite your best efforts, you will sometimes make the best decision possible and still commit bad acts.
If you let it collapse—if you take the shortcut and say “making the best decision you could is all you can do”, then every decision you make is good, except for inattentiveness or laziness, and you lose the chance to find out that expected value calculations or Bayes’ theorem needs to go out the window.
If all ‘moral worth’ meant was the consequences of what happened, I just wouldn’t consider ‘moral worth’ very relevant to judging. It would seem like we’re making ‘moral worth’ into something mostly irrelevant except from a purely pragmatic standpoint.
I’m not sure that saying ‘making the best decision you could is all you can do’ is much of a shortcut. I’d imagine a lot of smart people realize that ‘making the best decision you can’ is still really, really difficult. If you act as your only judge (not just you in general, but only you at any given moment), then you may have less motivation; still, it would seem strange to me if ‘fear of being judged’ were the one thing keeping us moral, even if it turned out that such judging is technically impossible.
Also, keep in mind that in this case ‘every decision you make is “good”’, but ‘good’ now covers everything, so it becomes a neutral term. You can still learn in the future; you can say “I made the right decision at the time using what I knew, but then the results taught me some new information, and now I would know to choose differently next time”.
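That “the results taught me some new information” step is essentially Bayesian updating. A toy sketch with hypothetical numbers (a coin that is either fair or heads-biased; the priors and likelihoods are assumptions for illustration):

```python
from fractions import Fraction

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
# Hypothesis A: the coin is fair (heads with probability 1/2).
# Hypothesis B: the coin is heads-biased (heads with probability 9/10).
prior_fair = Fraction(1, 2)
prior_biased = Fraction(1, 2)

# We then observe one head.
like_fair = Fraction(1, 2)
like_biased = Fraction(9, 10)

evidence = like_fair * prior_fair + like_biased * prior_biased
post_fair = like_fair * prior_fair / evidence
post_biased = like_biased * prior_biased / evidence
print(post_fair, post_biased)  # 5/14 and 9/14
```

Betting on the fair-coin hypothesis beforehand was a reasonable decision given the 50/50 prior; after observing the head, the posterior shifts toward the biased hypothesis, and the next decision should use the updated numbers.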