Should you punish people for wronging others, or for making the wrong call about wronging others?
This is a topic where the answer depends heavily on how generally you’re asking, and on your moral and decision framework. The key ambiguity is “should”: “What should you do?” has been an open question for millennia.
The obvious consequentialist answer is that you should do either, both, or neither, depending on circumstance and the expected net impact of your action. Other moral frameworks likely have different answers.
The signal I prefer to send on the topic, intended to encourage a mechanical, reductionist view of the universe, is that punishment should be automatic and non-judgemental, with as little speculation as possible about motive or reasoning. If the act caused harm, the punishment should be proportional to the harm. Yes, this lets luck and well-intentioned mistakes matter more than an omniscient god might prefer. But I don’t have one of those, so I prefer to eliminate the other human failings of bad judgement that come with humans making a punishment call putatively based on inference about reasoning or motivation, but actually based mostly on biases and guesses.
I’m OK with adding punishment for very high-risk behaviors that happen not to cause harm in the observed instance. I don’t have a theory for how to remove human bias from that part of the judgement, but I also don’t trust people enough to do without it.