To clarify: My point was that the crucial aspect is not that people observe a punishment and then infer that they should not commit crimes later. Rather, the important thing is that people, to the extent that they correctly model the rest of society and its response to their crimes, get an “internal simulation” that outputs “they will inflict disutility on you even if it’s expensive to do so, and even knowing that it failed to deter you”. And this model can only be correct and have this character if people really do punish in actual instances of crime.
In other words, to the extent that people require punishments to deter, they only require subjunctive, not causal deterrence—though obviously the latter is factored in.
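The subjunctive/causal distinction can be made concrete with a toy model (the names and payoffs below are illustrative assumptions, not anything from the thread): a prospective defector runs an internal simulation of society’s punishment policy and defects only if the simulated payoff is positive, so deterrence flows entirely from the simulated policy, never from any punishment that causally touches the agent.

```python
# Toy model: deterrence comes from the agent's *simulation* of society's
# policy, not from punishments that have already happened.
# All names and payoffs are illustrative assumptions.

CRIME_GAIN = 10    # benefit to the defector from the crime
PUNISH_HARM = 25   # disutility society inflicts if it punishes

def society_policy(crime_committed: bool) -> bool:
    """Society's commitment: punish whenever a crime occurs, even though
    it's costly and the punishment can't undo the crime."""
    return crime_committed

def agent_decides(simulate_policy) -> bool:
    """The agent simulates society's response and defects only if the
    simulated payoff is positive."""
    payoff_if_defect = CRIME_GAIN - (PUNISH_HARM if simulate_policy(True) else 0)
    return payoff_if_defect > 0

# Accurately modeling a society that really punishes deters the agent,
# even though no punishment ever causally reaches it.
print(agent_decides(society_policy))       # deterred: prints False

# Modeling a society that will no longer punish removes the deterrence,
# past punishments notwithstanding.
print(agent_decides(lambda crime: False))  # not deterred: prints True
```

Note that the model only deters when it is accurate, which is the point above: people must really punish in actual instances of crime for the simulation to have this output.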
The second half of the answer is that most people believe in justice.
That’s what I was referring to as the “otherwise-ungrounded deservedness of others [who defect] of being treated badly”—this internalization of subjunctive (and acausal) criteria feels like a desire for “justice” or “what is right” from the inside. In other words, we have causal reasons for doing things and, separate from those, acausal criteria that often conflict with them; when the acausal criteria outweigh the causal reasons, we get the feeling that “this person should be punished, even if it’s expensive, and even though the crime has already happened”.
That is generally called the “sense of justice”.
Another thought-experiment to heighten the distinction: if the President went on TV and said that starting this year, refusing to pay taxes would no longer be a crime, then the deterrence effect of having put people in jail for tax evasion would evaporate overnight. Every punishment would still have happened, but they would no longer deter future acts of the same kind.
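A minimal sketch of the asymmetry in the thought experiment (with made-up numbers): the would-be evader’s decision depends only on the policy they expect going forward, and the count of past punishments enters the calculation nowhere.

```python
# Toy decision rule for the tax thought-experiment (illustrative numbers).
# past_punishments is accepted but never used: the history of punishment
# has no causal handle on the decision once the announced policy changes.

def will_evade_taxes(expected_future_policy_punishes: bool,
                     past_punishments: int) -> bool:
    gain = 1000      # assumed benefit of evading
    penalty = 5000   # assumed disutility of jail if punished
    expected_payoff = gain - (penalty if expected_future_policy_punishes else 0)
    return expected_payoff > 0

history = 100_000  # decades of tax-evasion convictions on the books

print(will_evade_taxes(True, history))   # still deterred: prints False
print(will_evade_taxes(False, history))  # announcement kills deterrence: True
```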
Well said, but do we then conclude that we do actually value justice in itself? Or do we conclude that we value justice instrumentally? Yes, evolution designed us to care about justice for subjunctive deterrence reasons, but so what? Evolution designed all of our values instrumentally for all sorts of purposes that we may or may not care about. But that doesn’t mean we have no values. I have no idea how to answer this question, and am at a loss for how to determine whether a perceived value is terminal or instrumental, in general.
My article, “Morality as Parfitian-filtered decision theory?”, was devoted to exactly that question, and my conclusion is that justice—or at least the feeling that drives us to pursuit-of-justice actions—is an instrumental value, even though such actions cause harm to our terminal values. This is because theories that attempt to explain such “self-sacrificial” actions by positing justice (or morality, etc.) as a separate term in the agent’s utility function add complexity without corresponding explanatory power.
I skimmed the article. First, good idea. I would never have thought of that. But I do think there is a flaw. Given evolution, we would expect humans to have fairly complex utility functions and not simple utility functions. The complexity penalty for evolution + simple utility function could actually be higher than that of evolution + complicated utility function, depending on precisely how complex the simple and complicated utility functions are. For example, I assert that the complexity penalty for [evolution + a utility function with only one value (e.g. paper clips or happiness)] is higher than the complexity penalty for [evolution + any reasonable approximation to our current values].
This is only to say that a more complicated utility function for an evolved agent doesn’t necessarily imply a high complexity penalty. You could still be right in this particular case, but I’m not sure without actually being able to evaluate the relevant complexity penalties.
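The complexity-penalty point can be phrased in rough minimum-description-length terms (a sketch with made-up string lengths standing in for real complexities, which nobody here can actually compute): a hypothesis pays for the length of its utility-function spec plus the length of whatever patches are needed to reconcile it with observed behavior, so a shorter utility function can still lose overall.

```python
# Rough MDL-style sketch of the commenter's point. All "complexities"
# here are string lengths, a made-up stand-in, not real measurements.
# A hypothesis is penalized by the length of its utility-function spec
# PLUS the length of the patches needed to match observed human behavior.

def total_description_length(utility_spec: str, behavior_patches: str) -> int:
    return len(utility_spec) + len(behavior_patches)

# Single-value utility function: short spec, but observed behavior
# (grief, art, justice, parenting, ...) needs many ad-hoc patches.
simple = total_description_length(
    utility_spec="maximize(happiness)",
    behavior_patches="grief;art;justice;parenting;curiosity;loyalty;play;...",
)

# Richer utility function: longer spec, far fewer patches.
complex_ = total_description_length(
    utility_spec="maximize(weighted_sum(happiness, justice, curiosity, bonds))",
    behavior_patches="",
)

print(simple > complex_)  # the "simple" hypothesis costs more overall: True
```

Whether the real penalties come out this way is exactly the open question in the comment above; the sketch only shows why the comparison can go either direction.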
That’s a good point, and I’ll have to think about it.
I think the phrase “otherwise-ungrounded” is likely a mistake. People (and animals) conflate justice in the sense you describe, a set of subjunctive criteria, with justice in the folk sense of “these are the things which are a priori wrong and deserve punishment regardless of one’s society”. Most useful descriptions of justice need to combine and conflate these two (among other) senses into a coherent whole. Without such a combination, phrases like “unjust law” become difficult to explain.
“Otherwise-ungrounded” is not the same as “ungrounded”; it’s just that it’s not grounded in a specific benefit that “treating defectors badly” causes.