First, I don’t think conflating blame and “bad person” is necessarily helpful.
OK, yeah, your view of blame as social incentive (skin-in-the-game) seems superior.
The most common case is what is traditionally called “being tempted by sin”, e.g., someone procrastinating and not doing what they were supposed to do.
I agree that imposing social costs can be a useful way of reducing this, but I think we would probably have disagreements about how often and in what cases. I think a lot of cases where people blame other people for their failings are more harmful than helpful, and push people away from each other in the long term.
And don’t get me started on situations where most of the participants are only there for a paycheck, a.k.a., the real world.
It sounds like we both agree that this is a nightmare scenario in terms of creating effective teams and good environments for people, albeit common.
However, even when the primary motive is money, there’s some social glue holding things together. I recommend the book The Moral Economy, which discusses how capitalist societies rely to a large extent on the goodwill of the populace. As mutual trust decreases, transaction costs increase. The most direct effect is the cost of security; shops in different neighborhoods require different amounts of it. This is often cited as the reason the diamond industry is dominated by Hasidic Jews; they save on security costs due to the high level of trust they can have as part of a community. Some of this trust comes from imposing social costs, but some of it also comes from common goals of the community members.
The Moral Economy argues that the lesson of the impossibility theorems of mechanism design is that it is not possible to run a society on properly aligned incentives alone. There is no way to impose the right costs to get a society of purely selfish agents to behave well. Instead, a mechanism designer in the real world has to recognize, utilize, and foster people’s altruistic and otherwise pro-social tendencies. The book also presents empirical evidence that designing incentives as if people were selfish tends, in many cases, to make people act more selfishly.
So, I will try to watch out for blame being a useful social mechanism in the way you describe. I’m probably underestimating the number of cases where imposed social costs are useful precisely because they don’t end up being applied (i.e., implicit threats). At present, I still think it would be better if people were both less quick to employ blame and less concerned about other people blaming them (making more room for self-motivation).
Not exactly.
(1) What is the family of calibration curves you’re updating on? These are functions from stated probabilities to ‘true’ probabilities, so the class of possible functions is quite large. Do we want a parametric family? A non-parametric family? We would like something that is mathematically convenient, resembles typical calibration curves as closely as possible, and can still fit anomalous curves when they come up.
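For concreteness, here’s a minimal sketch of one parametric option: a two-parameter logistic recalibration family (in the spirit of Platt scaling). The parameter names and values are illustrative, not a claim about which family is right.

```python
import numpy as np

def calibration_curve(p_stated, a, b):
    """Two-parameter logistic family mapping stated to estimated 'true' probabilities.

    a = 1, b = 0 is perfect calibration; a < 1 flattens the curve toward 0.5
    (overconfidence), and b shifts it up or down (systematic bias).
    """
    p = np.clip(p_stated, 1e-6, 1 - 1e-6)
    logit = np.log(p / (1 - p))
    return 1.0 / (1.0 + np.exp(-(a * logit + b)))

# An overconfident forecaster: stated 50%, 70%, 90% map to roughly 50%, 62%, 79%.
print(calibration_curve(np.array([0.5, 0.7, 0.9]), a=0.6, b=0.0))
```

A non-parametric family (e.g., a Gaussian process or monotone spline over the unit interval) would be more flexible, at the cost of needing more data and a more carefully chosen prior.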
(2) What is the prior over this family of curves? It may not matter too much if we plan on using a lot of data, but if we want to estimate people’s calibration quickly, it would be nice to have a decent prior. This suggests a hierarchical Bayesian approach (where we estimate a good prior distribution via a higher-order prior).
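To make the updating step concrete, here’s a rough sketch using a brute-force grid posterior over the two-parameter family above, rather than a full hierarchical model. The Normal priors on (a, b) are placeholder assumptions standing in for whatever the higher-order prior would actually give us.

```python
import numpy as np

# Grid over the family's parameters (a, b), with an illustrative Gaussian prior
# centered on perfect calibration (a = 1, b = 0).
a_grid = np.linspace(0.2, 2.0, 61)
b_grid = np.linspace(-1.5, 1.5, 61)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
log_prior = -0.5 * ((A - 1.0) / 0.5) ** 2 - 0.5 * (B / 0.5) ** 2

def log_likelihood(p_stated, outcomes, A, B):
    """Bernoulli log-likelihood of the observed outcomes under each (a, b) on the grid."""
    p = np.clip(p_stated, 1e-6, 1 - 1e-6)
    logit = np.log(p / (1 - p))
    # Recalibrated probabilities, shape (len(a_grid), len(b_grid), n_observations).
    q = 1.0 / (1.0 + np.exp(-(A[..., None] * logit + B[..., None])))
    return np.sum(outcomes * np.log(q) + (1 - outcomes) * np.log(1 - q), axis=-1)

# Toy data: stated probabilities and whether each prediction came true.
p_stated = np.array([0.9, 0.8, 0.9, 0.7, 0.95])
outcomes = np.array([1, 0, 1, 1, 0])

log_post = log_prior + log_likelihood(p_stated, outcomes, A, B)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior-mean calibration parameters after a handful of predictions.
print("E[a] =", (post * A).sum(), " E[b] =", (post * B).sum())
```

In the hierarchical version, the prior here wouldn’t be hand-picked; it would itself be estimated from the population of forecasters via a higher-order prior.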
(3) As mentioned by cousin_it, we would actually want to estimate different calibration curves for different topics. This suggests adding at least one more level to the hierarchical Bayesian model, so that we can simultaneously estimate the general distribution of calibration curves in the population, the all-subject calibration curve for an individual, and the single-subject calibration curve for an individual. At this point, one might prefer to shut one’s eyes and ignore the complexity of the problem.
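Before shutting one’s eyes, though, the structure itself is not so bad. Here is a generative sketch of the three-level model, with made-up hyperparameters just to show its shape; real inference would invert this generative story (e.g., with MCMC), partially pooling sparse per-topic data toward an individual’s overall curve, and individuals toward the population.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-level generative model (all hyperparameters are illustrative):
# population distribution -> individual's all-subject (a, b) -> per-topic (a, b).
POP_MEAN = np.array([1.0, 0.0])    # population mean of (a, b)
POP_SD = np.array([0.3, 0.3])      # spread of individuals around the population
TOPIC_SD = np.array([0.15, 0.15])  # spread of an individual's topics around their overall curve

def sample_individual(n_topics):
    overall = rng.normal(POP_MEAN, POP_SD)                       # all-subject calibration
    topics = rng.normal(overall, TOPIC_SD, size=(n_topics, 2))   # per-topic calibration
    return overall, topics

overall, topics = sample_individual(n_topics=3)
print("all-subject (a, b):", overall)
print("per-topic   (a, b):", topics)
```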