I think a lot of the asymmetry in intuitions just comes from harm often being very easy to cause but goodness generally being very hard. Goodness tends to be fragile, such that it’s very easy for one defector to really mess things up, but very hard for somebody to add a bunch of additional value—e.g. it’s a lot easier to murder someone than it is to found a company that produces a comparable amount of consumer surplus. Furthermore, and partially as a result, causing pain tends to be done intentionally, whereas not causing goodness tends to be done unintentionally (most of the time you could have done something good but didn’t, probably you had no idea). So, our intuitions are such that causing harm is something often done intentionally, that can easily break things quite badly, and that almost anyone could theoretically do—so something people should be really quite culpable for doing. Whereas not causing goodness is something usually done unintentionally, that’s very hard to avoid, and where very few people ever succeed at creating huge amounts of goodness—so something people really shouldn’t be that culpable for failing at. But notably none of those reasons have anything to do with goodness or harm being inherently asymmetrical with respect to e.g. fundamental differences in how good goodness is vs. how bad badness is.
Yep, I agree this explains a lot of the psychological bias. Probably alongside the fact that humans have a greater capacity to feel pain/discomfort than pleasure. And also perhaps the r/K-selection thing, where humans are more risk-averse and have stronger responses to threat-like stimuli because new individuals are costly to create.
Also, punishing cruelty and harm-causing is probably more conducive to long-term societal flourishing than punishing a general lack of generosity, so group selection would favor suffering-focused ethical intuitions.