I also think it’s just super reasonable to eat animal products and offset with donations
The concept of offsetting evil with good does not make sense. Even if the good outweighs the evil, it would be even better to not do the evil thing, and still do the good thing.
In situations where a single act has both good and evil consequences, such as the classic trolley problem, it may make sense to calculate the net amount of good. It does not make sense when the good and the evil come from separate actions that can be chosen independently of each other.
I imagine there could be an argument along the lines of something something timeless decision theory, to exclude the choice of doing good and not evil, but I do not see what it could be.
ETA: I see there have been a few (ETA2: a lot of) disagreement votes. I can’t say much to those without any comments to go on, but here’s a diagram that expresses things as starkly as possible. You can choose any of the four boxes. Which one?
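The diagram itself is not reproduced in this text. Based on the later comments about the sign of $E$ and the "top left"/"top right" boxes, here is a minimal sketch of the four options, assuming $G$ is the value of the good act and $E > 0$ the badness of the evil act (the names and numbers are illustrative assumptions, not the original diagram):

```python
# Hypothetical reconstruction of the four-box diagram. G is the value of
# the good act, E (> 0) the badness of the evil act; each box is the net
# value of one combination of the two independently choosable acts.
G, E = 10, 3  # illustrative numbers; any G > 0 and E > 0 give the same ranking

boxes = {
    "good + evil": G - E,   # top left
    "good, no evil": G,     # top right
    "evil, no good": -E,    # bottom left
    "neither": 0,           # bottom right
}

best = max(boxes, key=boxes.get)
print(best)  # good, no evil: the good without the evil ranks highest
```

So long as the evil act is net bad at all ($E > 0$), "good, no evil" dominates, which is the point being made.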
Everything has a cost: inconvenience, taste, enjoyment, economic impacts. The argument that for some reason in the domain of animal welfare we should stop doing triage and just do everything has been discussed a lot, and responded to a lot.
I thought Richard was saying “why would the [thing you do to offset] become worth it once you’ve done [thing you want to offset]? Probably it’s worth doing or not, and probably [thing you want to offset] is bad to do or fine, irrespective of choosing the other”
Indeed. There is no linkage between the two actions in the example before us. Offsetting makes no sense in terms of utility maximisation.
Where, then, does its appeal lie? Here are two defences of offsetting which I have not seen presented, although I expect that the second one may be familiar to followers of religions that practice the rite of confession. Common to both is the idea that offsetting is done first of all for oneself, only secondarily for the world.
The principle of offsetting one’s sins (eating meat, not recycling, flying, existing) can be understood as a practice that simplifies the accounting. For every evil thing that one does, make sure to also do a greater good. One’s account is then sure to always grow, never shrink. This avoids any complex totting up of sin and virtue over longer periods. This is not about maximizing goodness, but establishing a baseline that at least ensures that one will not backslide ever deeper into sin. From such a foundation, one may then build a life of greater virtue.
The discipline of offsetting is good for the soul. The good act undertaken in the wake of an evil one is performed not merely because it is good, but as a penance for the evil, a reminder that one has fallen short. It keeps the evil act before one’s mind, to assist one to do better in future. For a prerequisite for all virtue is noticing what you are about to do and choosing, instead of noticing what you have done, when it is beyond choice.
Each of these has its own failure mode.
Offsetting to compensate for evil can become offsetting to justify evil, as if saving two lives were to give one a licence to end one.
Offsetting as penance can result in penances that accomplish no good, such as saying long series of prayers or self-flagellation.
Offsetting makes no sense in terms of utility maximisation.
Donating less than 100% of your non-essential income also makes no sense in terms of utility maximization, and yet pretty much everybody is guilty of it. What's up with that?
As it happens, people just aren’t particularly good at this utility maximization thing, so they need various crutches (like the GWWC pledge) to do at least better than most, and offsetting seems like a not-obviously-terrible crutch.
To frame it as a crutch for our irredeemably fallen nature is to accept the all-demandingness narrative of utility maximisation. We must but we can’t, we can’t but we must. I prefer to reject the demand entirely.
But that’s why offsetting makes sense. In the world as it is, people make deals with themselves that have causal influence. The factors are emotional, but those are real. We can’t do everything, so what we do is dependent on emotional factors—like an offsetting self-deal.
Offsetting makes perfect sense outside of unrealistic utilitarian absolutism.
I agree, and yet it does seem to me that self-identified EAs are better people, on average. If only there were a way to harness that goodness without skirting Wolf-Insanity quite this close...
One way of thinking about offsetting is using it to price in the negative effects of the thing you want to do. Personally, I find it confusing to navigate tradeoffs between dollars, animal welfare, uncertain health costs, cravings for foods I can’t eat, and fewer options when getting food. The convenient thing about offsets is I can reduce the decision to “Is the burger worth $x to me?”, where $x = price of burger + price of offset.
A common response to this is “Well, if you thought it was worth it to pay $y to eliminate t hours of cow suffering, then you should just do that anyway, regardless of whether you buy the burger”. I think that’s a good point, but I don’t feel like it helps me navigate the confusing-to-me tradeoff between like five different not-intuitively-commensurable considerations.
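The pricing heuristic described above can be sketched as follows; the prices, and the idea of a burger-sized offset, are illustrative assumptions rather than claims about actual offset costs:

```python
# Sketch of offsetting as pricing-in: fold the harm into the purchase
# price, then ask a single question about willingness to pay.
burger_price = 8.00
offset_price = 2.50  # assumed cost of a donation sized to offset one burger

all_in_cost = burger_price + offset_price  # x = price of burger + price of offset

def buy_burger(willingness_to_pay: float) -> bool:
    """Is the burger worth $x to me, harm included?"""
    return willingness_to_pay >= all_in_cost

print(buy_burger(12.00))  # True: worth it even with the harm priced in
print(buy_burger(9.00))   # False: worth the burger alone, not the offset too
```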
I agree that if you model people as lying along some Pareto frontier from perfectly selfish to perfectly (direct) utilitarian, then at no point on that frontier does offsetting ever make sense. However, I think most people have, and endorse having, other moral goals.
For example, a lot of the intuition for offsetting may come from believing you want to be the type of person who internalizes the (large, predictably negative) externalities of your actions, so offsetting comes from your consumption rather than altruism budget.
Though again, I agree that perfect utilitarians, or people aspiring to be perfect utilitarians, should not offset. And this generalizes also to people whose idealized behavior is best described as a linear combination of perfectly utilitarian and perfectly selfish.
The most probable intuition behind the disagreements, it appears to me, is that “a person going around doing a bunch of good things and a little bit of bad or evil things is net-positive, and we should keep him around even if we can’t fix him, while a different person doing the same amount of evil things but not ‘offsetting’ them with anything is a bigger problem.”
Responding to your confusion about disagreement votes: I think your model isn’t correctly describing how people are modelling this situation. People may believe that they can do more good from [choosing to eat meat + offsetting with donations] vs [not eating meat + offsetting with donations] because of the benefits described in the post. So you are failing to include a +I (or −I) term that factors in people’s abilities to do good (or maybe even the terminal effects of eating meat on themselves).
The good flowing directly from the evil (here the positive effect on one’s own health) just lowers the value of E. So long as it remains positive, top right is still the highest value. If the benefit is enough to make E negative (i.e. net good), then top left becomes the highest. But then the good action is not offsetting the evil. The “evil” action has already offset itself. The only thing to recommend the (other) good action is that it is good, independently of the evil.
The only sensible scenario I can come up with is if better sustenance enables one to work harder, earn more, and donate more. But that is not offsetting an unavoidable sin with a good deed, it is committing the sin to be able to do the good deed. Eating meat to give, one might call it.
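The sign argument above can be checked with a little arithmetic; the function and the numbers are an illustrative sketch, not part of the comment:

```python
# If the "evil" act's direct benefit shrinks its net badness E, the best
# choice only changes once E itself goes negative (the act becomes net good).
def best_choice(G, E):
    boxes = {"good + evil": G - E, "good only": G, "evil only": -E, "neither": 0}
    return max(boxes, key=boxes.get)

print(best_choice(G=10, E=3))   # good only: the act is still net bad, skip it
print(best_choice(G=10, E=-2))  # good + evil: the "evil" act has offset itself
```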
I’m somewhat sympathetic to this reasoning. But I think it proves too much.
For example: If you’re very hungry and walk past someone’s fruit tree, I think there’s a reasonable ethical case that it’s ok to take some fruit if you leave them some payment, if you’re justified in believing that they’d strongly prefer the payment to having the fruit. Even in cases where you shouldn’t have taken the fruit absent being able to repay them, and where you shouldn’t have paid them absent being able to take the fruit.
I think the reason for this is related to how it’s nice to have norms along the lines of “don’t leave people on-net worse-off” (and that such norms are way easier to enforce than e.g. “behave like an optimal utilitarian, harming people when optimal and benefitting people when optimal”). And then lots of people also have some internalized ethical intuitions or ethics-adjacent desires that work along similar lines.
And in the animal welfare case, instead of trying to avoid leaving a specific person worse-off, it’s about making a class of beings on-net better-off, or making a “cause area” on-net better-off. I have some ethical intuitions (or at least ethics-adjacent desires) along these lines and think it’s reasonable to indulge them.
For example: If you’re very hungry and walk past someone’s fruit tree, I think there’s a reasonable ethical case that it’s ok to take some fruit if you leave them some payment, if you’re justified in believing that they’d strongly prefer the payment to having the fruit. Even in cases where you shouldn’t have taken the fruit absent being able to repay them, and where you shouldn’t have paid them absent being able to take the fruit. … And in the animal welfare case, instead of trying to avoid leaving a specific person worse-off, it’s about making a class of beings on-net better-off, or making a “cause area” on-net better-off.
I think this is importantly different because here you are (very mildly) benefitting and harming the same person, not some more or less arbitrary class of people. So what you are doing is (coercively) trading with them. That is not the case if you harm an animal and then offset by helping some other animal.
In your fruit example, the tree owner is coerced into trading with you, but they have recourse after that. They can observe the payment, evaluate whether they prefer it to the fruit, and adjust future behaviour accordingly. That game-theoretic process could converge on mutually beneficial arrangements. In the class-of-beings, or cause-area, example, the individual that is harmed/killed doesn’t have any recourse like that. And for the individual who benefits from the trade, it is game-theoretically optimal to just keep on trading, since they are not the one who is being harmed.
(Actually, this makes me think offsetting has things in common with coercive redistribution, where you are non-consensually harming some individuals to benefit other individuals. I guess you could argue all redistribution is in fact coercive, but you could also argue some redistribution, when done by someone with legitimately derived political authority, is non-coercive.)
On the other hand, animals can’t act strategically at all, so there are even more differences. But human observers can act strategically, and could approve/disapprove of your actions, so maybe it matters more whether other humans can observe and verify your offsetting in this case, and respond strategically, than whether the affected individual can respond strategically.
Oh, I think I see what you are arguing (that you should only care about whether or not eating meat is net good or net bad; there’s no reason to factor in this other action of the donation offset).
Specifically, then, the two complaints may be:
1. You specify that $0 < E$ in your graph, where you are using $E$ to represent −1 × the amount of badness, while in reality people are modelling $E$ as negative (i.e. eating meat being net good for the world).
2. People might instead think that doing E is ‘net evil’ but also desirable for them for another, unrelated reason (e.g. ‘I also enjoy eating meat’). So here, if they only want to take net-good actions while also eating meat, they would offset it with donations. The story you outlined above, arguing that ‘The concept of offsetting evil with good does not make sense’, misses that people might be willing to make such a tradeoff.
I think I agree with what you are saying, and might be missing other reasons people are disagree-voting.
See also Self-Integrity and the Drowning Child.
I agree with you on this and wanted to say so, since so many people are downvoting you.