I’m somewhat sympathetic to this reasoning. But I think it proves too much.
For example: If you’re very hungry and walk past someone’s fruit tree, I think there’s a reasonable ethical case that it’s ok to take some fruit if you leave them some payment, if you’re justified in believing that they’d strongly prefer the payment to having the fruit. Even in cases where you shouldn’t have taken the fruit absent being able to repay them, and where you shouldn’t have paid them absent being able to take the fruit.
I think the reason for this is related to how it’s nice to have norms along the lines of “don’t leave people on-net worse-off” (and that such norms are way easier to enforce than e.g. “behave like an optimal utilitarian, harming people when optimal and benefitting people when optimal”). And then lots of people also have some internalized ethical intuitions or ethics-adjacent desires that work along similar lines.
And in the animal welfare case, instead of trying to avoid leaving a specific person worse-off, it’s about making a class of beings on-net better-off, or making a “cause area” on-net better-off. I have some ethical intuitions (or at least ethics-adjacent desires) along these lines and think it’s reasonable to indulge them.
For example: If you’re very hungry and walk past someone’s fruit tree, I think there’s a reasonable ethical case that it’s ok to take some fruit if you leave them some payment, if you’re justified in believing that they’d strongly prefer the payment to having the fruit. Even in cases where you shouldn’t have taken the fruit absent being able to repay them, and where you shouldn’t have paid them absent being able to take the fruit. … And in the animal welfare case, instead of trying to avoid leaving a specific person worse-off, it’s about making a class of beings on-net better-off, or making a “cause area” on-net better-off.
I think this is importantly different because here you are (very mildly) benefitting and harming the same person, not some more or less arbitrary class of beings. So what you are doing is (coercively) trading with them. That is not the case if you harm an animal and then offset by helping some other animal.
In your fruit example, the tree owner is coerced into trading with you, but they have recourse after that. They can observe the payment, evaluate whether they prefer it to the fruit, and adjust future behaviour accordingly. That game-theoretic process could converge on mutually beneficial arrangements. In the class-of-beings (or cause-area) example, the individual that is harmed/killed doesn't have any recourse like that. And for the individual who benefits from the trade, it is game-theoretically optimal to just keep on trading, since they are not the one being harmed.
(Actually, this makes me think offsetting has things in common with coercive redistribution, where you are non-consensually harming some individuals to benefit other individuals. I guess you could argue all redistribution is in fact coercive, but you could also argue some redistribution, when done by someone with legitimately derived political authority, is non-coercive.)
On the other hand, animals can’t act strategically at all, so there are even more differences. But human observers can act strategically, and could approve/disapprove of your actions. So perhaps what matters more in this case is whether other humans can observe and verify your offsetting and respond strategically, rather than whether the affected individual can respond strategically.