I think we both agree that the underlying question is probably pretty confused, and importantly and relatedly, both probably agree that what we ultimately care about probably will not be grounded in the kind of analysis where you assign moral weights to entities and then sum up their experiences.
I think I narrowly agree, given my moral views, which are strongly influenced by longtermist-style thinking, though I think “assign weights and add experiences” isn’t way off from a perspective I might end up putting a bunch of weight on[1]. However, I do think “what moral weight should we assign bees” isn’t a notably more confused question in the context of animal welfare than “how should we prioritize between chicken welfare interventions and pig welfare interventions”. So, I think there at least exists a pretty common and broadly reasonable-ish perspective in which this question is sane.
The thing that creates a strong feeling of “I feel like people are just being crazy here” in me is the following chain of logic:
This feels a bit like a motte and bailey to me. Your original claim was “If anyone remotely thinks a bee suffering is 15% (!!!!!!!!) as important as a human suffering, you do not sound like someone who has thought about this reasonably at all. It is so many orders of magnitude away from what sounds reasonable to me”. This feels very different from claiming that the chain of logic you point out is crazy. One can totally arrive at conclusions similar to “bee suffering is 15% as important as a human suffering” via epistemic routes different to the one you outline. I don’t think it’s good practice to dismiss a claim in the way you did (in particular calling the specific claim crazy) because someone making the claim also appears to be exhibiting a bunch of bad epistemic practices and you think they followed a specific chain of logic that you think is problematic. (I’m not necessarily saying this is what you did, just that this justification would have been bad.)
Maybe you both think “the claim in isolation is crazy” (what you originally said and what I disagree with) and “the process used to reach that claim here seems particularly crazy”. Or maybe you want to partially walk back your original statement and focus on the process (if so, seems good to make this more explicit).
Separately, it’s worth noting that while Bentham’s Bulldog emphasizes the takeaway of “don’t eat honey”, they also do seem to be aware of and endorse other extreme conclusions of high moral weight on insects. (I wish they would also note in the post that this obviously has other more important implications than don’t eat honey!) So, I’m not sure that point (4) is that much evidence of a bad epistemic process in this particular case.
Considerations like an arbitrarily large multiverse make questions around diversity of cognitive experience more complex and make literally linear population ethics incoherent due to infinities. But I think you pretty plausibly end up with something that roughly resembles linear aggregation via something like UDASSA.
One can totally arrive at conclusions similar to “bee suffering is 15% as important as a human suffering” via epistemic routes different to the one you outline.
I am not familiar with any! I’ve only seen these estimates arrived at via this IMO crazy chain of logic. It’s plausible there are others, though I haven’t seen them. I also really have no candidates that don’t route at least through assumption one (hedonic utilitarianism), which I already think is very weak.
Like, I am not saying there is no way I could be convinced of this number. I do think that, as a consistency check, arriving at numbers not as crazy as this one is quite important in my theory of ethics for grounding whether any of this moral reasoning checks out, so I would start off highly skeptical, but I would of course entertain arguments, and once in a while an argument might take me to a place as prima facie implausible as this one (though I think it has so far never happened in my life for something that seems this prima facie implausible, but I’ve gotten reasonably close).
Again, I think there are arguments that might elevate the assumptions of this post into “remotely plausible” territory, but there are, I am pretty sure, no arguments presently available to humanity that elevate the assumptions of this post into “reasonable to take as a given in a blogpost without extensive caveats”.
I think if someone came to me and was like “yes, I get that this sounds crazy, but I think here is an argument for why 7 bees might be more important than a human” then I would of course hear them out. I don’t think considering this as a hypothesis is crazy.
If someone comes to me and says “Look, I did not arrive at this conclusion via the usual crazy RP welfare-range multiplication insanity, but I have come to the confident conclusion that 7 bees are more important than a human, and I will now start using that as the basis for important real-world decisions I am making” then… I would hear them out, and also honestly make sure I keep my distance from them and update that they are probably not particularly good at reasoning, and if they take it really seriously, maybe a bit unhinged.
So the prior analysis weighs heavily in my mind. I don’t think we have much good foundational grounding for morality that allows one to arrive at confident conclusions of this kind, that are so counter to basically all other moral intuitions and heuristics we have, and so if anyone does, I think that alone is quite a bit of evidence that something fishy is going on.
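For concreteness, the two figures in this exchange are linked by simple arithmetic under a naive linear-weight reading (a toy calculation only, taking the 15% figure as given rather than endorsing it):

```python
# Toy arithmetic only: under a naive linear reading, a per-individual
# bee welfare weight of w (relative to a human) implies that roughly
# 1/w bees "add up to" one human.

w = 0.15                 # the 15% figure discussed above (taken as given)
bees_per_human = 1 / w   # ≈ 6.7, i.e. the "7 bees vs. one human" framing
print(f"{bees_per_human:.1f} bees ~ 1 human under a {w:.0%} weight")
```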
Hmm, I guess I think “something basically like hedonic utilitarianism, at least for downside” is pretty plausible.
Maybe a big difference is that I feel like I’ve generally updated away from putting much weight on moral intuitions / heuristics except with respect to forbidding some actions because they violate norms, are otherwise uncooperative, seem like the sort of thing which would be a bad societal policy, are bad for decision theory reasons, etc. So, relatively weak cases can swing me far because I started off being quite unopinionated without putting that much weight on moral intuitions (which feel like they often come from a source mostly unrelated to what I ultimately terminally care about).
I do agree that just directly using “Rethink Priorities says 15%” without flagging relevant caveats is bad.
A shitty summary of the case I would give would be something like:
It seems plausible we should be worried about suffering in a way which doesn’t scale (that much) with the size/complexity of brains in practice. Maybe the thing which is bad about suffering is pretty simple. E.g., I don’t notice that the complexity of my thought has huge effects on my suffering as far as I can tell.
I think there is a case for some asymmetry between downside and upside with respect to complexity, at least in the regime of the biological brains we see in front of us.
If so, then maybe bees have the core suffering circuitry which causes the badness and this is pretty similar to humans.
Then, we have to aggregate this with other arguments for humans being much more important. The aggregation is super non-obvious (and naive averaging isn’t valid due to two envelope problems; a toy illustration of this is sketched below), but I feel like an intuition for being conservative about suffering points in favor of worrying about bee suffering if there is a chance it matters comparably to human suffering.
Overall, this doesn’t get me to 15%, more like 1% (with a bunch of the discount occurring in aggregation over different views), but 1% is still a lot. (This is all within the frame of the argument.)
I can imagine different moral intuitions (e.g. intuitions more like those of Tomasik) that get to more like 15% by having somewhat different weighting. I think these seem a bit strong to me, but not totally insane.
In practice, the part of my moral views which is compelled by this sort of thing ends up focused on longtermism rather than insect welfare.
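Here is the toy illustration of the two-envelope point mentioned above. The two "views" and their weights are entirely made-up placeholders (not estimates anyone in this thread has argued for); the point is only that the answer you get from "just average the views" depends on which species' units you average in.

```python
# Toy illustration of why naive averaging across moral views is unit-dependent
# (the "two envelope" style problem mentioned above). All weights are made-up
# placeholders, not estimates anyone in this thread endorses.

views = {
    "suffering-is-simple view": 0.15,  # hypothetical bee weight, in human units
    "brain-size-scaling view": 1e-6,   # hypothetical bee weight, in human units
}

# Procedure 1: average the bee's weight measured in human units.
avg_in_human_units = sum(views.values()) / len(views)

# Procedure 2: average the human's weight measured in bee units, then invert.
avg_human_in_bee_units = sum(1 / w for w in views.values()) / len(views)
implied_bee_weight = 1 / avg_human_in_bee_units

print(f"averaging in human units: bee weight ~ {avg_in_human_units:.1e}")   # ~7.5e-02
print(f"averaging in bee units:   bee weight ~ {implied_bee_weight:.1e}")   # ~2.0e-06
# The two procedures disagree by a factor of tens of thousands, so the choice
# of units (rather than the evidence) drives the answer -- which is why the
# aggregation step can't just be naive averaging.
```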
(I’m not currently planning on engaging further and I’m extremely sympathetic to you doing the same.)
I’ve generally updated away from putting much weight on moral intuitions / heuristics expect with respect to forbidding some actions because they violate norms, are otherwise uncooperative, seem like the sort of thing which would be a bad societal policy, are bad for decision theory reasons, etc.
I am repeatedly failing to parse this sentence, specifically from where it becomes italicized, and I think there’s probably a missing word. Are you avoiding putting weight on what moral intuitions expect? Did you mean except? (I hope someone who read this successfully can clarify.)
oops, I meant except. My terrible spelling strikes again.