My guess is this is obvious, but IMO it seems extremely unlikely to me that bee-experience is remotely as important to care about as cow experience. Enough so that statements like this just sound approximately insane:
97% of years of animal life brought about by industrial farming have been through the honey industry (though this doesn’t take into account other insect farming).
Like, no, this isn’t how this works. This obviously isn’t how this works. You can’t add up experience hours like this. At the very least use some kind of neuron basis.
The median estimate, from the most detailed report ever done on the intensity of pleasure and pain in animals, was that bees suffer 7% as intensely as humans. The mean estimate was around 15% as intensely as people. Bees were guessed to be more intensely conscious than salmon!
If anyone remotely thinks a bee suffering is 15% (!!!!!!!!) as important as a human suffering, you do not sound like someone who has thought about this reasonably at all. It is so many orders of magnitude away from what sounds reasonable to me that I find myself wanting to look somewhere other than the arguments in things like the Rethink Priorities report (which I have read, and argued with people about for many hours, and which still sound insane to me, and IMO do not hold up), and instead towards things like there being some kind of social signaling madness where someone is trying to signal commitment to some group standard of dedication, which involves some runaway set of extreme beliefs.
Edit: And to avoid a slipping of local norms here: I am only leaving this comment now after I have seriously entertained the hypothesis that I might be wrong, that maybe there do exist good arguments for moral weights that seem crazy from where I originally stood, but no, after looking into the arguments for quite a while, they still seem crazy to me, and so now I feel comfortable moving on and trying to think about what psychological or social process produces posts like this. And still, I am hesitant about it, because many readers have probably not gone through the same journey, and I don’t want a culture of dismissing things just because they are big and would imply drastic actions.
I think that it’s pretty reasonable to think that bee suffering is plausibly similarly bad to human suffering. (Though I’ll give some important caveats to this in the discussion below.)
More precisely, I think it’s plausible that I (and others) would think on reflection[1] that the “bad” part of suffering is present in roughly the same “amount” in bees as in humans, such that suffering in both is very comparable. (It’s also plausible I’d end up thinking that bee suffering is worse due to e.g. higher clock speed.) This is mostly because I don’t strongly think that on reflection I would care about the complex aspects of the suffering or end up caring in a way which is more proportional to neuron count (though these are also plausible).
See also Luke Muehlhauser’s post on moral weights, which discusses a way of computing moral weights which implies it’s plausible that bees have similar moral weight to humans.[2]
I find the idea that we should be radically uncertain about moral-weight-upon-reflection-for-bees pretty intuitive: I feel extremely uncertain about core questions in morality and philosophy, which leaves extremely wide intervals. Upon hearing that some people put substantial moral weight on insects, my initial thought was that this was maybe reasonable but not very action relevant. I haven’t engaged with the Rethink Priorities work on moral weights and this isn’t shaping my perspective; my perspective is driven mostly by simpler and earlier views. I don’t feel very sympathetic to perspectives which are extremely confident in low moral weights (like this one) due to general skepticism about extreme confidence in most salient questions in morality.
Just because I think it’s plausible that I’ll end up with a high moral-weight-upon-reflection for bees relative to humans doesn’t mean that I necessarily think the aggregated moral weight should be high; this is because of two envelope problems. But, I think moral aggregation approaches that end up aggregating our current uncertainty in a way that assigns high overall moral weight to bees (e.g. a 15% weight like in the post) aren’t unreasonable. My off-the-cuff guess would be more like 1% if it was important to give an estimate now, but this isn’t very decision relevant from my perspective as I don’t put much moral weight on perspectives that care about this sort of thing. (To oversimplify: I put most terminal weight on longtermism, which doesn’t care about current bees, and then a bit of weight on something like common sense ethics which doesn’t care about this sort of calculation.) And, to be clear, I have a hard time imagining reasonable perspectives which put something like a >1% weight on bees without focusing on stuff other than getting people to eat less honey given that they are riding the crazy train this far.
Overall, I’m surprised by extreme confidence that a view which puts high moral weight on bees is unreasonable. It seems to me like a very uncertain and tricky question at a minimum. And, I’m sympathetic to something more like 1% (which isn’t orders of magnitude below 15%), though this mostly doesn’t seem decision relevant for me due to longtermism.
(Also, I appreciate the discussion of the norm of seriously entertaining ideas before dismissing them as crazy. But, then I find myself surprised you’d dismiss this idea as crazy when I feel like we’re so radically uncertain about the domain and plausible views about moral weights and plausible aggregations over these views end up with putting a bunch of weight on the bees.)
Separately, I don’t particularly like this post for several reasons, so don’t take this comment as an endorsement of the post overall. I’m not saying that this post argues effectively for its claims, just that these claims aren’t totally crazy.
As in, if I followed my preferred high effort (e.g. takes vast amounts of computational resources and probably at least thousands of subjective years) reflection procedure with access to an obedient powerful AI and other affordances.
Somewhat interestingly, you curated this post. The perspective expressed in the post is very similar to one that gets you substantial moral weight on bees, though two envelope problems are of course tricky.
I think we both agree that the underlying question is probably pretty confused, and importantly and relatedly, both probably agree that what we ultimately care about probably will not be grounded in the kind of analysis where you assign moral weights to entities and then sum up their experiences.
The thing that creates a strong feeling of “I feel like people are just being crazy here” in me is the following chain of logic:
1. That hedonic utilitarianism of this kind is the right choice of moral foundation,
2. then somehow thinking that, conditional on that, the methodology in the RP welfare ranges is a reasonable choice of methodology (one that mostly ignores all mechanistic evidence about how brains actually work, for what seem to me extremely bad reasons),
3. then arriving at an extreme conclusion using that methodology (despite it still admitting a bunch of saving throws and reasonable adjustments one could make to have the conclusion not come out crazy),
4. and then saying that the thing you should take away from this is to stop eating honey.
There are many additional steps here beyond the “if you take a hedonic utilitarian frame as given, what is your distribution over welfare estimates”, each one of which seems crazy to me. Together, they arrive at the answer “marginal bee experience is ~15% as important to care about as human experience”[1], which is my critique.
the last step of seeing what implications it would have on your behavior is still relevant for this, because it’s the saving throw you have for noticing when a belief implies extreme conclusions, which is one of the core feedback loops for updating your beliefs
And to be clear, the step where, even if you take it as a given, you arrive at a mean of 1% or 15% also seems crazy to me, but not alone crazy enough that I start desperately looking for answers unrelated to the logical coherence or empirical evidence of the chain of arguments that have brought us here. Luke’s post doesn’t really give an answer here, it just gives enormous ranges (though IMO not ranges with enough room at the bottom), and the basic arguments that post makes for high variance make sense.
I think we both agree that the underlying question is probably pretty confused, and importantly and relatedly, both probably agree that what we ultimately care about probably will not be grounded in the kind of analysis where you assign moral weights to entities and then sum up their experiences.
I think I narrowly agree on my moral views which are strongly influenced by longtermist-style thinking, though I think “assign weights and add experiences” isn’t way off of a perspective I might end up putting a bunch of weight on[1]. However, I do think “what moral weight should we assign bees” isn’t a notably more confused question in the context of animal welfare than “how should we prioritize between chicken welfare interventions and pig welfare interventions”. So, I think there at least exists a pretty common and broadly reasonable-ish perspective in which this question is sane.
The thing that creates a strong feeling of “I feel like people are just being crazy here” in me is the following chain of logic:
This feels a bit like a motte and bailey to me. Your original claim was “If anyone remotely thinks a bee suffering is 15% (!!!!!!!!) as important as a human suffering, you do not sound like someone who has thought about this reasonably at all. It is so many orders of magnitude away from what sounds reasonable to me”. This feels very different from claiming that the chain of logic you point out is crazy. One can totally arrive at a conclusion similar to “bee suffering is 15% as important as a human suffering” via epistemic routes different to the one you outline. I don’t think it’s good practice to dismiss a claim in the way you did (in particular calling the specific claim crazy) because someone making the claim also appears to be exhibiting a bunch of bad epistemic practices and you think they followed a specific chain of logic that you think is problematic. (I’m not necessarily saying this is what you did, just that this justification would have been bad.)
Maybe you think both “the claim in isolation is crazy” (what you originally said and what I disagree with) and “the process used to reach that claim here seems particularly crazy”. Or maybe you want to partially walk back your original statement and focus on the process (if so, it seems good to make this more explicit).
Separately, it’s worth noting that while Bentham’s Bulldog emphasizes the takeaway of “don’t eat honey”, they also do seem to be aware of and endorse other extreme conclusions of high moral weight on insects. (I wish they would also note in the post that this obviously has other more important implications than don’t eat honey!) So, I’m not sure that point (4) is that much evidence about a bad epistemic process in this particular case.
Considerations like an arbitrarily large multiverse make questions around diversity of cognitive experience more complex and make literally linear population ethics incoherent due to infinities. But, I think you pretty plausibly end up with something that roughly resembles linear aggregation via something like UDASSA.
One can totally arrive at a conclusion similar to “bee suffering is 15% as important as a human suffering” via epistemic routes different to the one you outline.
I am not familiar with any! I’ve only seen these estimates arrived at via this IMO crazy chain of logic. It’s plausible there are others, though I haven’t seen them. I also really have no candidates that don’t route at least through assumption one (hedonic utilitarianism), which I already think is very weak.
Like, I am not saying there is no way I could be convinced of this number. I do think, as a consistency check, arriving at numbers not as crazy as this one is quite important in my theory of ethics for grounding whether any of this moral reasoning checks out, so I would start off highly skeptical, but I would of course entertain arguments, and once in a while an argument might take me to a place as prima facie implausible as this one (though I think it has so far never happened in my life for something that seems this prima facie implausible, but I’ve gotten reasonably close).
Again, I think there are arguments that might elevate the assumptions of this post into “remotely plausible” territory, but there are, I am pretty sure, no arguments presently available to humanity that elevate the assumptions of this post into “reasonable to take as a given in a blogpost without extensive caveats”.
I think if someone came to me and was like “yes, I get that this sounds crazy, but I think here is an argument for why 7 bees might be more important than a human” then I would of course hear them out. I don’t think considering this as a hypothesis is crazy.
If someone comes to me and says “Look, I did not arrive at this conclusion via the usual crazy RP welfare-range multiplication insanity, but I have come to the confident conclusion that 7 bees are more important than a human, and I will now start using that as the basis for important real-world decisions I am making” then… I would hear them out, and also honestly make sure I keep my distance from them and update that they are probably not particularly good at reasoning, and if they take it really seriously, maybe a bit unhinged.
So the prior analysis weighs heavily in my mind. I don’t think we have much good foundational grounding for morality that allows one to arrive at confident conclusions of this kind, that are so counter to basically all other moral intuitions and heuristics we have, and so if anyone does, I think that alone is quite a bit of evidence that something fishy is going on.
Hmm, I guess I think “something basically like hedonic utilitarianism, at least for downside” is pretty plausible.
Maybe a big difference is that I feel like I’ve generally updated away from putting much weight on moral intuitions / heuristics except with respect to forbidding some actions because they violate norms, are otherwise uncooperative, seem like the sort of thing which would be a bad societal policy, are bad for decision theory reasons, etc. So, relatively weak cases can swing me far because I started off being quite unopinionated without putting that much weight on moral intuitions (which feel like they often come from a source mostly unrelated to what I ultimately terminally care about).
I do agree that just directly using “Rethink Priorities says 15%” without flagging relevant caveats is bad.
A shitty summary of the case I would give would be something like:
It seems plausible we should be worried about suffering in a way which doesn’t scale (that much) with the size/complexity of brains in practice. Maybe the thing which is bad about suffering is pretty simple. E.g., I don’t notice that the complexity of my thought has huge effects on my suffering as far as I can tell.
I think there is a case for some asymmetry between downside and upside with respect to complexity, at least in the regime of the biological brains we see in front of us.
If so, then maybe bees have the core suffering circuitry which causes the badness and this is pretty similar to humans.
Then, we have to aggregate this with other arguments for humans being much more important. The aggregation is super non-obvious (and naive averaging isn’t valid due to two envelope problems), but I feel like an intuition for being conservative about suffering points in favor of worrying about bee suffering if there is a chance it matters comparably to human suffering.
Overall, this doesn’t get me to 15%, more like 1% (with a bunch of the discount occurring in aggregation over different views), but 1% is still a lot. (This is all within the frame of the argument.)
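To make the two-envelope worry above concrete, here is a minimal numeric sketch (the probabilities and weights are made-up placeholders, not anyone’s actual estimates): naively taking the expectation of the bee-per-human ratio and naively taking the expectation of the human-per-bee ratio give answers that differ by orders of magnitude, so the aggregation step has to be argued for rather than read off mechanically.

```python
# Two hypothetical views about the bee : human moral-weight ratio.
# The probabilities and weights are illustrative placeholders only.
views = [
    (0.5, 0.15),    # view A: bees matter a substantial fraction as much as humans
    (0.5, 0.0001),  # view B: bees matter very little
]

# Expectation taken with "one human" as the fixed unit:
e_bee_per_human = sum(p * w for p, w in views)

# Expectation taken with "one bee" as the fixed unit:
e_human_per_bee = sum(p * (1.0 / w) for p, w in views)

print(e_bee_per_human)        # ~0.075  -> "a bee is ~7.5% of a human"
print(1.0 / e_human_per_bee)  # ~0.0002 -> "a bee is ~0.02% of a human"

# The two procedures disagree by more than two orders of magnitude: the
# answer depends on which unit you fixed before averaging, which is the
# two-envelope-style problem with naive aggregation over moral views.
```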
I can imagine different moral intuitions (e.g. intuitions more like those of Tomasik) that get out more like 15% by having somewhat different weighting. I think these seem a bit strong to me, but not totally insane.
In practice, the part of my moral views which is compelled by this sort of thing ends up focused on longtermism rather than insect welfare.
(I’m not currently planning on engaging further and I’m extremely sympathetic to you doing the same.)
I’ve generally updated away from putting much weight on moral intuitions / heuristics expect with respect to forbidding some actions because they violate norms, are otherwise uncooperative, seem like the sort of thing which would be a bad societal policy, are bad for decision theory reasons, etc.
I am repeatedly failing to parse this sentence, specifically from where it becomes italicized, and I think there’s probably a missing word. Are you avoiding putting weight on what moral intuitions expect? Did you mean except? (I hope someone who read this successfully can clarify.)
oops, I meant except. My terrible spelling strikes again.
then somehow thinking that, conditional on that, the methodology in the RP welfare ranges is a reasonable choice of methodology (one that mostly ignores all mechanistic evidence about how brains actually work, for what seem to me extremely bad reasons)
Do you have a preferred writeup for the critique of these methods and how they ignore our evidence about brain mechanisms?
[Edit: though to clarify, it’s not particularly cruxy to me. I hadn’t heard of this report and it’s not causal in my views here.]
There are lots of critiques spread across lots of forum comments, but no single report I could link to. But you can see the relevant methodology section of the RP report yourself here:
https://rethinkpriorities.org/research-area/the-welfare-range-table/
You can see they approximately solely rely on behavioral proxies. I remember there was some section somewhere in the sequence arguing explicitly for this methodology, using some “we want to make a minimum assumption analysis and anything that looks at brain internals and neuron counts would introduce more assumptions” kind of reasoning, which I always consider very weak.
I currently think neuron count is a much better basis for welfare estimates than the RP welfare ranges (though it’s still not great, and to be clear, consider hedonic utilitarianism also in general not a great foundation for ethics of any kind).
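For a sense of scale, here is the back-of-the-envelope version of a neuron-count weighting, using rough, commonly cited counts (on the order of a million neurons for a honeybee brain and roughly 86 billion for a human brain):

```python
# Rough, commonly cited neuron counts (order-of-magnitude only).
bee_neurons = 1e6       # honeybee brain: on the order of a million neurons
human_neurons = 8.6e10  # human brain: roughly 86 billion neurons

# A linear neuron-count weighting would put one bee at roughly:
print(bee_neurons / human_neurons)  # ~1.2e-05, i.e. about 0.001% of a human

# This is several orders of magnitude below the 7-15% figures quoted from
# the welfare-range report, which is why the choice of basis (behavioral
# proxies vs. something neuron-based) drives so much of the disagreement.
```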
I currently think neuron count is a much better basis for welfare estimates than the RP welfare ranges (though it’s still not great
I agree that neuron count carries some information as a proxy for consciousness or welfare, but it seems like a really bad and noisy one that we shouldn’t place much weight on. For example, in humans the cerebellum is the brain region with the largest neuron count but it has nothing to do with consciousness.
It’s not clear to me that a species which showed strong behavioural evidence of consciousness and valenced experience should have their welfare strongly discounted using neuron count.
(To be clear, I get that your main problem with RP is the hedonic utilitarianism assumption which is a fair challenge. I’m mainly challenging the appeal to neuron count.)
EDIT: Adding some citations since the comment got a reaction asking for cites.
This paper describes a living patient born without a cerebellum. Being born without a cerebellum leads to impaired motor function but has no impact on sustaining a conscious state.
Neuron counts in this paper put the cerebellum around ~70 billion neurons and the cortex (associated with consciousness) around ~15 billion neurons.
It’s not clear to me that a species which showed strong behavioural evidence of consciousness and valenced experience should have their welfare strongly discounted using neuron count.
You can’t have “strong behavioral evidence of consciousness”. At least in my theory of mind it is clear that you need to understand what is going on inside of a mind to get strong evidence.
Like, modern video game characters (without any use of AI) would also check a huge number of these “behavioral evidence” checkboxes, and really very obviously aren’t conscious or moral patients of non-negligible weight.
You also have subdivision issues. Like, by this logic you end up thinking that a swarm of fish is less morally relevant than the individual fish that compose it.
Behavioral evidence is just very weak, and the specific checkbox approach that RP took also doesn’t seem to me like it makes much sense even if you want to go down the behavioral route (in particular, in my opinion you really want to gather behavioral evidence to evaluate how much stuff there is going on in the brain of whatever you are looking at, like whether you have complicated social models and long-term goals and other things).
At least in my theory of mind it is clear that you need to understand what is going on inside of a mind to get strong evidence.
in particular, in my opinion you really want to gather behavioral evidence to evaluate how much stuff there is going on in the brain of whatever you are looking at, like whether you have complicated social models and long-term goals and other things
I agree strongly with both of the above points—we should be supplementing the behavioural picture by examining which functional brain regions are involved and whether these functional brain regions bear similarities with regions we know to be associated with consciousness in humans (e.g. the pallium in birds bears functional similarity with the human cortex).
Your original comment calls out that neuron counts are “not great” as a proxy but I think a more suitable proxy would be something like functional similarity + behavioural evidence.
(Also edited the original comment with citations.)
we know to be associated with consciousness in humans
To be clear, my opinion is that we have no idea what “areas of the brain are associated with consciousness” and the whole area of research that claims otherwise is bunk.
This comment is a longer and more articulate version of the comment that I might have written. It gets my endorsement and agreement.
Namely, I don’t think that high levels of confidence in a particular view about the “level of consciousness” or moral weight of particular animals are justified, and it especially seems incorrect to state that any particular view is obvious.
Further, it seems plausible to me that at reflective equilibrium, I would regard a pain-moment of an individual bee as approximately morally equivalent to a pain-moment of an individual human.
Strongly seconded.
Suppose that two dozen bees sting a human, and the human dies of anaphylaxis. Is the majority of the tragedy in this scenario the deaths of the bees?
I could be convinced that I have an overly-rosy view of honey production. I have no real information on it besides random internet memes, which give me an impression like ‘bees are free to be elsewhere, but stay in a hive where some honey sometimes gets taken because it’s a fair trade for a high-quality artificial hive and an indestructible protector.’ That might be propaganda by Big Bee. That might be an accurate summary of small-scale beekeepers but not of large-scale honey production. I am not sure, but I could be convinced on this point.
But the general epistemics on display here do not encourage me to view this as a more trustworthy source than internet memes.
Suppose that two dozen bees sting a human, and the human dies of anaphylaxis. Is the majority of the tragedy in this scenario the deaths of the bees?
FYI, this isn’t a good characterization of the view that I’m sympathetic to here.
The moral relevance of pain and the moral relevance of death are importantly different. The badness of pain is very simple, and doesn’t have to have much relationship to higher-order functions relating to planning, goal-tracking, or narrativizing, or relationships with others. The badness of death is tied up in all that.
I could totally believe that, at reflective equilibrium, I’ll think that if I were to amputate the limb of a bee without anesthetic, the resulting pain is morally equivalent to that of amputating a human limb without anesthetic. But I would be surprised if I come to think that it’s equally bad for a human to die and a bee to die.
Expand? This seems like a crucial and very false part of this argument.
My guess is this is obvious, but IMO it seems extremely unlikely to me that bee-experience is remotely as important to care about as cow experience.
I agree with this, but would strike the ‘extremely’. I don’t actually have gears-level models for how some algorithms produce qualia. ‘Something something, self-modelling systems, strange loops’ is not a gears-level model. I mostly don’t think a million-neuron bee brain would be doing qualia, but I wouldn’t say I’m extremely confident.
Consequently, I don’t think people who say bees are likely to be conscious are so incredibly obviously making a mistake that we have to go looking for some signalling explanation for them producing those words.
97% of years of animal life brought about by industrial farming have been through the honey industry (though this doesn’t take into account other insect farming).
This number is nonsense by the way. If you click through to the original source you’ll see that it excludes shrimp and other marine animals.
To me, this comment seems very overconfident. We have no idea what it is like to be anything other than human. I think it makes sense to use things like e.g. number of neurons as an extremely rough estimate of capacity for suffering, but that’s just because we have no good metrics to go off, and something that you can plausibly argue is maybe correlated with capacity for suffering is better than just saying “well, I guess we don’t know”.
Perhaps certain animals in certain niches experience pain much more intensely than humans, because it was adaptive in their environment. Is this true? Probably not! I have no idea! But we have no idea what other animals experience, so for all we know, it could be true, and then all of our rough estimates and approximations are completely worthless!
I just don’t think that saying things like “extremely unlikely” or implying someone hasn’t “thought about [x] reasonably at all” is either productive or particularly accurate when we’re talking about something for which we have very little well-grounded knowledge.
And just to be clear, I do think we should be prioritising based on the little information we do have. I’m not for throwing our hands up and giving in to ignorance. I just think a lot more epistemic humility is warranted around subjects like this where we really know very very little and the stakes are extremely high.
(if I’ve misunderstood you or if something I’ve said is inaccurate, please correct me!)
I just don’t think that saying things like “extremely unlikely” or implying someone hasn’t “thought about [x] reasonably at all” is either productive or particularly accurate when we’re talking about something for which we have very little well-grounded knowledge.
I agree that some amount of extreme uncertainty is appropriate, but this doesn’t mean that no conclusions are therefore insane. If someone was doing estimates that take into account extreme uncertainty, I would be much less upset! Instead the post says things like this:
If we assume very very very conservatively that a day of honey bee life is as unpleasant as a day spent attending a boring lecture, and then multiply by .15 to take into account the fact bees are probably less sentient than people
That is not a position of extreme uncertainty! And I really don’t think there exist any arguments that would collapse this uncertainty in a reasonable way for the OP here, that I just haven’t encountered.
I think a reasonable position on ethical values is extreme uncertainty. This post is not holding that position. It seems to think that it’s a conservative estimate that a day of honey bee life is 15% as bad as a bad human day.
I don’t think it’s obvious at all?
Clearly you agree you at least have to multiply by some other number. You clearly can’t make any decisions on the basis of just years of animal life (which number to choose is the content of the rest of my comment).
Also, this number isn’t even correct! RP says directly:
At any time, more shrimp are alive on farms than any other group of farmed animals.
I think the 97% number is just completely made up, or at least I have no idea where it comes from. I don’t see it following from obvious Fermi estimates, and RP research reports, which the post itself repeatedly uses as an authoritative source, directly contradict it.
I don’t think it is obvious that you have to multiply by some other number.
I don’t know how conscious experience works. Some views (such as Eliezer’s) hold that it’s binary: either a brain has the machinery to generate conscious experience or it doesn’t. That there aren’t gradations of consciousness where some brains are “more sentient” than others. This is not intuitive to me, and it’s not my main guess. But it’s on the table, given my state of knowledge.
Most moral theories, and moral folk theories, hold to the common sense claim that “pain is bad, and extreme pain is extremely bad.” There might be other things that are valuable or meaningful or bad. We don’t need to buy into hedonistic utilitarianism wholesale, to think that pain is bad.
Insofar as we care about reducing pain and it might be that brains are either conscious or not, it might totally be the case that we should be “adding up the experience hours”, when attempting to minimize pain.
And in particular, after we understand the details of the information processing involved in producing consciousness, we might think that weighting by neuron count is as dumb as weighting by “the thickness of the copper wires in the computer running an AGI.” (Though I sure don’t know, and neuron count seems like one reasonable guess amongst several.)
I mean, I agree that if you want to entertain this as one remote possibility, sure, go ahead, I am not saying morality could not turn out to be weird. But clearly you can construct arguments of similar quality for at least hundreds if not thousands or tens of thousands of distinct conclusions.
If you currently want to argue that this is true, and a reasonable assumption on which to make your purchase decisions, I would contend that yes, you are also very very confused about how ethics works.
Like, you can have a mutual state of knowledge about the uncertainty and the correct way to process that uncertainty. There are many plausible arguments for why random.org will spit out a specific number if you ask it for a random number, but it is also obvious that you are supposed to have uncertainty about what number it outputs. If someone shows up and claims to be confident that random.org will spit out a specific number next, they are obviously wrong, even if there was actually a non-trivial chance the number they were confident in will be picked.
The top-level post calculates an estimate in expectation. If you calculate something in expectation, you are integrating over your uncertainty. If you estimate that a random publicly traded company is worth 10x its ticker price, you might not be definitely wrong, but it is clear that you need to have a good argument, and if you do not have one, then you are obviously wrong.
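As a minimal sketch of what an in-expectation estimate commits you to, assuming a made-up distribution over bee moral weights purely for illustration:

```python
# A hypothetical probability distribution over "bee moral weight relative
# to a human". The probabilities and weights are illustrative only.
distribution = [
    (0.70, 0.0),    # bees aren't moral patients at all
    (0.25, 0.01),   # bees matter, but far less than humans
    (0.05, 0.15),   # bees matter a substantial fraction as much as humans
]

# An in-expectation estimate integrates over this uncertainty:
expected_weight = sum(p * w for p, w in distribution)
print(expected_weight)  # 0.01 -- dominated by the small high-weight tail

# Quoting a single in-expectation number (15%, 1%, ...) is therefore a
# summary of your whole distribution, not a way of staying agnostic:
# changing the probabilities above changes the headline number directly.
```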
Also, to be fair, most of this seems addressable with somewhat more sustainable apiculture practices. Unlike with meat, killing the bees isn’t a necessary step of the process, it’s just a side effect of carelessness or excessively cheap shortcuts. Bee-suffering-free honey would just cost a bit more and that’s it.