Don’t Eat Honey
Crosspost from my blog.
(I think this is a pretty important article so I’d appreciate you sharing and restacking it—thanks!)
There are lots of people who say of themselves “I’m vegan except for honey.” This is a bit like someone saying “I’m a law-abiding citizen, never violating the law, except sometimes I’ll bring a young boy to the woods and slay him.” These people abstain from all the animal products except honey, even though honey is by far the worst of the commonly eaten animal products.
Now, this claim sounds outrageous. Why do I think it’s worse to eat honey than beef, eggs, chicken, dairy, and even foie gras? Don’t I know about the months-long torture process needed to fatten up ducks sold for foie gras? Don’t I know about the fact that they grind up baby male chicks in the egg industry and keep the females in tiny cages too small to turn around in? Don’t I know, don’t I know, don’t I know?
Indeed I do. I am no fan of these animal products. I fastidiously avoid eating them. In fact, I think that factory farming is a horror of unprecedented proportions, a crime, a tragedy, an embarrassment, a work of Satan himself that induces both cruelty and wickedness in those involved and perpetrates suffering on a scale so vast it can scarcely be fathomed. I can be accused of many things, but being a fan of most animal products is not one of them.
But I assure you, honey is worse (at least in expectation).
If you eat a kilogram of beef, you’ll cause about an extra 2 days of factory farming. It’s 3 days for pork, 14 for turkey, 23 for chicken, and 31 for eggs. In contrast, if you eat a kg of honey, you’ll cause over 200,000 days of bee farming. Of all the farming years brought about by the honey, chicken, cow, sheep, turkey, duck, pig, and goat industries combined, 97% come from honey.
If honey is bad, therefore, it is likely to be very bad! If we assume a day of bee life is only .1% as bad in absolute terms as a day of chicken life, honey is still many times worse than eating chicken (at least, if you eat similar amounts). As we’ll see, taking into account serious estimates of suffering caused makes honey seem many times worse than all other animal products, so that your occasional honey consumption could very well be worse than all the rest of your consumption of animal products combined.
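To make the expectation arithmetic explicit, here is a minimal sketch (Python; the per-kilogram figures are the ones above, and the 0.1% weighting is the illustrative assumption from this paragraph):

```python
# Days of farming caused per kilogram consumed (figures from above).
days_per_kg = {"beef": 2, "pork": 3, "turkey": 14,
               "chicken": 23, "eggs": 31, "honey": 200_000}

# Illustrative assumption: a day of bee life is only 0.1% as bad as a day of chicken life.
bee_badness_vs_chicken = 0.001

honey_in_chicken_days = days_per_kg["honey"] * bee_badness_vs_chicken  # 200.0
print(honey_in_chicken_days / days_per_kg["chicken"])  # ~8.7x worse than chicken, kg for kg
```

Even under that extreme discount, honey comes out several times worse than chicken per kilogram.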
Let’s first establish that bees in the honey industry do not live good lives. First of all, their lives are very short. They live just a few weeks. They die painfully. So even putting aside grievous industry abuse, their lives aren’t likely to be great. Predation, starvation, succumbing to disease, and wear and tear are all common.
Second of all, the honey industry treats bees unimaginably terribly (most of the points I make here are drawn from the Rethink Priorities essay I just linked). They’re mostly kept in artificial structures that are routinely inspected in ways that are very stressful for the bees, who react as if the hive is under attack. Often, bees sting in defense and die as a result. In order to prevent this, the industry uses a process called smoking—lighting a fire and sending smoke into the hives to prevent alarm pheromones from being detected and the bees from being (beeing) sent into a frenzy. Sometimes, however, smoking melts the wings of the bees. Reassembly of the hive after inspections often crushes bees to death.
These structures, called Langstroth hives, also have poor thermal insulation, increasing the risk of bees freezing to death or overheating. About 30% of hives die off during the winter, meaning this probably kills about 8 billion bees in the U.S. alone every single year. The industry also keeps the bees crammed together, leading to infestations of harmful parasites.
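For what it’s worth, the ~8 billion figure is consistent with a back-of-the-envelope check; the colony count and colony size below are my assumed inputs, not numbers from the post:

```python
# Rough fermi check on the ~8 billion estimate. Inputs are assumptions:
us_managed_colonies = 2_700_000   # approximate number of managed U.S. colonies
bees_per_colony = 10_000          # order-of-magnitude winter colony population
winter_loss_rate = 0.30           # share of hives lost over winter (from the post)

print(f"{us_managed_colonies * bees_per_colony * winter_loss_rate:,.0f}")  # ~8,100,000,000
```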
Oftentimes, beekeepers take too much honey and leave all of the bees to starve to death. This is a frequent cause of the mass bee die-offs that, remember, cause about a third of bee colonies not to survive the winter. Because beekeepers take honey, the bees’ main source of food, bees are left chronically malnourished, leading to higher risk of death, weakness, and disease. Bees in the commercial honey industry generally lack the ability to forage, which exacerbates nutrition problems.
Bees also undergo unpleasant transport conditions. More than half of bee colonies are transported at some point. Tragically, “bees from migratory colonies have a shorter lifespan and higher levels of oxidative stress than workers at stationary apiaries.” The transport process is very stressful for bees, just as it is for other animals. It also, oddly, leaves bees with underdeveloped food glands, perhaps due to vibration during transport. Transport vehicles are often poorly ventilated, leading to bees overheating or freezing to death. Transport also brings bees from many different colonies together, leading to rapid spread of disease.
Honey bees are often afflicted by parasites, poisoned with pesticides, and killed in other ways. Queen bees are routinely killed years before they’d die naturally, have their wings clipped, and are stressfully and invasively artificially inseminated. This selective breeding leaves bees more efficient commercially but with lower welfare levels than they’d otherwise have. Often bees are killed intentionally in the winter because it’s cheaper than keeping them around—by diesel, petrol, cyanide, freezing, drowning, and suffocation.
So, um, not great!
In short, bees are kept in unpleasant, artificial conditions, where a third of the hives die off during the winter from poor insulation—often being baked alive or freezing to death. They’re overworked and left chronically malnourished, all while riddled with parasites and subject to invasive and stressful inspections. And given the profound extent to which the honey industry brings invasive disease to wild bees and crowds out other pollinators, the net environmental impact is relatively unclear. The standard notion that honey should be eaten to preserve bees is a vast oversimplification.
Thus, if you eat even moderate amounts of honey, you cause extremely large numbers of bees to experience extremely unpleasant fates for extremely long times. If bees matter even negligibly, this is very bad!
Indeed, bees seem to matter a surprising amount. They are far more cognitively sophisticated than most other insects, having about a million neurons—far more than our current president. Bees make complex tradeoffs between pain and reward, display pessimism, show recognition of their bodies, make transitive inference (which some philosophers don’t do), and dream. Rethink Priorities notes bees have been shown to display every behavioral proxy of consciousness, including:
Displaying individual personality.
Foregoing temporary benefit for greater long term reward.
Not acting on one’s impulses.
Exhibiting a pessimism bias (expecting, after being exposed to new positive and negative stimuli at an equal rate, that the next stimulus will probably be harmful).
Skill at navigating.
Making tradeoffs between pain and gain.
Recognizing numbers (bees rewarded when shown, say, four things, even of different types, learned to get excited when seeing four things).
Problem solving.
Responding cautiously to novel experiences.
Quickly identifying when some reward conditioning has been reversed (for instance, if a creature is initially rewarded when a bell rings and later shocked when it rings, it quickly learns to dread the bell).
Learning from others.
Mentally representing where in space other creatures are.
Discounting rewards longer in the future.
Using tools to manipulate a ball.
Judging which of two things it regards as more likely to happen (bees opt out of difficult trials, in favor of easy ones, to try to get a reward).
Being anxious.
Learning from pain.
Fidgeting in response to stress.
Parental care.
Being afraid.
Being helpful.
Self medicating.
Having their responses modified by painkillers.
Assessing the relative value of different nectars and other potential rewards.
Disliking particular tastes.
The median estimate, from the most detailed report ever done on the intensity of pleasure and pain in animals, was that bees suffer 7% as intensely as humans. The mean estimate was around 15% as intensely as people. Bees were guessed to be more intensely conscious than salmon!
If we assume conservatively that a bee’s life is 10% as unpleasant as chicken life, and then downweight it by the relative intensity of their suffering, then consuming a kg of honey is over 500 times worse than consuming a kg of chicken! And these estimates were fairly conservative. I think it’s more plausible that eating honey is thousands of times worse than eating comparable amounts of chicken, which is itself over a dozen times worse than eating comparable amounts of beef. If we assume very, very, very conservatively that a day of honey bee life is as unpleasant as a day spent attending a boring lecture, and then multiply by .15 to take into account the fact that bees are probably less sentient than people, eating a kg of honey causes about as much suffering as forcing a person to attend boring lectures continuously for 30,000 days. That’s about an entire lifetime of a human, spent entirely on drudgery. That’s like being forced to read an entire Curtis Yarvin article from start to finish. And that is wildly conservative.
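Spelling out that arithmetic under the paragraph’s stated assumptions (the further intensity discount behind the 500x chicken figure is left implicit in the post, so the first number below is the ratio before that discount):

```python
bee_days_per_kg_honey = 200_000           # from the earlier per-kg estimate

# Assumption 1: a bee day is 10% as unpleasant as a chicken day.
chicken_equivalent_days = bee_days_per_kg_honey * 0.10        # 20,000
print(chicken_equivalent_days / 23)       # ~870x chicken, per kg, before the intensity discount

# Assumption 2: a bee day is as unpleasant as one boring-lecture day,
# scaled by 0.15 for bees' lower intensity of experience.
lecture_days = bee_days_per_kg_honey * 0.15                   # 30,000 days
print(lecture_days / 365.25)              # ~82 years: roughly a full human lifetime
```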
I feel I’ve already repeated my shtick often enough about the badness of pain being because of how it feels, so I won’t repeat it in detail. Headaches are bad because they hurt, not (entirely at least) because the people having them are smart. Causing staggeringly, mind-blowingly large quantities of animal pain is bad because pain is bad. Unpleasant experiences are unpleasant. And while in practice we don’t take bee interests seriously, they’re complex, likely able to suffer, and surprisingly intelligent. It’s not okay to mass starve and roast such creatures just because they’re small. If you wouldn’t be fine doing such things to larger creatures with similar behavior, you shouldn’t be fine doing them to bees.
So don’t eat honey! If you eat honey, you are causing staggeringly large amounts of very intense suffering. Eating honey is many times worse than eating other animal products, which are themselves bad enough. If you want to make an easy change to your diet to prevent a lot of the suffering that you cause, please, for the love of God, avoid honey.
(You wouldn’t hurt this little guy, would you?)
My guess is this is obvious, but IMO it seems extremely unlikely to me that bee-experience is remotely as important to care about as cow experience. Enough to make statements like the ones in this post just sound approximately insane.
Like, no, this isn’t how this works. This obviously isn’t how this works. You can’t add up experience hours like this. At the very least use some kind of neuron basis.
If anyone remotely thinks a bee suffering is 15% (!!!!!!!!) as important as a human suffering, you do not sound like someone who has thought about this reasonably at all. It is so many orders of magnitude away from what sounds reasonable to me that I find myself wanting to look somewhere other than the arguments in things like the Rethink Priorities report (which I have read, argued with people about for many hours, and still find insane, and which IMO does not hold up), and instead toward things like some kind of social signaling madness, where someone is trying to signal commitment to a group standard of dedication that involves a runaway set of extreme beliefs.
Edit: And to avoid a slipping of local norms here: I am only leaving this comment now after I have seriously entertained the hypothesis that I might be wrong, that maybe there do exist good arguments for moral weights that seem crazy from where I started. But no, after looking into the arguments for quite a while, they still seem crazy to me, and so now I feel comfortable moving on and trying to think about what psychological or social process produces posts like this. And still, I am hesitant about it, because many readers have probably not gone through the same journey, and I don’t want a culture of dismissing things just because they are big and would imply drastic actions.
I think that it’s pretty reasonable to think that bee suffering is plausibly similarly bad to human suffering. (Though I’ll give some important caveats to this in the discussion below.)
More precisely, I think it’s plausible that I (and others) would think, on reflection[1], that the “bad” part of suffering is present in roughly the same “amount” in bees as in humans, such that suffering in both is very comparable. (It’s also plausible I’d end up thinking that bee suffering is worse due to e.g. higher clock speed.) This is mostly because I don’t strongly think that on reflection I would care about the complex aspects of the suffering, or end up caring in a way which is more proportional to neuron count (though these are also plausible).
See also Luke Muehlhauser’s post on moral weights, which also discusses a way of computing moral weights that implies it’s plausible that bees have similar moral weight to humans.[2]
I find the idea that we should be radically uncertain about moral-weight-upon-reflection-for-bees pretty intuitive: I feel extremely uncertain about core questions in morality and philosophy which leaves extremely wide intervals. Upon hearing that some people put substantial moral weight on insects, my initial thought was that this was maybe reasonable but not very action relevant. I haven’t engaged with the Rethink Priorities work on moral weights and this isn’t shaping my perspective; my perspective is driven by mostly simpler and earlier views. I don’t feel very sympathetic to perspectives which are extremely confident in low moral weights (like this one) due to general skepticism about extreme confidence in most salient questions in morality.
Just because I think it’s plausible that I’ll end up with a high moral-weight-upon-reflection for bees relative to humans doesn’t mean that I necessarily think the aggregated moral weight should be high; this is because of two envelope problems. But, I think moral aggregation approaches that end up aggregating our current uncertainty in a way that assigns high overall moral weight to bees (e.g. a 15% weight like in the post) aren’t unreasonable. My off-the-cuff guess would be more like 1% if it was important to give an estimate now, but this isn’t very decision relevant from my perspective as I don’t put much moral weight on perspectives that care about this sort of thing. (To oversimplify: I put most terminal weight on longtermism, which doesn’t care about current bees, and then a bit of weight on something like common sense ethics which doesn’t care about this sort of calculation.) And, to be clear, I have a hard time imagining reasonable perspectives which put something like a >1% weight on bees without focusing on stuff other than getting people to eat less honey given that they are riding the crazy train this far.
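(For readers unfamiliar with the two envelope problem in this context, here is a toy illustration with made-up credences, not the commenter’s numbers: the same uncertainty yields wildly different aggregate weights depending on which species’ units you average in.)

```python
# Toy two-envelope illustration. Made-up credences over a bee's moral weight:
# 50%: a bee matters as much as a human; 50%: a bee matters 1/1000 as much.
credences = [0.5, 0.5]
bee_weight_in_human_units = [1.0, 0.001]

# Averaging in human units: a bee is "worth" ~0.5 humans.
avg_human_units = sum(c * w for c, w in zip(credences, bee_weight_in_human_units))

# Averaging in bee units: a human is "worth" ~500.5 bees, i.e. a bee is ~0.002 humans.
avg_bee_units = 1 / sum(c / w for c, w in zip(credences, bee_weight_in_human_units))

print(avg_human_units, avg_bee_units)  # 0.5005 vs ~0.002: a ~250x gap from the same credences
```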
Overall, I’m surprised by extreme confidence that a view which puts high moral weight on bees is unreasonable. It seems to me like a very uncertain and tricky question at a minimum. And, I’m sympathetic to something more like 1% (which isn’t orders of magnitude below 15%), though this mostly doesn’t seem decision relevant for me due to longtermism.
(Also, I appreciate the discussion of the norm of seriously entertaining ideas before dismissing them as crazy. But, then I find myself surprised you’d dismiss this idea as crazy when I feel like we’re so radically uncertain about the domain and plausible views about moral weights and plausible aggregations over these views end up with putting a bunch of weight on the bees.)
Separately, I don’t particularly like this post for several reasons, so don’t take this comment as an endorsement of the post overall. I’m not saying that this post argues effectively for its claims, just that these claims aren’t totally crazy.
As in, if I followed my preferred high effort (e.g. takes vast amounts of computational resources and probably at least thousands of subjective years) reflection procedure with access to an obedient powerful AI and other affordances.
Somewhat interestingly, you curated this post. The perspective expressed in the post is very similar to one that gets you substantial moral weight on bees, though two envelope problems are of course tricky.
I think we both agree that the underlying question is probably pretty confused, and importantly and relatedly, both probably agree that what we ultimately care about probably will not be grounded in the kind of analysis where you assign moral weights to entities and then sum up their experiences.
The thing that creates a strong feeling of “I feel like people are just being crazy here” in me is the following chain of logic:
That hedonic utilitarianism of this kind is the right choice of moral foundation,
then somehow thinking that conditional on that the methodology in the RP welfare ranges is a reasonable choice of methodology (one that mostly ignores all mechanistic evidence about how brains actually work, for what seem to me extremely bad reasons),
then arriving at an extreme conclusion using that methodology (despite it still admitting a bunch of saving throws and reasonable adjustments one could make to have the conclusion not come out crazy),
and then saying that the thing you should take away from this is to stop eating honey.
There are many additional steps here beyond the “if you take a hedonic utilitarian frame as given, what is your distribution over welfare estimates”, each one of which seems crazy to me. Together, they arrive at the answer “marginal bee experience is ~15% as important to care about as human experience”[1], which is my critique.
the last step of seeing what implications it would have on your behavior is still relevant for this, because it’s the saving throw you have for noticing when a belief implies extreme conclusions, which is one of the core feedback loops for updating your beliefs
And to be clear, the step where even if you take all that as a given you arrive at a mean of 1% or 15% also seems crazy to me, but not alone crazy enough that I start desperately looking for answers unrelated to the logical coherence or empirical evidence of the chain of arguments that brought us here. Luke’s post doesn’t really give an answer here; it just gives enormous ranges (though IMO not ranges with enough room at the bottom), and the basic arguments that post makes for high variance make sense.
I think I narrowly agree on my moral views which are strongly influenced by longtermist-style thinking, though I think “assign weights and add experiences” isn’t way off of a perspective I might end up putting a bunch of weight on[1]. However, I do think “what moral weight should we assign bees” isn’t a notably more confused question in the context of animal welfare than “how should we prioritize between chicken welfare interventions and pig welfare interventions”. So, I think there at least exists a pretty common and broadly reasonable-ish perspective in which this question is sane.
This feels a bit like a motte and bailey to me. Your original claim was “If anyone remotely thinks a bee suffering is 15% (!!!!!!!!) as important as a human suffering, you do not sound like someone who has thought about this reasonably at all. It is so many orders of magnitude away from what sounds reasonable to me”. This feels very different from claiming that the chain of logic you point out is crazy. One can totally arrive at conclusions similar to “bee suffering is 15% as important as a human suffering” via epistemic routes different from the one you outline. I don’t think it’s good practice to dismiss a claim in the way you did (in particular calling the specific claim crazy) because the person making it also appears to be exhibiting a bunch of bad epistemic practices and you think they followed a specific chain of logic that you think is problematic. (I’m not necessarily saying this is what you did, just that this justification would have been bad.)
Maybe you both think “the claim in isolation is crazy” (what you originally said and what I disagree with) and “the process used to reach that claim here seems particularly crazy”. Or maybe you want to partially walk back your original statement and focus on the process (if so, seems good to make this more explicit).
Separately, it’s worth noting that while Bentham’s Bulldog emphasizes the takeaway of “don’t eat honey”, they also do seem to be aware of and endorse other extreme conclusions of high moral weight on insects. (I wish they would also note in the post that this obviously has other, more important implications than “don’t eat honey”!) So, I’m not sure that point (4) is that much evidence about a bad epistemic process in this particular case.
Considerations like an arbitrarily large multiverse make questions around diversity of cognitive experience more complex and make literally linear population ethics incoherent due to infinities. But, I think you pretty plausibly end up with something that roughly resembles linear aggregation via something like UDASSA.
I am not familiar with any! I’ve only seen these estimates arrived at via this IMO crazy chain of logic. It’s plausible there are others, though I haven’t seen them. I also really have no candidates that don’t route at least through assumption one (hedonic utilitarianism), which I already think is very weak.
Like, I am not saying there is no way I could be convinced of this number. I do think that, as a consistency check, arriving at numbers not as crazy as this one is quite important in my theory of ethics for grounding whether any of this moral reasoning checks out, so I would start off highly skeptical, but I would of course entertain arguments, and once in a while an argument might take me to a place as prima facie implausible as this one (though I think it has so far never happened in my life for something that seems this prima facie implausible, but I’ve gotten reasonably close).
Again, I think there are arguments that might elevate the assumptions of this post into “remotely plausible” territory, but there are, I am pretty sure, no arguments presently available to humanity that elevate the assumptions of this post into “reasonable to take as a given in a blogpost without extensive caveats”.
I think if someone came to me and was like “yes, I get that this sounds crazy, but I think here is an argument for why 7 bees might be more important than a human” then I would of course hear them out. I don’t think considering this as a hypothesis is crazy.
If someone comes to me and says “Look, I did not arrive at this conclusion via the usual crazy RP welfare-range multiplication insanity, but I have come to the confident conclusion that 7 bees are more important than a human, and I will now start using that as the basis for important real-world decisions I am making” then… I would hear you out, and also honestly make sure I keep my distance from you and update you are probably not particularly good at reasoning, and if you take it really seriously, maybe a bit unhinged.
So the prior analysis weighs heavily in my mind. I don’t think we have much good foundational grounding for morality that allows one to arrive at confident conclusions of this kind, that are so counter to basically all other moral intuitions and heuristics we have, and so if anyone does, I think that alone is quite a bit of evidence that something fishy is going on.
Hmm, I guess I think “something basically like hedonic utilitarianism, at least for downside” is pretty plausible.
Maybe a big difference is that I feel like I’ve generally updated away from putting much weight on moral intuitions / heuristics except with respect to forbidding some actions because they violate norms, are otherwise uncooperative, seem like the sort of thing which would be a bad societal policy, are bad for decision theory reasons, etc. So, relatively weak cases can swing me far because I started off being quite unopinionated without putting that much weight on moral intuitions (which feel like they often come from a source mostly unrelated to what I ultimately terminally care about).
I do agree that just directly using “Rethink Priorities says 15%” without flagging relevant caveats is bad.
A shitty summary of the case I would give would be something like:
It seems plausible we should be worried about suffering in a way which doesn’t scale (that much) with the size/complexity of brains in practice. Maybe the thing which is bad about suffering is pretty simple. E.g., I don’t notice that the complexity of my thought has huge effects on my suffering as far as I can tell.
I think there is a case for some asymmetry between downside and upside with respect to complexity, at least in the regime of the biological brains we see in front of us.
If so, then maybe bees have the core suffering circuitry which causes the badness and this is pretty similar to humans.
Then, we have to aggregate this with other arguments for humans being much more important. The aggregation is super non-obvious (and naive averaging isn’t valid due to two envelope problems), but I feel like an intuition for being conservative about suffering points in favor of worrying about bee suffering if there is a chance it matters comparably to human suffering.
Overall, this doesn’t get me to 15%, more like 1% (with a bunch of the discount occurring in aggregation over different views), but 1% is still a lot. (This is all within the frame of the argument.)
I can imagine different moral intuitions (e.g. intuitions more like those of Tomasik) that get to more like 15% by having somewhat different weighting. I think these seem a bit strong to me, but not totally insane.
In practice, the part of my moral views which is compelled by this sort of thing ends up focused on longtermism rather than insect welfare.
(I’m not currently planning on engaging further and I’m extremely sympathetic to you doing the same.)
I am repeatedly failing to parse this sentence, specifically from where it becomes italicized, and I think there’s probably a missing word. Are you avoiding putting weight on what moral intuitions expect? Did you mean except? (I hope someone who read this successfully can clarify.)
oops, I meant except. My terrible spelling strikes again.
Do you have a preferred writeup for the critique of these methods and how they ignore our evidence about brain mechanisms?
[Edit: though to clarify, it’s not particularly cruxy to me. I hadn’t heard of this report and it’s not causal in my views here.]
There are lots of critiques spread across lots of forum comments, but no single report I could link to. But you can see the relevant methodology section of the RP report yourself here:
https://rethinkpriorities.org/research-area/the-welfare-range-table/
You can see they approximately solely rely on behavioral proxies. I remember there was some section somewhere in the sequence arguing explicitly for this methodology, using some “we want to make a minimum assumption analysis and anything that looks at brain internals and neuron counts would introduce more assumptions” kind of reasoning, which I always consider very weak.
I currently think neuron count is a much better basis for welfare estimates than the RP welfare ranges (though it’s still not great, and to be clear, I consider hedonic utilitarianism in general not a great foundation for ethics of any kind).
I agree that neuron count carries some information as a proxy for consciousness or welfare, but it seems like a really bad and noisy one that we shouldn’t place much weight on. For example, in humans the cerebellum is the brain region with the largest neuron count but it has nothing to do with consciousness.
It’s not clear to me that a species which showed strong behavioural evidence of consciousness and valenced experience should have their welfare strongly discounted using neuron count.
(To be clear, I get that your main problem with RP is the hedonic utilitarianism assumption which is a fair challenge. I’m mainly challenging the appeal to neuron count.)
EDIT: Adding some citations since the comment got a reaction asking for cites.
This paper describes a living patient born without a cerebellum. Being born without a cerebellum impairs motor function but does not prevent sustaining a conscious state.
Neuron counts in this paper put the cerebellum around ~70 billion neurons and the cortex (associated with consciousness) around ~15 billion neurons.
You can’t have “strong behavioral evidence of consciousness”. At least in my theory of mind it is clear that you need to understand what is going on inside of a mind to get strong evidence.
Like, modern video game characters (without any use of AI) would also check a huge number of these “behavioral evidence” checkboxes, and really very obviously aren’t conscious or moral patients of non-negligible weight.
You also have subdivision issues. Like, by this logic you end up thinking that a swarm of fish is less morally relevant than the individual fish that compose it.
Behavioral evidence is just very weak, and the specific checkbox approach that RP took also doesn’t seem to me like it makes much sense even if you want to go down the behavioral route (in particular, in my opinion you really want to gather behavioral evidence to evaluate how much stuff there is going on in the brain of whatever you are looking at, like whether it has complicated social models and long-term goals and other things).
I agree strongly with both of the above points—we should be supplementing the behavioural picture by examining which functional brain regions are involved and whether these functional brain regions bear similarities with regions we know to be associated with consciousness in humans (e.g. the pallium in birds bears functional similarity with the human cortex).
Your original comment calls out that neuron counts are “not great” as a proxy but I think a more suitable proxy would be something like functional similarity + behavioural evidence.
(Also edited the original comment with citations)
To be clear, my opinion is that we have no idea what “areas of the brain are associated with consciousness” and the whole area of research that claims otherwise is bunk.
This comment is a longer and more articulate statement of the comment that I might have written. It gets my endorsement and agreement.
Namely, I don’t think that high levels of confidence in a particular view about “level of consciousness” or the moral weight of particular animals is justified, and it especially seems incorrect to state that any particular view is obvious.
Further, it seems plausible to me that at reflective equilibrium, I would regard a pain-moment of an individual bee as approximately morally equivalent to a pain-moment of an individual human.
Strongly seconded.
Suppose that two dozen bees sting a human, and the human dies of anaphylaxis. Is the majority of the tragedy in this scenario the deaths of the bees?
I could be convinced that I have an overly-rosy view of honey production. I have no real information on it besides random internet memes, which give me an impression like ‘bees are free to be elsewhere, but stay in a hive where some honey sometimes gets taken because it’s a fair trade for a high-quality artificial hive and an indestructible protector.’ That might be propaganda by Big Bee. That might be an accurate summary of small-scale beekeepers but not of large-scale honey production. I am not sure, but I could be convinced on this point.
But the general epistemics on display here do not encourage me to view this as a more trustworthy source than internet memes.
FYI, this isn’t a good characterization of the view that I’m sympathetic to here.
The moral relevance of pain and the moral relevance of death are importantly different. The badness of pain is very simple, and doesn’t have to have much relationship to higher-order functions relating to planning, goal-tracking, narrativizing, or relationships with others. The badness of death is tied up in all that.
I could totally believe that, at reflective equilibrium, I’ll think that if I were to amputate the limb of a bee without anesthetic, the resulting pain is morally equivalent to that of amputating a human limb without anesthetic. But I would be surprised if I come to think that it’s equally bad for a human to die and a bee to die.
Expand? This seems like a crucial and very false part of this argument.
I agree with this, but would strike the ‘extremely’. I don’t actually have gears-level models for how some algorithms produce qualia. ‘Something something, self-modelling systems, strange loops’ is not a gears-level model. I mostly don’t think a million-neuron bee brain would be doing qualia, but I wouldn’t say I’m extremely confident.
Consequently, I don’t think people who say bees are likely to be conscious are so incredibly obviously making a mistake that we have to go looking for some signalling explanation for them producing those words.
This number is nonsense by the way. If you click through to the original source you’ll see that it excludes shrimp and other marine animals.
To me, this comment seems very overconfident. We have no idea what it is like to be anything other than humans. I think it makes sense to use things like e.g. number of neurons as an extremely rough estimate of capacity for suffering, but that’s just because we have no good metrics to go off, and something that you can plausibly argue is maybe correlated with capacity for suffering is better than just saying “well, I guess we don’t know”.
Perhaps certain animals in certain niches experience pain much more intensely than humans, because it was adaptive in their environment. Is this true? Probably not! I have no idea! But we have no idea what other animals experience, so for all we know, it could be true, and then all of our rough estimates and approximations are completely worthless!
I just don’t think that saying things like “extremely unlikely” or implying someone hasn’t “thought about [x] reasonably at all” is either productive or particularly accurate when we’re talking about something for which we have very little well-grounded knowledge.
And just to be clear, I do think we should be prioritising based on the little information we do have. I’m not for throwing our hands up and giving in to ignorance. I just think a lot more epistemic humility is warranted around subjects like this where we really know very very little and the stakes are extremely high.
(if I’ve misunderstood you or if something I’ve said is inaccurate, please correct me!)
I agree that some amount of extreme uncertainty is appropriate, but this doesn’t mean that no conclusions are therefore insane. If someone was doing estimates that take into account extreme uncertainty, I would be much less upset! Instead the post says things like this:
That is not a position of extreme uncertainty! And I really don’t think there exist any arguments that would collapse this uncertainty in a reasonable way for the OP here, that I just haven’t encountered.
I think a reasonable position on ethical values is extreme uncertainty. This post is not holding that position. It seems to think that it’s a conservative estimate that a day of honey bee life is 15% as bad as a bad human day.
I don’t think it’s obvious at all?
Clearly you agree you at least have to multiply by some other number. You clearly can’t make any decisions on the basis of just years of animal life (which number to choose is the content of the rest of my comment).
Also, this number isn’t even correct! RP says directly:
I think the 97% number is just completely made up, or at least I have no idea where it comes from. I don’t see it following from obvious fermi estimates, and RP research reports, which the post itself repeatedly uses as an authoritative source, directly contradict it.
I don’t think it is obvious that you have to multiply by some other number.
I don’t know how conscious experience works. Some views (such as Eliezer’s) hold that it’s binary: either a brain has the machinery to generate conscious experience or it doesn’t. That there aren’t gradations of consciousness where some brains are “more sentient” than others. This is not intuitive to me, and it’s not my main guess. But it’s on the table, given my state of knowledge.
Most moral theories, and moral folk theories, hold to the common sense claim that “pain is bad, and extreme pain is extremely bad.” There might be other things that are valuable or meaningful or bad. We don’t need to buy into hedonistic utilitarianism wholesale, to think that pain is bad.
Insofar as we care about reducing pain and it might be that brains are either conscious or not, it might totally be the case that we should be “adding up the experience hours”, when attempting to minimize pain.
And in particular, after we understand the details of the information processing involved in producing consciousness, we might think that weighting by neuron count is as dumb as weighting by “the thickness of the copper wires in the computer running an AGI.” (Though I sure don’t know, and neuron count seems like one reasonable guess amongst several.)
I mean, I agree that if you want to entertain this as one remote possibility, sure, go ahead, I am not saying morality could not turn out to be weird. But clearly you can construct arguments of similar quality for at least hundreds, if not thousands or tens of thousands, of distinct conclusions.
If you currently want to argue that this is true, and a reasonable assumption on which to make your purchase decisions, I would contend that yes, you are also very very confused about how ethics works.
Like, you can have a mutual state of knowledge about the uncertainty and the correct way to process that uncertainty. There are many plausible arguments for why random.org will spit out a specific number if you ask it for a random number, but it is also obvious that you are supposed to have uncertainty about what number it outputs. If someone shows up and claims to be confident that random.org will spit out a specific number next, they are obviously wrong, even if there was actually a non-trivial chance the number they were confident in will be picked.
The top-level post calculates an estimate in expectation. If you calculate something in expectation, you are integrating your uncertainty. If you estimate that a random publicly traded company is worth 10x its ticker price, you might not be definitely wrong, but it is clear that you need to have a good argument, and if you do not have one, then you are obviously wrong.
Also, to be fair, most of this seems addressable with somewhat more sustainable apiculture practices. Unlike with meat, killing the bees isn’t a necessary step of the process; it’s just a side effect of carelessness or excessively cheap shortcuts. Bee-suffering-free honey would just cost a bit more, and that’s it.
This doesn’t seem at all conservative based on your description of how honey bees are treated, which reads like it was selecting for the worst possible things you could find plausible citations for. In fact, very little of your description makes an argument about how much we should expect such bees to be suffering in an ongoing way day-to-day. What I know of how broiler chickens are treated makes suffering ratios like 0.1% (rather than 10%) seem reasonable to me. This also neglects the quantities that people are likely to consume, which could trivially vary by 3 OoM.
If you’re a vegan, I think there are a bunch of good reasons not to make exceptions for honey. If you’re trying to convince non-vegans who want to cheaply reduce their own contributions to animal suffering, I don’t think they should find this post very convincing.
Do you think that makes sense? I haven’t looked into how well salmon compare to bees at problem-solving and the various other stuff you mention, but it feels pretty sus offhand.
Bees are more social than salmon. I haven’t put serious thought into it, but I can see an argument that sociality is an important factor in determining intensity-of-consciousness. (Perhaps because sociality requires complex neuron interactions that give rise to certain conscious experiences?)
Bees are at the other end, like ants: they are so social that you have to start wondering where the individual bee ends and the hivemind begins. That brings us to the question of how consciousness relates to sheer complexity of information processing versus integration.
This to me is one of those Hofstadterian arguments that sounds (and is) very clever and is definitely logically possible, but doesn’t seem very likely to me when you look at it numerically. Not an expert, but as I understand it intra-bee communication still has many more bits than inter-bee communication, even among the most eusocial species. So bees are much closer to comrades working together for a shared goal than to individual cells in a human body, in terms of their individuality.
A fair point, but more relevant to the issue at hand is—is it sociality that gives rise to consciousness, or is it having to navigate social strategy? Even though there is likely no actual single “beehivemind”, so to speak, is consciousness more necessary when you’re so social that simply going along with very well established hierarchies and patterns of behaviour is all you need to do to do your part, or is it superfluous at that point since distinction between self and other and reflection on it aren’t all that important?
Salmon is incredibly unlikely to have qualia, there’s approximately nothing in its evolutionary history that correlates with what qualia could be useful for or a side-effect of. I’m fine with eating salmon. Bees are social; I wouldn’t eat bees.
I’m happy to make a bet that you win if salmon have qualia and bees don’t, I win if bees have qualia and salmon don’t, and N/A otherwise, resolves via asking a CEV-aligned AGI.
Can you elaborate on this? I ask because this is far from obvious to me (in fact quite implausible), and I think you probably have beliefs about qualia that I don’t share, but I want to know if I’m missing out on any strong arguments/supporting facts (either for those foundational views, or something salmon-specific).
Sure! Mostly, it’s just that a lot of stuff that correlates with specific qualia in humans doesn’t provide any evidence about qualia in other animals. Reinforcement learning (behavior that seeks the things that, when encountered, update the brain to seek more of them, and avoids the things that update the brain to avoid them) doesn’t mean that there are any circuits in the animal’s brain for experiencing these updates from the inside, as qualia, the way humans do when we suffer. If I train a very simple RL agent with the feedback that salmon get via mechanisms that produce pain in humans, the RL agent will learn to demonstrate salmon’s behavior, while we can be very confident there are no qualia in that RL agent. Basically all of the evidence Rethink and others present is of the kind that simple RL agents would also exhibit, and doesn’t add anything on top of “it’s a brain of that size that can do RL and has this evolutionary history”.
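A minimal version of the RL agent the comment gestures at might look like this (a tabular learner on a two-armed bandit; the “noxious” arm and reward values are placeholders): it reliably learns pain-like avoidance behavior, and nobody thinks it suffers.

```python
import random

# Two-armed bandit: arm 0 delivers a "noxious" signal (negative reward),
# arm 1 delivers "food" (positive reward). Simple tabular value learning.
q = [0.0, 0.0]                  # learned value estimate for each arm
alpha, epsilon = 0.1, 0.1       # learning rate, exploration rate

def reward(arm: int) -> float:
    return -1.0 if arm == 0 else 1.0

for _ in range(1_000):
    # Epsilon-greedy: usually pick the best-looking arm, sometimes explore.
    arm = random.randrange(2) if random.random() < epsilon else q.index(max(q))
    q[arm] += alpha * (reward(arm) - q[arm])

print(q)  # arm 0 valued negatively, arm 1 positively: robust avoidance,
          # with nothing in the system anyone would call suffering
```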
The reason we know other humans have qualia circuits in their brains is that these circuits have outputs that make humans talk about qualia even if they’ve not heard others talk about qualia (this would’ve been very surprising if that happened randomly).
We don’t have anything remotely close to that for any non-human animals.
For many things, we can assume that something like what led to humans having qualia has been present in the evolutionary history of that thing; or we have tests (such as a correct mirror test) that likely correlate with the kinds of things that lead to qualia; but among all known fish species we’ve done these experiments on, there are very few that have any social dynamics of the kind that would maybe correlate with qualia or can remotely pass anything like a mirror test, and salmon is not among them.
∆, I will cease eating honey and eat more mussels instead.
Plausibly worth it to post my updated, completely guessed hierarchy of animal foods by how much suffering they cause (ignoring wild animal suffering concerns; based on this post):
Shrimp
Honey
Farmed catfish
Farmed salmon
Battery cage eggs
Chicken
Free range eggs
Turkey
Pork
Beef
Mussels
Milk and other milk products
Is your placement of free-range eggs because it’s a watered-down term, or because you think even actual-free-range/pastured chickens are suffering immensely?
I know this answer is disappointing, but I vaguely remember looking into in a longer LLM conversation what free-range entails and concluding it’s not much better than e.g. indoor-farmed eggs. It’s somewhere in the middle of my priority queue to improve and expand on the Tomasik numbers, but I probably don’t have capacity.
I am a beekeeper, and I feel that all these assumptions that honeybees in the care of a keeper live horrible lives are mistaken. In fact, when in the care of a beekeeper, their lives are usually better than the average life of a feral colony. The only exception to this is migratory beekeeping, where even with best effort, it is hard to care for the bees under the constant stress of being moved. But if you like your almonds and oranges from California, I wouldn’t complain too much.
Feral colonies live in cavities in trees, usually consisting of 40 liters of space. Even with walls up to 5 inches thick, colonies living in tree cavities have a 25% to 50% mortality rate, and almost a 75% mortality rate over winter if the colony was established that year. That makes the 30% mortality of bees kept by beekeepers seem rather small. Langstroth hives do have fairly thin walls, but most beekeepers provide a nice windbreak and might even wrap them in black plastic to keep them warm. Even if these were not offered, honeybees can generate their own heat as long as they have calories (honey) to burn. Keepers feed their bees if they seem a little low on stores to assure their survival.
1 kg of honey consumed probably doesn’t mean 200,000 days of extra bee farming! My bees produced 30 pounds (roughly 15 kg) of honey in a single week. As for parasites and diseases, honeybees are viewed as a beekeeper’s responsibility. We treat our bees for parasites, such as varroa, and for diseases that occur regularly. Feral colonies do not have this privilege of being looked after.
I see that the argument of a very short life in bees was made. Honeybees work themselves literally to death to gather nectar to turn into honey. It is part of a honeybee’s biology. They would do it in the wild, and they would do it even if they had more than enough food. They do it to themselves for the good of the colony. Honeybees should not be viewed as individuals, but as a single organism.
All in all, honey is technically “farmed” by honeybees, but it should be safe for vegans to eat. It is all natural. If honey can’t be eaten by vegans because it was produced by bees, then neither can fruits and vegetables, because more than 30% of them were pollinated by honeybees, making them likewise a product of bees. Eating honey does not contribute to bee farming and bee death.
It seems to me that literally every one of these arguments can be applied to farming, which kills untold numbers of insects and other animals no matter how it is performed. How do you justify eating anything that is not foraged?
In almost all cases, animals are fed farmed alfalfa and grain with several times the caloric value of the meat they produce, so even if you’re worried about wild animal suffering from growing crops, we’d grow fewer crops by producing food for people to eat directly rather than food for animals to inefficiently convert to meat.
That’s all well and good, but all forms of farming kill hundreds of times more insects than beekeeping does, as a byproduct of protecting crops from pests, and untold numbers of rodents and insects are destroyed by the act of soil preparation, even barring any attempts at pest control. The fact is, your single little patch of organic farm producing enough vegetables to keep you alive for a year requires the deaths, and therefore the suffering, of many more organisms than beekeeping does. If you have a different calculation, I would love to see it, but quite honestly I think it is going to be self-justifying.
That’s a good point: farming also causes large numbers of insects to die whether we bring bees in or not. The post seems to argue that bees in particular are smarter/more important than other insects, though. I’d also expect in most cases that bees are being brought to farms, not wild fields, so the (alleged) suffering of bees is on top of the suffering of other insects on the farm, not an alternative to it. Although, maybe once you’ve done the work of preparing a field, having bees produce honey from it is less bad than preparing additional fields.
I tend to think farming decreases wild animal suffering by lowering wild animal populations https://reducing-suffering.org/humanitys-net-impact-on-wild-animal-suffering/
The whole “wild animals suffer, therefore they should be eradicated for their own good” argument is obviously broken to me. To wit—if an alien civilization reached Earth in antiquity, would they have been right to eradicate humanity to free it from its suffering since everyone was toiling the whole day on the fields and suffering from hunger and disease? What if they reached us now but found our current lifestyle similarly horrible compared to their lofty living standards?
Living beings have some kind of adjustable happiness baseline level. Making someone happy isn’t as simple as triggering their pleasure centres all the time, and making someone not unhappy isn’t as simple as preventing their pain centres from ever being triggered (even if this means destroying them).
It’s not clear that Bentham would advocate eradicating those species. There could very well be utilitarian value in keeping a species around, just at reduced population counts. In your alien example, I think you could plausibly argue that it’d be good if the aliens reduced the suffering human population to a lower number, until we were advanced enough to be on-net happy. Or if having a larger suffering population would be good because it would speed up technological progress, that would be an important disanalogy between your thought experiment and the wild animal case.
The argument also doesn’t rely on any of this? It just relies on it being possible to compare the value of two different world-states.
I hold it that in general trying to sum the experiences of a bunch of living beings into a single utility function is nonsense, but in particular I’d say it does matter even without that. My point is that we judge wild animal welfare from the viewpoint of our own baseline. We think “oh, always on the run, half starved, scared of predators/looking for prey, subject to disease and weather of all sorts? What a miserable life that would be!” but that’s just us imagining ourselves in the animal’s shoes, while still holding onto our current baseline. The animals have known nothing else, in fact have evolved in those specific conditions for millions of years, so it would actually be strange if they experienced nothing but pain and fear and stress all the time—what would be the point of evolving different emotional states at all if the dial is always on “everything is awful”? So my guess is, no, that’s not how it works, those animals do have lives with some alternation of bad and good mental states, and may even fall on the net positive end of the utility scale. Factory farming is different because those are deeply unnatural conditions that happen to be all extreme stressors in the wild, meaning the animals, even with some capability to adjust, are thrown into an out-of-distribution end of the scale, just like we have raised ourselves to a different out-of-distribution end (where even the things that were just daily occurrences for us at the inception of our species look like intolerable suffering because we’ve raised our standard of living so high).
Nonsense feels too strong to me? That seems like the type of thing we should be pretty uncertain about—it’s not like we have lots of good evidence either way on meta-ethics that we can use to validate or disprove these theories. I’d be curious what your reasoning is here? Something like a person-affecting view?
This seems like a different point than the one I responded to (which is fine obviously), but though I share the general intuition that it’d make sense for life in the wild to be roughly neutral on the whole, I think there are also some reasons to be skeptical of that view.
First, I don’t see any strong positive reason why evolution should make sure it isn’t the case that “they experienced nothing but pain and fear and stress all the time”. It’s not like evolution “cares” whether animals feel a lot more pain and stress than they feel pleasure and contentment, or vice versa. And it seems like animals—like humans—could function just as well if their lives were 90% bad experiences and 10% good experiences, as with a 50/50 split. They’d be unhappy of course, but they’d still get all the relevant directional feedback from various stimuli.
Second, I think humans generally don’t find intense pleasure (e.g., orgasms or early jhanas) as strongly preferable as intense pain (e.g., from sudden injury or chronic disease) is dispreferable. (Cf. when we are in truly intense pain, nothing matters other than making the pain go away.) But if we observe wild animals, they probably experience pain more often than pleasure, just based on the situations they’re in. E.g., disease, predation, and starvation seem pretty common in the animal kingdom, whereas sexual pleasure seems pretty rare (almost always tied to reproduction).
Third and relatedly, from an evolutionary perspective, bad events are typically more bad (for the animal’s reproductive fitness) than good events are good. For example, being eaten alive and suffering severe injury means you’re ~0% likely to carry on your genes, whereas finding food and mating doesn’t make you 100% likely to carry on your genes. So there’s an asymmetry. That would be a reason for evolution to make negative experiences more intense than positive experiences. And many animals are at risk of predation and disease continuously through their lives, whereas they may only have relatively few opportunities for e.g., mating or seeing the births of their offspring.
Fourth, most animals follow r-selection strategies, producing many offspring of which only a few survive. Evolution probably wouldn’t optimize for those non-surviving offspring to have well-tuned valence systems, and so they could plausibly just be living very short lives of deprivation and soon death.
I agree.
If it is good to lower wild animal populations, what do you think their optimum population would be?
Not OP, but my best guess in unmanaged natural environments is zero (70%).
Your best guess about the OP’s view, or your own view?
My own view, sorry for the confusion. But I’d be interested in what Bentham’s Bulldog’s position on this is! It’s also always tricky to make ethical statements that are not on the margin.
The thing that gets me is I eat a vegan diet and forgo honey but I also use bug spray, so I’ve got a revealed preference to kill insects if it’ll make my life moderately better.
That said, I’m probably not killing insects on the scale of the bee suffering given for honey, especially since if I stopped being strict about honey I’d just be eating e.g. cereal that has honey as an ingredient, rather than eating honey in its own right, so the amounts would be minuscule. IDK.
I don’t believe that most insect repellents cause direct permanent harm to insects in the way that they’re used in on-body spray, if that’s your main concern. I’m far from an expert on the subject, but it seems like the two major synthetic repellents only (temporarily?) mask insects’ perception of the odorants leading them to you (https://en.wikipedia.org/wiki/Icaridin#Mechanism_of_action) rather than causing any overt sensation.
I was talking about the spray that kills insects, so yeah, my revealed preferences definitely call for insect death.
I’m not sure about the rest of the arguments in the post, but it’s worth flagging that a kg-to-kg comparison of honey to chicken is kind of inappropriate. Essentially no one eats honey in amounts comparable to the chicken a typical carnivore eats (I didn’t, like, try to calculate this, but it seems obviously right).
Is there artificial honey that is almost indistinguishable?
This company claims theirs is: https://www.elevenmadisonhome.com/story/mellody-honey
It’s for sale here ($28/9 oz): https://www.elevenmadisonhome.com/product/mellody-plant-based-honey
Nope.
Having these numbers by weight seems less useful than having them by calorie, since not all animal products are equally calorically dense.
(I admit, calories are a proxy for nutrition, and weight is perhaps a proxy for calories, but the fewer proxies we have for the thing we actually need to measure in a consequentialist accounting, the better!)
Honey is about 3,000 kcal per kg and beef about 2,900, so pretty similar. I’m more concerned that they’re not using typical consumption rates: you could stop eating pork and eat a similar amount of beef instead (or tofu or Beyond, of course), but nobody is replacing a 200 g steak with 200 g of honey.
I’d imagine the average serving size of honey is 10-30 g, and a heavy honey consumer eats on the order of 10 servings a week. My dad was a beekeeper when I was growing up, and even as a family we didn’t go through 1 kg of his honey a week (free to us and extremely high quality; he sold it at markets for an overall profit), but we went through several kilos of meat.
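For what it’s worth, here’s a rough back-of-the-envelope sketch combining the per-calorie and per-consumption points above. The per-kg farming-day figures are the post’s; the calorie densities and annual consumption amounts are illustrative assumptions of mine, so treat the output as directional only.

```python
# Back-of-the-envelope: normalize the post's per-kg farming-day figures
# by calories and by assumed per-person consumption. The calorie and
# consumption numbers below are illustrative assumptions, not measured data.

farming_days_per_kg = {"honey": 200_000, "chicken": 23, "beef": 2}  # post's figures
kcal_per_kg = {"honey": 3_000, "chicken": 2_000, "beef": 2_900}     # rough estimates

# Assumed annual consumption (kg/person/year): a heavy honey user at
# ~20 g/day vs. fairly typical chicken and beef eaters.
annual_kg = {"honey": 0.02 * 365, "chicken": 50, "beef": 25}

for food in farming_days_per_kg:
    per_1000_kcal = 1_000 * farming_days_per_kg[food] / kcal_per_kg[food]
    per_year = farming_days_per_kg[food] * annual_kg[food]
    print(f"{food:7s}: {per_1000_kcal:10,.2f} farming-days per 1,000 kcal, "
          f"{per_year:11,.0f} farming-days per year")
```

Even after switching from per-kg to per-consumer terms, honey comes out roughly three orders of magnitude ahead of chicken in raw farming-days under these assumptions, so the comparison-base objection narrows the gap without closing it (before any discount for the intensity or moral weight of bee experience).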
After an initially harsh reaction to this, upon reflection I realized I do care about bee experience, want bees to be healthy and have a good time, and think the conventional honey industry is quite bad. I’ve thought this for a while.
I’ve spent a lot of time around bees and I’ve eaten lots of honey that I’ve seen them making. In the contexts in which I’ve interacted with bees, I’d guess it’s very unlikely they are having a bad time relative to bees in the wild, and that if there’s any mean valence associated with their experience, it’s positive. I’m aware that lots of bees die and suffer as part of the process.
I will therefore continue buying and eating honey from my local beekeepers at https://www.howeverwildhoney.com/ and am grateful to them for producing it.
Is this supposed to be “harmful”? As worded, this sentence is confusing.
I’m very sympathetic to this general case, but the post does raise a bunch of red flags. I asked Claude to summarize how good the life of a typical bee is, and it presented a far less negative picture. I’m not sure I can trust this article more than that.
Although I don’t super like honey, so I might stop eating it (or eat less of it) anyway.
One reason to think that bee suffering and human suffering are comparably important (within one or two orders of magnitude) is just that suffering is suffering. When pain is intense enough, you can’t really experience much other than the pain: you can’t think clearly, you can’t do all of the cognitive things that seem to separate us from bees, you just experience suffering in some raw form, and that seems very bad. If we imagine a bee’s suffering is something like this, it seems bad in a similar way to human suffering.
But one (not the only) issue here is that this way of viewing human suffering treats the human mind as a discrete entity. There is one individual who is suffering, there is one bee which is suffering, and these seem like comparable things.
I don’t think that’s a reasonable model of the mind. Instead, there are many separate but interconnected parts of the mind, all of them suffering when we are in pain. The bee, by virtue of being a simpler creature, has a mind made up of many fewer such parts, and thus there are simply fewer beings suffering in this way when a bee suffers than when a human does.
Of course, these separate parts of the mind integrate into a larger whole, but that doesn’t make them not present. And I think noticing that the mind is made up of many distinct parts gives a better intuitive picture of what a person is than thinking of us as discrete entities does. If we take this picture seriously, it clearly justifies a moral distinction (not of kind but of quantity) between more complex and less complex beings: on this simplification, a human mind is made up of more ‘people’ than a bee’s mind. That justifies ideas like treating neuron count as an important moral consideration.
Again, the separate agents within the mind interact and merge to create a larger emergent entity, yet there remain distinctions between them, and those distinctions should make us think that treating a human as a single agent and a bee as a single agent on a par with it is misguided.
Do you ever use LLMs? (They have a lot more neurons than bees, and it’s unclear why consuming honey is worse than using LLMs.)
Yeah, especially within the framing that upweights the behavioral proxies by such a huge margin. And they also have more units: a bee has under a million neurons, while frontier models have on the order of a trillion parameters (admittedly pretty different things).
Other comments have addressed your comparison of bee to human suffering, so I would like to set it aside and comment on “don’t eat honey” as a call to action. I think people who eat honey (except for near-vegans who were already close to giving it up) are not likely to be persuaded to stop. However, similar to meat-eaters who want to reduce animal suffering caused by the meat industry, they can probably be persuaded to buy honey harvested from bees kept in more suitable[1] conditions. For those people, you could advocate for a “free-range” type of informal standard for honey that means the bees were kept outside in warmer hives, etc. Outdoor vs. indoor is a particularly easy Schelling point. Even with the kind of cheating the “free-range” label has been subject to, it seems like it would incentivize beekeeping practices that are better for the bees.
[1] This can mean “more natural” in the sense of “the way bees are adapted to live in nature” but not necessarily “more natural” in the sense of using natural materials and pre-modern practices. The article “To save honey bees we need to design them new hives” linked in the post notes: “We already know that simply building hives from polystyrene instead of wood can significantly increase the survival rate and honey yield of the bees.” (Link in the original.)
Well argued, and it addresses the obvious question: “okay, but if they’re sentient at all, it’s got to be a tiny amount, right?”
This seems like the crux of the disagreement I’ve had with some of your previous points: sure, pain is bad, but the intensity or “realness” of pain has to be on a spectrum of some sort, it seems to me.
It does seem like you’ve got to make that adjustment to avoid another implausible conclusion: that sentience “switches on” at some point, leaving a spider (say) non-sentient but a beetle fully sentient, or some other narrow dividing line.
Intelligence wouldn’t be the same spectrum, but it does seem like a bacterium isn’t complex enough to have a subjective feeling of suffering, and very simple insects probably have very little of it.
Bees are an interesting exception in having relatively complex behavior and learning. But managing 7% of human suffering while having only roughly 1/100,000 as many neurons doesn’t seem likely… I think their cognitive abilities probably aren’t correlated with general mental sophistication in the same way they are in mammalian or avian brains with their cortex-like structures.
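A quick sanity check on those ratios, as a sketch: the neuron counts below are standard published estimates, the 7% is the figure from the parent comment, and the code just compares the two ratios.

```python
# Compare the bee:human neuron-count ratio against the ~7% welfare figure
# from the parent comment. Neuron counts are standard published estimates.

bee_neurons = 1e6       # honeybee: ~960k, usually rounded to a million
human_neurons = 8.6e10  # human: ~86 billion

neuron_ratio = bee_neurons / human_neurons  # ~1.2e-5, i.e. about 1/86,000
welfare_ratio = 0.07                        # figure cited in the parent comment

print(f"neuron ratio : {neuron_ratio:.1e}  (~1/{human_neurons / bee_neurons:,.0f})")
print(f"welfare ratio: {welfare_ratio:.2f}")
print(f"welfare ratio exceeds neuron ratio by ~{welfare_ratio / neuron_ratio:,.0f}x")
```

So if moral weight scaled linearly with neuron count, the 7% estimate would be too high by more than three orders of magnitude; the real disagreement is over whether anything like linear scaling is the right prior.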