I think that it’s pretty reasonable to think that bee suffering is plausibly similarly bad to human suffering. (Though I’ll give some important caveats to this in the discussion below.)
More precisely, I think it’s plausible that I (and others) would conclude, on reflection[1], that the “bad” part of suffering is present in roughly the same “amount” in bees as in humans, such that suffering in both is very comparable. (It’s also plausible I’d end up thinking that bee suffering is worse due to e.g. higher clock speed.) This is mostly because I don’t strongly think that on reflection I would care about the complex aspects of the suffering or end up caring in a way which is more proportional to neuron count (though these are also plausible).
See also Luke Muehlhauser’s post on moral weights, which discusses a way of computing moral weights that implies it’s plausible that bees have similar moral weight to humans.[2]
I find the idea that we should be radically uncertain about moral-weight-upon-reflection-for-bees pretty intuitive: I feel extremely uncertain about core questions in morality and philosophy, which leaves extremely wide intervals. Upon hearing that some people put substantial moral weight on insects, my initial thought was that this was maybe reasonable but not very action relevant. I haven’t engaged with the Rethink Priorities work on moral weights and it isn’t shaping my perspective; my perspective is driven mostly by simpler and earlier views. I don’t feel very sympathetic to perspectives which are extremely confident in low moral weights (like this one), due to general skepticism about extreme confidence in most salient questions in morality.
Just because I think it’s plausible that I’ll end up with a high moral-weight-upon-reflection for bees relative to humans doesn’t mean that I necessarily think the aggregated moral weight should be high; this is because of two envelope problems. But, I think aggregation approaches that handle our current uncertainty in a way that assigns high overall moral weight to bees (e.g. a 15% weight like in the post) aren’t unreasonable. My off-the-cuff guess would be more like 1% if it were important to give an estimate now, but this isn’t very decision relevant from my perspective as I don’t put much moral weight on perspectives that care about this sort of thing. (To oversimplify: I put most terminal weight on longtermism, which doesn’t care about current bees, and then a bit of weight on something like common sense ethics which doesn’t care about this sort of calculation.) And, to be clear, I have a hard time imagining reasonable perspectives which put something like a >1% weight on bees but then focus on getting people to eat less honey rather than on other implications, given that they are riding the crazy train this far.
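To illustrate the two envelope problem mentioned above, here is a minimal sketch (with purely illustrative numbers, not anyone’s actual credences) of how naive expected-value aggregation of moral weights flips depending on which experience you hold fixed as the unit:

```python
# Toy illustration of the two envelope problem in moral weight aggregation.
# Numbers are made up for illustration; they are not anyone's actual credences.

# Two hypotheses about the bee:human moral weight ratio, with equal credence:
#   H1: bees matter about as much as humans (ratio 1.0)
#   H2: bees matter far less (ratio 0.001)
credences = [0.5, 0.5]
bee_per_human = [1.0, 0.001]

# Naive expected value with "one human-moment" fixed as the unit:
ev_bee_in_human_units = sum(p * r for p, r in zip(credences, bee_per_human))
print(ev_bee_in_human_units)  # 0.5005 -> bees get ~50% of human weight

# Naive expected value with "one bee-moment" fixed as the unit instead:
human_per_bee = [1.0 / r for r in bee_per_human]
ev_human_in_bee_units = sum(p * r for p, r in zip(credences, human_per_bee))
print(1.0 / ev_human_in_bee_units)  # ~0.002 -> bees get ~0.2% of human weight

# The same credences imply ~50% or ~0.2% depending on the choice of unit,
# which is why naive averaging across views isn't a valid aggregation here.
```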
Overall, I’m surprised by extreme confidence that a view which puts high moral weight on bees is unreasonable. It seems to me like a very uncertain and tricky question at a minimum. And, I’m sympathetic to something more like 1% (which isn’t orders of magnitude below 15%), though this mostly doesn’t seem decision relevant for me due to longtermism.
(Also, I appreciate the discussion of the norm of seriously entertaining ideas before dismissing them as crazy. But, then I find myself surprised you’d dismiss this idea as crazy when I feel like we’re so radically uncertain about the domain, and plausible views about moral weights and plausible aggregations over these views end up putting a bunch of weight on the bees.)
Separately, I don’t particularly like this post for several reasons, so don’t take this comment as an endorsement of the post overall. I’m not saying that this post argues effectively for its claims, just that these claims aren’t totally crazy.
As in, if I followed my preferred high-effort reflection procedure (one that e.g. takes vast amounts of computational resources and probably at least thousands of subjective years) with access to an obedient powerful AI and other affordances.
Somewhat interestingly, you curated this post. The perspective expressed in the post is very similar to one that gets you substantial moral weight on bees, though two envelope problems are of course tricky.
I think we both agree that the underlying question is probably pretty confused, and importantly and relatedly, both probably agree that what we ultimately care about probably will not be grounded in the kind of analysis where you assign moral weights to entities and then sum up their experiences.
The thing that creates a strong feeling of “I feel like people are just being crazy here” in me is the following chain of logic:
That hedonic utilitarianism of this kind is the right choice of moral foundation,
then somehow thinking that conditional on that the methodology in the RP welfare ranges is a reasonable choice of methodology (one that mostly ignores all mechanistic evidence about how brains actually work, for what seem to me extremely bad reasons),
then arriving at an extreme conclusion using that methodology (despite it still admitting a bunch of saving throws and reasonable adjustments one could make to have the conclusion not come out crazy),
and then saying that the thing you should take away from this is to stop eating honey.
There are many additional steps here beyond the “if you take a hedonic utilitarian frame as given, what is your distribution over welfare estimates”, each one of which seems crazy to me. Together, they arrive at the answer “marginal bee experience is ~15% as important to care about as human experience”[1], which is my critique.
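(For concreteness, and assuming the 15% is read as a linear per-individual weight, this is the arithmetic behind the “7 bees vs. one human” framing that comes up later in the thread:

$$\frac{1}{0.15} \approx 6.7,$$

i.e. roughly 7 bees’ experience would be weighted on par with one human’s.)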
the last step of seeing what implications it would have for your behavior is still relevant for this, because it’s the saving throw you have for noticing when a belief implies extreme conclusions, which is one of the core feedback loops for updating your beliefs.
And to be clear, the step where even if you take it as a given you arrive at a mean of 1% or 15% also seems crazy to me, but not alone crazy enough that I start desperately looking for answers unrelated to the logical coherence or empirical evidence of the chain of arguments that brought us here. Luke’s post doesn’t really give an answer here, it just gives enormous ranges (though IMO not ranges with enough room at the bottom), and the basic arguments that post makes for high variance make sense.
I think we both agree that the underlying question is probably pretty confused, and importantly and relatedly, both probably agree that what we ultimately care about probably will not be grounded in the kind of analysis where you assign moral weights to entities and then sum up their experiences.
I think I narrowly agree given my moral views, which are strongly influenced by longtermist-style thinking, though I think “assign weights and add experiences” isn’t way off from a perspective I might end up putting a bunch of weight on[1]. However, I do think “what moral weight should we assign bees” isn’t a notably more confused question in the context of animal welfare than “how should we prioritize between chicken welfare interventions and pig welfare interventions”. So, I think there at least exists a pretty common and broadly reasonable-ish perspective in which this question is sane.
The thing that creates a strong feeling of “I feel like people are just being crazy here” in me is the following chain of logic:
This feels a bit like a motte and bailey to me. Your original claim was “If anyone remotely thinks a bee suffering is 15% (!!!!!!!!) as important as a human suffering, you do not sound like someone who has thought about this reasonably at all. It is so many orders of magnitude away from what sounds reasonable to me”. This feels very different from claiming that the chain of logic you point out is crazy. One can totally arrive at conclusions similar to “bee suffering is 15% as important as a human suffering” via epistemic routes different from the one you outline. I don’t think it’s good practice to dismiss a claim in the way you did (in particular, calling the specific claim crazy) because the person making the claim also appears to be exhibiting a bunch of bad epistemic practices and you think they followed a specific chain of logic that you think is problematic. (I’m not necessarily saying this is what you did, just that this justification would have been bad.)
Maybe you think both “the claim in isolation is crazy” (what you originally said and what I disagree with) and “the process used to reach that claim here seems particularly crazy”. Or maybe you want to partially walk back your original statement and focus on the process (if so, it seems good to make this more explicit).
Separately, it’s worth noting that while Bentham’s Bulldog emphasizes the takeaway of “don’t eat honey”, they also do seem to be aware of and endorse other extreme conclusions of high moral weight on insects. (I wish they would also note in the post that this obviously has other more important implications than don’t eat honey!) So, I’m not sure that point (4) is that much evidence of a bad epistemic process in this particular case.
Considerations like an arbitrarily large multiverse make questions around diversity of cognitive experience more complex and make literally linear population ethics incoherent due to infinities. But, I think you pretty plausibly end up with something that roughly resembles linear aggregation via something like UDASSA.
One can totally arrive at conclusions similar to “bee suffering is 15% as important as a human suffering” via epistemic routes different from the one you outline.
I am not familiar with any! I’ve only seen these estimates arrived at via this IMO crazy chain of logic. It’s plausible there are others, though I haven’t seen them. I also really have no candidates that don’t route at least through assumption one (hedonic utilitarianism), which I already think is very weak.
Like, I am not saying there is no way I could be convinced of this number. I do think that, as a consistency check, arriving at numbers not as crazy as this one is quite important in my theory of ethics for grounding whether any of this moral reasoning checks out, so I would start off highly skeptical, but I would of course entertain arguments, and once in a while an argument might take me to a place as prima facie implausible as this one (though I think it has so far never happened in my life for something that seems this prima facie implausible, but I’ve gotten reasonably close).
Again, I think there are arguments that might elevate the assumptions of this post into “remotely plausible” territory, but there are, I am pretty sure, no arguments presently available to humanity that elevate the assumptions of this post into “reasonable to take as a given in a blogpost without extensive caveats”.
I think if someone came to me and was like “yes, I get that this sounds crazy, but I think here is an argument for why 7 bees might be more important than a human” then I would of course hear them out. I don’t think considering this as a hypothesis is crazy.
If someone comes to me and says “Look, I did not arrive at this conclusion via the usual crazy RP welfare-range multiplication insanity, but I have come to the confident conclusion that 7 bees are more important than a human, and I will now start using that as the basis for important real-world decisions I am making” then… I would hear you out, and also honestly make sure I keep my distance from you, update that you are probably not particularly good at reasoning, and, if you take it really seriously, conclude you are maybe a bit unhinged.
So the prior analysis weighs heavily in my mind. I don’t think we have much good foundational grounding for morality that allows one to arrive at confident conclusions of this kind, that are so counter to basically all other moral intuitions and heuristics we have, and so if anyone does, I think that alone is quite a bit of evidence that something fishy is going on.
Hmm, I guess I think “something basically like hedonic utilitarianism, at least for downside” is pretty plausible.
Maybe a big difference is that I feel like I’ve generally updated away from putting much weight on moral intuitions / heuristics except with respect to forbidding some actions because they violate norms, are otherwise uncooperative, seem like the sort of thing which would be a bad societal policy, are bad for decision theory reasons, etc. So, relatively weak cases can swing me far because I started off being quite unopinionated without putting that much weight on moral intuitions (which feel like they often come from a source mostly unrelated to what I ultimately terminally care about).
I do agree that just directly using “Rethink Priorities says 15%” without flagging relevant caveats is bad.
A shitty summary of the case I would give would be something like:
It seems plausible we should be worried about suffering in a way which doesn’t scale (that much) with the size/complexity of brains in practice. Maybe the thing which is bad about suffering is pretty simple. E.g., I don’t notice that the complexity of my thought has huge effects on my suffering as far as I can tell.
I think there is a case for some asymmetry between downside and upside with respect to complexity, at least in the regime of the biological brains we see in front of us.
If so, then maybe bees have the core suffering circuitry which causes the badness and this is pretty similar to humans.
Then, we have to aggregate this with other arguments for humans being much more important. The aggregation is super non-obvious (and naive averaging isn’t valid due to two envelope problems), but I feel like an intuition for being conservative about suffering points in favor of worrying about bee suffering if there is a chance it matters comparably to human suffering.
Overall, this doesn’t get me to 15%, more like 1% (with a bunch of the discount occurring in aggregation over different views), but 1% is still a lot. (This is all within the frame of the argument.)
I can imagine different moral intuitions (e.g. intuitions more like those of Tomasik) that get to more like 15% by having somewhat different weightings. These seem a bit strong to me, but not totally insane.
In practice, the part of my moral views which is compelled by this sort of thing ends up focused on longtermism rather than insect welfare.
(I’m not currently planning on engaging further and I’m extremely sympathetic to you doing the same.)
I’ve generally updated away from putting much weight on moral intuitions / heuristics expect with respect to forbidding some actions because they violate norms, are otherwise uncooperative, seem like the sort of thing which would be a bad societal policy, are bad for decision theory reasons, etc.
I am repeatedly failing to parse this sentence, specifically from where it becomes italicized, and I think there’s probably a missing word. Are you avoiding putting weight on what moral intuitions expect? Did you mean except? (I hope someone who read this successfully can clarify.)
oops, I meant except. My terrible spelling strikes again.
then somehow thinking that conditional on that the methodology in the RP welfare ranges is a reasonable choice of methodology (one that mostly ignores all mechanistic evidence about how brains actually work, for what seem to me extremely bad reasons)
Do you have a preferred writeup for the critique of these methods and how they ignore our evidence about brain mechanisms?
[Edit: though to clarify, it’s not particularly cruxy to me. I hadn’t heard of this report and it’s not causal in my views here.]
There are lots of critiques spread across lots of forum comments, but no single report I could link to. But you can see the relevant methodology section of the RP report yourself here:
https://rethinkpriorities.org/research-area/the-welfare-range-table/
You can see they approximately solely rely on behavioral proxies. I remember there was some section somewhere in the sequence arguing explicitly for this methodology, using some “we want to make a minimum assumption analysis and anything that looks at brain internals and neuron counts would introduce more assumptions” kind of reasoning, which I always consider very weak.
I currently think neuron count is a much better basis for welfare estimates than the RP welfare ranges (though it’s still not great, and to be clear, I also consider hedonic utilitarianism in general not a great foundation for ethics of any kind).
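As a rough sense of scale for how far apart these two proxies land, here is a back-of-the-envelope sketch using commonly cited approximate neuron counts (roughly a million for a honey bee and ~86 billion for a human); this is an illustration of the gap, not an endorsed welfare estimate:

```python
# Back-of-the-envelope comparison of a neuron-count proxy with the ~15% figure
# discussed in this thread. Neuron counts are rough, commonly cited estimates.
bee_neurons = 1e6       # honey bee brain: roughly ~10^6 neurons
human_neurons = 8.6e10  # human brain: roughly ~86 billion neurons

neuron_count_weight = bee_neurons / human_neurons
print(f"neuron-count proxy: {neuron_count_weight:.1e}")  # ~1.2e-05

rp_style_weight = 0.15
print(f"gap: {rp_style_weight / neuron_count_weight:.0f}x")  # ~13,000x apart
```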
I currently think neuron count is a much better basis for welfare estimates than the RP welfare ranges (though it’s still not great
I agree that neuron count carries some information as a proxy for consciousness or welfare, but it seems like a really bad and noisy one that we shouldn’t place much weight on. For example, in humans the cerebellum is the brain region with the largest neuron count but it has nothing to do with consciousness.
It’s not clear to me that a species which showed strong behavioural evidence of consciousness and valenced experience should have their welfare strongly discounted using neuron count.
(To be clear, I get that your main problem with RP is the hedonic utilitarianism assumption which is a fair challenge. I’m mainly challenging the appeal to neuron count.)
EDIT: Adding some citations since the comment got a reaction asking for cites.
This paper describes a living patient born without a cerebellum. Being born without a cerebellum leads to impaired motor function but has no impact on sustaining a conscious state.
Neuron counts in this paper put the cerebellum at ~70 billion neurons and the cortex (the region associated with consciousness) at ~15 billion neurons.
It’s not clear to me that a species which showed strong behavioural evidence of consciousness and valenced experience should have their welfare strongly discounted using neuron count.
You can’t have “strong behavioral evidence of consciousness”. At least in my theory of mind it is clear that you need to understand what is going on inside of a mind to get strong evidence.
Like, modern video game characters (without any use of AI) would also check a huge number of these “behavioral evidence” checkboxes, and really very obviously aren’t conscious or moral patients of non-negligible weight.
You also have subdivision issues. Like, by this logic you end up thinking that a swarm of fish is less morally relevant than the individual fish that compose it.
Behavioral evidence is just very weak, and the specific checkbox approach that RP took also doesn’t seem to me like it makes much sense even if you want to go down the behavioral route (in particular, in my opinion you really want to gather behavioral evidence to evaluate how much stuff there is going on in the brain of whatever you are looking at, like whether you have complicated social models and long-term goals and other things).
At least in my theory of mind it is clear that you need to understand what is going on inside of a mind to get strong evidence.
in particular, in my opinion you really want to gather behavioral evidence to evaluate how much stuff there is going on in the brain of whatever you are looking at, like whether you have complicated social models and long-term goals and other things
I agree strongly with both of the above points—we should be supplementing the behavioural picture by examining which functional brain regions are involved and whether these functional brain regions bear similarities with regions we know to be associated with consciousness in humans (e.g. the pallium in birds bears functional similarity with the human cortex).
Your original comment calls out that neuron counts are “not great” as a proxy, but I think a more suitable proxy would be something like functional similarity + behavioural evidence.
(Also edited the original comment with citations.)
we know to be associated with consciousness in humans
To be clear, my opinion is that we have no idea what “areas of the brain are associated with consciousness” and the whole area of research that claims otherwise is bunk.
This comment is a longer and more articulate statement of the comment that I might have written. It gets my endorsement and agreement.
Namely, I don’t think that high levels of confidence in any particular view about “level of consciousness” or the moral weight of particular animals are justified, and it especially seems incorrect to state that any particular view is obvious.
Further, it seems plausible to me that at reflective equilibrium, I would regard a pain-moment of an individual bee as approximately morally equivalent to a pain-moment of an individual human.