One can totally arrive at a conclusion similar to “bee suffering is 15% as important as human suffering” via epistemic routes different from the one you outline.
I am not familiar with any! I’ve only seen these estimates arrived at via this IMO crazy chain of logic. It’s plausible there are others, though I haven’t seen them. I also really have no candidates that don’t route at least through assumption one (hedonic utilitarianism), which I already think is very weak.
Like, I am not saying there is no way I could be convinced of this number. I do think that, as a consistency check, arriving at numbers not as crazy as this one is quite important in my theory of ethics for grounding whether any of this moral reasoning checks out, so I would start off highly skeptical. But I would of course entertain arguments, and once in a while an argument might take me to a place as prima facie implausible as this one (though I think it has so far never happened in my life for something that seems this prima facie implausible, but I’ve gotten reasonably close).
Again, I think there are arguments that might elevate the assumptions of this post into “remotely plausible” territory, but there are, I am pretty sure, no arguments presently available to humanity that elevate the assumptions of this post into “reasonable to take as a given in a blogpost without extensive caveats”.
I think if someone came to me and was like “yes, I get that this sounds crazy, but I think here is an argument for why 7 bees might be more important than a human” then I would of course hear them out. I don’t think considering this as a hypothesis is crazy.
If someone comes to me and says “Look, I did not arrive at this conclusion via the usual crazy RP welfare-range multiplication insanity, but I have come to the confident conclusion that 7 bees are more important than a human, and I will now start using that as the basis for important real-world decisions I am making” then… I would hear you out, and also honestly make sure I keep my distance from you, update toward you probably not being particularly good at reasoning, and, if you take it really seriously, maybe toward you being a bit unhinged.
So the prior analysis weighs heavily in my mind. I don’t think we have much good foundational grounding for morality that allows one to arrive at confident conclusions of this kind, that are so counter to basically all other moral intuitions and heuristics we have, and so if anyone does, I think that alone is quite a bit of evidence that something fishy is going on.
Hmm, I guess I think “something basically like hedonic utilitarianism, at least for downside” is pretty plausible.
Maybe a big difference is that I feel like I’ve generally updated away from putting much weight on moral intuitions / heuristics except with respect to forbidding some actions because they violate norms, are otherwise uncooperative, seem like the sort of thing which would be a bad societal policy, are bad for decision theory reasons, etc. So, relatively weak cases can swing me far because I started off being quite unopinionated without putting that much weight on moral intuitions (which feel like they often come from a source mostly unrelated to what I ultimately terminally care about).
I do agree that just directly using “Rethink Priorities says 15%” without flagging relevant caveats is bad.
A shitty summary of the case I would give would be something like:
It seems plausible we should be worried about suffering in a way which doesn’t scale (that much) with the size/complexity of brains in practice. Maybe the thing which is bad about suffering is pretty simple. E.g., as far as I can tell, the complexity of my thought doesn’t have huge effects on my suffering.
I think there is a case for some asymmetry between downside and upside with respect to complexity, at least in the regime of the biological brains we see in front of us.
If so, then maybe bees have the core suffering circuitry which causes the badness and this is pretty similar to humans.
Then, we have to aggregate this with other arguments for humans being much more important. The aggregation is super non-obvious (and naive averaging isn’t valid due to two envelope problems), but I feel like an intuition for being conservative about suffering points in favor of worrying about bee suffering if there is a chance it matters comparably to human suffering.
Overall, this doesn’t get me to 15%, more like 1% (with a bunch of the discount occurring in aggregation over different views), but 1% is still a lot. (This is all within the frame of the argument.)
I can imagine different moral intuitions (e.g. intuitions more like those of Tomasik) that get to more like 15% by having somewhat different weighting. I think these seem a bit strong to me, but not totally insane.
In practice, the part of my moral views which is compelled by this sort of thing ends up focused on longtermism rather than insect welfare.
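The two-envelope point in the aggregation step above can be made concrete with a toy calculation (the 50/50 split and the 0.01 and 1.0 weights are hypothetical numbers chosen for illustration, not figures from this discussion): the expected moral weight you compute depends on whether you average in human units or in bee units.

```python
# Toy illustration of the two-envelope problem in moral-weight aggregation.
# Hypothetical setup: we are 50/50 between two views of a bee's moral
# weight relative to a human: 0.01 or 1.0 (in "human units").
views = [0.01, 1.0]

# Averaging in human units: expected bees-per-human weight.
bee_per_human = sum(views) / len(views)  # (0.01 + 1.0) / 2 = 0.505

# Averaging in bee units: invert each view first, then average.
human_per_bee = sum(1 / v for v in views) / len(views)  # (100 + 1) / 2 = 50.5

# If naive averaging were a valid aggregation, these would be reciprocals.
# Instead, 1 / 50.5 is about 0.0198, not 0.505 -- a ~25x disagreement
# driven purely by the choice of numeraire.
print(bee_per_human)      # 0.505
print(1 / human_per_bee)  # ~0.0198
```

This is why the aggregation is "super non-obvious": the answer swings by more than an order of magnitude depending on which unit you hold fixed before taking expectations.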
(I’m not currently planning on engaging further and I’m extremely sympathetic to you doing the same.)
I’ve generally updated away from putting much weight on moral intuitions / heuristics expect with respect to forbidding some actions because they violate norms, are otherwise uncooperative, seem like the sort of thing which would be a bad societal policy, are bad for decision theory reasons, etc.
I am repeatedly failing to parse this sentence, specifically from where it becomes italicized, and I think there’s probably a missing word. Are you avoiding putting weight on what moral intuitions expect? Did you mean except? (I hope someone who read this successfully can clarify.)
oops, I meant except. My terrible spelling strikes again.