I think I was being pithy instead of clear; I apologize. I’m trying to emphasize that your criticism of this argument is substantially wrong: this is indeed not feasible, but that point ((a) in your enumeration) is the smallest of these criticisms and the only one that applies, and the error does not damage the implied critique of the original argument as I read it. This would actually accomplish a decent portion of the goals implied by this valuation of insect pain, and the “horrible consequences” would not actually be horrible on this account.
This argument against eating honey relies on a sort of social trust: you probably don’t accept that bee suffering is 15% as important as time-equivalent human suffering, but somebody else is saying that it is, and the consequences of that belief are really catastrophic. Rhetorically this makes sense and is completely valid. To paraphrase and generalize: “here’s a method of reasoning that you probably accept; here’s a conclusion it can reach; here’s how much uncertainty you can have before that conclusion no longer goes through; and the fact that somebody else who thinks like you and cares about the things you do believes these numbers to be correct is a good reason to trust this argument at least a small amount; so don’t eat honey.”
The implicit criticism here, as I read it, is something like: “sure, but these numbers are very obviously batshit. There’s a silent addendum if you keep thinking this way: ‘don’t eat honey’ is the bottom line, but the postscript is ‘also, this is only a hedge until we can, by whatever means available, completely eliminate non-reflective or semi-reflective life in the universe, even at the cost of all reflective life’, and if that conclusion actually does follow, then the social trust that compels me towards the original conclusion disappears.”
That is, there’s a straightforward counterargument to the original post, which is that the argument is completely numerical and the numbers make no sense. (You could also, as you note, take issue with the structure of the argument, but outside of nitpicks that I feel pretty confident are fixable, I do not: a calculation more-or-less of this form could actually compel me to seemingly bizarre conclusions if I bought the numbers.) The obvious response is that the conclusion is very strong, and if you think it’s at all plausible that you’re wrong, you should still stop eating honey to be sure. If you assign a normal amount of social credence to the post’s author, which you probably do if they clearly buy your moral and epistemic framework, you should probably give their argument enough credence to accept its conclusion even if you don’t think it’s right. But this extension shows that you should probably not extend that much social credence.