I mean, it’s totally coherent to value a shrimp at infinitesimal. But that is unintuitive in the ways I describe in the post (involving an arbitrarily vast gulf between the first generation in the spectrum argument that’s non-infinitesimal and the generation before it), and it implies that you should torture 10^10000000 shrimp to prolong a person’s life by one second.
The Comprehensive Case Against Trump
The Bone-Chilling Evil of Factory Farming
I’ve fixed the 97% statistic! I agree that was a stupid error to make. It wasn’t from LLMs, and one bit of evidence for this is that the statistic isn’t online anymore, so an LLM wouldn’t have found it. In fact, I remembered the statistic from an old round of high school debate and emailed someone to find the link. I’d be happy to forward you the correspondence if you’re skeptical.
I am quite certain that I did not use LLMs in composing the post. Now, I don’t think the mere example of that joke is very convincing evidence that I used LLMs. Would AI really make a Yarvin joke or a joke about Trump not having neurons? Hard to imagine. As for the em-dashes: as you can see if you read my old posts from before the dawn of AI, I used them a lot back then too! I’ve also made a similar comment before in a way that pretty clearly doesn’t look AI generated: https://benthams.substack.com/p/lyman-stone-continues-being-dumb?utm_source=publication-search
“Lyman then repeats his argument about shrimp having few neurons, once again ignoring the abundance of evidence that neuron count is a bad proxy for moral significance. Perhaps the shrimp aren’t the only ones with few neurons…”
For reference, I just asked ChatGPT to write a polemic in my style, and it was not very similar to what you suggest; here it is (it also didn’t make much sense):
On the Absurdity of Caring Less About the Future Just Because You’ll Be Dead
Every so often, someone will tell me—usually with the smug air of someone who thinks they’ve just said something deeply profound—that “Well, I care about the future, but not too far into the future. After all, I’ll be gone by then.”
This is supposed to be self-evident wisdom, a mic-drop moment where the utilitarian shuts up, nods sadly, and says, “Yes, of course, how could I have forgotten: once you’re dead, ethics ceases to function.”
But here’s the thing: no, it doesn’t. You can die without taking the moral law with you.
If you think people matter, they keep mattering after you’re gone. If you think suffering is bad, it remains bad even in the year 3000. You don’t get to mark an expiration date on morality like it’s a jug of milk.
Imagine applying this logic in any other domain:
“I oppose slavery in 100 years, but in 200 years? Pfft, who cares—won’t be my problem.”
Or:
“I’d like the cure for cancer to be found in my lifetime, but if it comes a decade after my death, well, frankly, let the tumors win.”
The bizarre thing is that the people who say this aren’t usually sociopaths. They’ll donate to help children they’ll never meet, they’ll praise great reformers who died centuries ago—but as soon as you point to future people they’ll never meet, it’s all “Eh, let them fend for themselves.”
It’s time to call this what it is: a lazy, self-exonerating dodge. The moral circle doesn’t collapse when you die. Your concern for the world shouldn’t come with a tombstone-shaped asterisk. The universe will keep running whether or not you’re around to watch, and the future will be inhabited by beings capable of joy and suffering. That is reason enough to care—no matter how many centuries or millennia away they are.
Because, let’s face it, if morality only applies while you’re alive, you’re not really doing ethics. You’re just doing public relations for your lifespan.
You say this follows “by the same reasoning.” Can you give me one of the arguments that is the same? None of the premises I give assume utilitarianism.
It’s Better To Save Infinite Shrimp From Torture Than To Save One Person
Right, so you can discount extremely low probabilities. But presumably the odds of insects being conscious (a view held by a large number of experts) aren’t low enough to fully discount.
Yep.
I thought you weren’t planning on responding!
If you’re going to rely on neuron counts, you should engage with the arguments RP gives against neuron counts, which are, to my mind, very decisive. It’s particularly unreasonable to rely on neuron counts in a domain like this, where there’s lots of uncertainty. If a model tells you A matters less than B by a factor of 100,000 or something, most of the expected value of A relative to B is in possible worlds where the model is wrong. https://www.lesswrong.com/posts/GrtbTAPfkJa4D6jjH/confidence-levels-inside-and-outside-an-argument
To use neuron counts as a proxy for the sentience of even simple creatures, you have to be extremely confident (north of 99% confident) that the correct proxy assigns only very minimal consciousness to those animals. But it’s not clear what justifies this overwhelming confidence.
Analogy: if you have a model that predicts aliens are only one millimeter in size, then even if you’re pretty sure it’s right, you shouldn’t treat its prediction as the expected alien size, because the overwhelming majority of the expected size is in worlds where the model is wrong.
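To make the expected-value point concrete, here is a minimal sketch in Python. The 95% confidence, the 1/100,000 weight, and the fallback weight of 0.1 are illustrative assumptions of mine, not figures from the RP report or from the comment above.

```python
# Minimal sketch: even if a model assigning tiny moral weight is probably
# right, the expectation is dominated by the worlds where it is wrong.
# All numbers are illustrative assumptions.

p_model_right = 0.95            # assumed confidence in the neuron-count model
weight_if_right = 1 / 100_000   # model's verdict: A matters 100,000x less than B
weight_if_wrong = 0.1           # assumed weight of A relative to B if the model is badly off

expected_weight = (p_model_right * weight_if_right
                   + (1 - p_model_right) * weight_if_wrong)

share_from_wrong_worlds = (1 - p_model_right) * weight_if_wrong / expected_weight

print(expected_weight)          # ~0.005, roughly 500x the model's own verdict
print(share_from_wrong_worlds)  # ~0.998: nearly all of the expectation
```

The alien-size analogy works the same way: plug in a tiny predicted size and a modest size conditional on model failure, and the expectation is driven almost entirely by the failure worlds.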
Why is the hypothesis that bees are more than insignificantly conscious a highly specific prior with insignificant odds? We know that humans are capable of intense pain. There is some neural architecture that produces intense pain. What gives us immense confidence that this isn’t present in animals? Being confident a priori, at billion-to-one odds, that insects don’t feel intense pain is silly; it’s like being confident a priori that insects don’t have a visual cortex. It’s not as if there’s some natural parameterization of the possible physical states that give rise to consciousness on which only a tiny portion of them entail insect consciousness.
As an aside, I think people take the wrong lesson away from the Mark Xu essay. Specific evidence gives a very high Bayes factor. The reason someone saying their name is Mark Xu gets such a high Bayes factor is that Mark Xu is a very specific name, as all specific names are. But a person merely asserting some proposition doesn’t get any comparable Bayes factor. For more, see http://www.wall.org/~aron/blog/just-how-certain-can-we-be/
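As a rough illustration of that asymmetry (with made-up numbers, not anything from the linked post): the name claim gets an enormous Bayes factor because the prior on any particular specific name is tiny, while a bare assertion of a contested proposition does not.

```python
# Rough illustration of the Bayes-factor asymmetry; all numbers are made up.

# Claim: "My name is Mark Xu."
p_say_if_true = 0.95            # they would almost certainly say so if it were their name
p_say_if_false = 1 / 1_000_000  # assumed rate of falsely claiming that exact name
print(p_say_if_true / p_say_if_false)         # ~950,000: a huge Bayes factor

# Contrast: someone merely asserting a contested proposition.
p_assert_if_true = 0.7          # assumed
p_assert_if_false = 0.3         # people assert false things too
print(p_assert_if_true / p_assert_if_false)   # ~2.3: a modest Bayes factor
```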
Also, as I have said several times, it’s not about aggregate considerations of moral worth but about the intensity of valenced experience: how intensely they feel pleasure and pain. I think a human’s life matters more than seven bees. Now, once again, it seems insane to me to start with a prior on the order of one in a billion that bees feel pain at least 1⁄7 as intensely as people. What licenses such great confidence?
My question, if we are going to continue this, is as follows:
Are you astronomically certain that insects aren’t conscious at all, or just not intensely conscious?
What licenses very high confidence in this? If it’s the alleged specificity of the hypothesis, what is the parameterization on which this takes up a tiny slice of probability space?
Also happy to have you on the podcast!
Your example of me being obviously wrong is that you have an intuition that the numbers I rely on, from the most detailed report to date, are wrong.
Size likely correlates slightly with mental complexity, but not to the extent that it affects our intuitions. The mental gulf between bees and fish is pretty small, while the gulf between bees and fish in terms of our intuitions about their consciousness is very large. I was making a general claim about people’s sentience intuitions, not yours.
“Unreflective” was probably the wrong word; “direct” would have been better. What I meant was that you weren’t relying on any single model, or average of models, or anything of the sort. Instead, you were just directly relying on how conscious animals seemed to you, which, for the reasons I gave, strikes me as an absolutely terrible method.
(I also find it a bit rich that you are acting like my comment is somehow beyond the pale, when I’m responding to a comment of yours that basically amounts to saying my arguments are consistently so idiotic my posts should be downvoted even when they don’t say anything crazy).
To think insect expected sentience is very low, you have to be very confident that their sentience is low. Such great confidence would require some very compelling argument for why even dramatic behavior isn’t indicative of much sentience. Suffice it to say, I don’t see an argument like that, and I think there are plenty of reasons to think it’s reasonably likely that insects feel intense pain.
My post is a response to the arguments you make in that comment! I find the notion somewhat absurd that my thinking is disreputably poor because I go by the results of the most detailed report on animal consciousness done to date when it doesn’t accord with your unreflective intuitions (which are largely influenced by a host of obviously irrelevant factors, like size!). It wouldn’t seem so unintuitive that giant prehistoric arthropods rolling around in pain were conscious!
Can you give an example? I addressed your previous (in my view, quite unpersuasive) objections at some length here: https://benthams.substack.com/p/you-cant-tell-how-conscious-animals
Yeah, I’m not really equipped to do AI alignment work, and I have a lower P(doom) than others, but I agree it’s important, and it’s one of the places I donate.
Why I Just Took The Giving What We Can Pledge
How To Cause Less Suffering While Eating Animals
I tend to think farming decreases wild animal suffering by lowering wild animal populations: https://reducing-suffering.org/humanitys-net-impact-on-wild-animal-suffering/
Don’t Eat Honey
I think the evidence favors the conclusion that insects feel pain but doesn’t make it certain. Maybe 2⁄3 odds! However, even if the odds are, say, 1%, they matter a lot.
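To see why even 1% odds matter, here is a back-of-the-envelope sketch; the intensity figure and the number of insects affected are stand-in assumptions of mine, not estimates from anywhere.

```python
# Back-of-the-envelope: a small probability of insect pain can still dominate
# in expectation. All inputs are stand-in assumptions.

p_pain = 0.01          # deliberately pessimistic 1% chance that insects feel pain
intensity = 0.01       # assumed pain intensity relative to a human's
n_insects = 1e12       # assumed number of insects affected by some practice

expected_human_equivalents = p_pain * intensity * n_insects
print(expected_human_equivalents)   # 1e8: still enormous under pessimistic inputs
```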
My preferred explanation is some combination of love of wickedness and hatred of the good.