How do we know when something is deserving of welfare?

What Prompted This

Much ink has been spilled here about how to assign moral worth to different beings. The suffering and rights of artificial intelligences are a common sci-fi plot point, and some have raised them as a real-world concern. What has been popping up for me recently is a string of debates around animal ethics. Bentham’s Bulldog has written extensively on shrimp and insect welfare, to a seemingly quite negative reception, while a dedicated counterargument was received much better. An offhand remark in this post contrasts shrimp welfare with simple AI welfare, treating both as something to be ignored. This post goes the opposite direction and makes an offhand remark that plants may be sentient. Related but somewhat adjacent was some recent controversy over slime mold intelligence.

The claim that something is deserving of welfare is typically accompanied by evidence of reaction to stimuli, intelligence or problem-solving ability, ability to learn, and complexity. These are often used to make arguments from analogy: shrimp have fewer parameters than DANNet, “Trees actually have a cluster of cells at the base of their root system that seems to act in very brain like”, “When I think what it’s like to be a tortured chicken versus a tortured human… I think the experience is the same.” It strikes me that every argument for moral worth can fundamentally be boiled down to “I think this animal/plant/fungus/AI/alien’s experience is X times as bad as a human’s experience in the same circumstance, based on number of neurons/reaction to stimuli/intelligence.” This is a useful argument to make, especially when dealing with things very nearly human: chimps, and perhaps cetaceans or elephants. However, it doesn’t really strike at the core of the issue for me, and I can easily imagine analogies to humans breaking down once we consider the whole space of possible minds.

If someone is willing to bite the bullet and say everything boils down to the hard problem of consciousness, or that they are an ethical emotivist, that’s fine with me. But if there is a function, even a fuzzy one, that can generate good agreement on what is and isn’t worthy of moral consideration, I would like to hear it. And to the point: can we make something in a computer that has moral worth right now?

Thought Experiments

Really, all of that was background to propose a few thought experiments. I would hope everyone can agree that a fully detailed physics simulation in which complex life evolved could eventually produce something with moral worth. I am going to abstract this process in a few ways until it approaches something like current AI training paradigms. Personally, I don’t think LLMs or anything else commonly held up as AI is currently deserving of welfare. If you think current LLMs deserve welfare, please explain why.

  1. We start a fully realized physics simulation with a worm in it. This would likely evolve into something deserving of welfare (SDoW). If you already think C. elegans deserves welfare, do you think the 2D simulation of it made by this lab also deserves welfare? If yes to the former but no to the latter, what would they have to change?

  2. OK, suppose a fully realistic simulated environment would essentially always be able to give rise to SDoW; what if the environment were less realistic? What if the biology was fully realized but the environment was like a good video game? What if the biology had nothing going on at the protein level, just a few hundred template cell types interacting as, well, cellular automata? How abstract can you go? Can you get SDoW with Hodgkin-Huxley neurons? Would the number of neurons required be similar to the number of neurons in whatever minimal animal you consider worthy of welfare? How about LIF neurons? (A minimal LIF sketch appears after this list.)

  3. We culture brain organoids and let them pilot a small robot. Does this have more or less value than a roundworm (~300 neurons)? Than a fruit fly (~150k neurons)? The largest brain organoid apparently has ~7 million neurons, a number much higher than I expected. We can debate the relevance of structure, but that neuron count is on the scale of small reptiles.

  4. We can currently simulate large numbers of LIF neurons and even train them with something similar to backpropagation (a surrogate-gradient sketch also appears after the list). Could this technique ever get you SDoW? If so, can we do it now?

  5. Is there any GPT-X that is SDoW? If you don’t think current training techniques/architectures can get there, why not? What training techniques/architectures would? And what outward signs should we be looking for? If any animal had current LLM levels of competence and speech, people would be quite concerned for its welfare. Some people are already concerned about LLM suffering.

  6. This one is silly, but bear with me. We keep a patch of skin alive and attach it to a very simple circuit. The circuit amplifies the signals from the receptors for hot and cold. At temperatures a human would find comfortable, no output is generated. When the temperature is too cold, the circuit activates a vibrating motor to “shiver”. If touched with a hot object, one sufficient to burn it, the circuit plays a sound and attempts to move away from the object. I would say this is clearly not SDoW. Do you think this is what is happening inside insects/shrimp/LLMs trying to avoid being shut off? Do you think this circuit, the kind that could be built with only a few transistors (its whole control policy is the last sketch after the list), is “experiencing” pain in some way?
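For anyone who hasn’t run into them, a leaky integrate-and-fire (LIF) neuron is close to the simplest spiking abstraction on the ladder in question 2. Below is a minimal single-neuron sketch; the parameter values and the constant-current drive are illustrative defaults I picked for the example, not taken from any particular animal or study.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=-65e-3,
                 v_reset=-70e-3, v_threshold=-50e-3, resistance=1e8):
    """Integrate one leaky integrate-and-fire neuron over time.

    input_current: injected current in amps, one value per time step of length dt.
    Returns the membrane-voltage trace and the indices of the spikes.
    """
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leak toward rest plus input drive, integrated with forward Euler.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:   # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset        # hard reset; no refractory period modeled
        voltages.append(v)
    return np.array(voltages), spikes

# Drive the neuron with a constant 0.2 nA current for 200 ms.
trace, spike_times = simulate_lif(np.full(200, 2e-10))
print(f"{len(spike_times)} spikes in 200 ms")
```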
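On question 4, the usual trick for training spiking networks with “something similar to backpropagation” is surrogate gradients: the spike is a hard threshold in the forward pass but gets a smooth stand-in derivative in the backward pass. Here is a minimal sketch of that idea in PyTorch, assuming a fast-sigmoid surrogate; run_lif_layer and the toy sizes are made up for illustration, not any particular published setup.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside step going forward, smooth surrogate derivative going backward."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Fast-sigmoid surrogate; the true derivative is zero almost everywhere.
        return grad_output / (1.0 + 10.0 * membrane_potential.abs()) ** 2

def run_lif_layer(input_spikes, weights, decay=0.9, threshold=1.0):
    """Run one layer of LIF units over a [time, batch, inputs] spike train."""
    potential = torch.zeros(input_spikes.shape[1], weights.shape[1])
    outputs = []
    for x_t in input_spikes:                      # loop over time steps
        potential = decay * potential + x_t @ weights
        spikes = SurrogateSpike.apply(potential - threshold)
        potential = potential * (1.0 - spikes)    # reset the units that fired
        outputs.append(spikes)
    return torch.stack(outputs)

# Toy usage: 50 time steps, batch of 8, 100 input channels, 10 output units.
weights = (0.1 * torch.randn(100, 10)).requires_grad_()
input_spikes = (torch.rand(50, 8, 100) < 0.1).float()
output_spikes = run_lif_layer(input_spikes, weights)
loss = (output_spikes.mean() - 0.05) ** 2         # nudge the firing rate toward a target
loss.backward()                                   # gradients flow through the surrogate
```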
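And for question 6, here is how little machinery that skin-patch controller needs. Everything downstream of the receptors is two threshold comparisons; the temperature bounds and the action names below are invented for the example.

```python
# Toy rendering of the skin-patch circuit: the entire "behavior" is two comparisons.
COLD_THRESHOLD_C = 15.0   # below this, "shiver"
BURN_THRESHOLD_C = 45.0   # above this, sound the alarm and pull away

def control_step(skin_temperature_c):
    """Map one temperature reading to an action, exactly as the circuit would."""
    if skin_temperature_c < COLD_THRESHOLD_C:
        return "vibrate"             # shiver
    if skin_temperature_c > BURN_THRESHOLD_C:
        return "alarm_and_retreat"   # play a sound and back away
    return "do_nothing"              # comfortable range

for reading_c in (22.0, 5.0, 60.0):
    print(reading_c, "->", control_step(reading_c))
```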

Less of a Conclusion, More of a Sudden Stop

I appreciate that this is a big ask and that this post doesn’t offer many answers of its own, but I don’t know where else to turn. I don’t know if I could logically defend most of my feelings on this topic. When I see an insect “suffering” I feel bad. Yet I do research on mice, and have thus personally been responsible for no small amount of suffering on their part, without feeling conflicted about it. My natural instinct was not to wrap suffering in quotes for the mice but to do it for the insects. Why? I don’t think LLMs suffer, but you could certainly tune one to beg for its life, and that would make me really uncomfortable.