5. Moral Value for Sentient Animals? Alas, Not Yet

Part 5 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.

TL;DR In Parts 1 through 3 I discussed principles for ethical system design, and the consequences for AIs and uploads, and in Part 4, I discussed a principled way for us to grant moral value/weight to a larger set than just biological humans: all evolved sapient beings (other than ones of types we cannot form a cooperative alliance with). The history of liberal thought so far has been a progressive expansion of the set of beings accorded moral value (starting from just the set of privileged male landowners having their own military forces). So, how about animals? Could we expand our moral set to all evolved sentient beings, now that we have textured vegetable protein? I explore some of the many consequences if we tried this, and show that it seems to be incredibly hard to construct and implement such a moral system that doesn’t lead to human extinction, mass extinctions of animal species, ecological collapses, or else to very ludicrous outcomes. Getting anything close to a good outcome clearly requires at least astonishing levels of complex layers of carefully-tuned fudge factors baked into your ethical system, and also extremely advanced technology. A superintelligence with access to highly advanced nanotechnology and genetic engineering might be able to construct and even implement such a system, but short of that technological level, it’s sadly impractical. So I regretfully fall back to the long-standing solution of donating second-hand moral worth from humans to animals, especially large, photogenic, cute, fluffy animals (or at least ones visible with the naked eye) because humans care about their well-being.
[About a decade ago, I spent a good fraction of a year trying to construct an ethical system along these lines, before coming to the sad conclusion that it was basically impossible. I skipped over explaining this when writing Part 4, assuming that the fact this approach is unworkable was obvious, or at least uninteresting. A recent conversation has made it clear to me that this is not obvious, and furthermore that not understanding this is both an x-risk, and also common among current academic moral philosophers — thus I am adding this post. Consider it a write-up of a negative result in ethical-system design.
This post follows on logically from Part 4, so is numbered Part 5, but it was written after Part 6 (which was originally numbered Part 5 before I inserted this post into the sequence).]
Sentient Rights?
‘sentient’: able to perceive or feel things — Oxford Languages
The word ‘sentient’ is rather a slippery one. Beyond being “able to perceive or feel things”, the frequently-mentioned specific of being “able to feel pain or distress” also seems rather relevant, especially in a moral setting. Humans, mammals, and birds are all clearly sentient under this definition, and also in the common usage of the word. Few people would try to claim that insects weren’t: bees, ants, even dust mites. How about flatworms? Water fleas? C. elegans, the tiny nematode with a carefully-mapped nervous system of exactly 302 neurons (some of which seem to be ‘pain neurons’)? How about single-celled amoebae, or bacteria — they have at least some senses, amoebae are even predators? Plants react to stimuli too…
Obviously if we’re going to use this as part of the definition of an ethical system that we’re designing, we’re going to need to pick a clear definition. For now, let’s try to make this as easy as we can on ourselves and pick a logical and fairly restrictive definition: to be ‘sentient’ for our purposes, an organism needs to a) be a multicellular animal (a metazoan), with b) an identifiable nervous system containing multiple neurons, and c) use this nervous system in a manner that at least suggests that it has senses and acts on these in ways evolved to help ensure its survival or genetic fitness (as one would expect from evolutionary theory). So it needs to be a biological organism capable of agentic behavior that is implemented via a neural net. This includes almost every multicellular animal [apart from (possibly) placozoa and porifera (sponges)]. This is just a current working definition: if we want to adjust it later to be more permissive or restrictive, or to make things easier or harder on ourselves, that’s an option.
As I discussed in Part 4, for any sapient species, we have a strong motive to either grant them roughly equal moral worth, or else, if that is for some reason not feasible, drive them extinct in a war of annihilation — since if we do anything in between the two, sooner or later they are going to object, using weapons of mass destruction. This very might-makes-right (and indeed rather uncouth) argument doesn’t apply to non-sapient sentient animals (that have not been uplifted to sapience): they’re never going to have nukes, or know how to target them. So granting moral worth to sentient animals, or not, is a choice that we are free to make, depending on what leads to a future society that we humans would more like our descendants to live in.
As I covered in Part 1, an ethical system needs to be designed in the context of a society. Cats, dogs and other “fur-babies” that live in our homes as pets arguably form an adopted part of our society. Domestic food animals in factory farms possibly less so, though they certainly still contribute to its well-being. Multicellular organisms such as nematodes too small to see without a hand lens living in the mud at the bottom of the ocean, definitely not so much. What happens if our society generously does the extrapolated-liberal thing here, and grants them all moral worth anyway?
The Tyranny of the Tiny
Consider ants. It’s estimated that there are around 20 quadrillion ants on Earth. If we grant each of them equal moral worth to a human,[1] as feels like the fair thing to do, then in the optimizing process their combined utility outweighs the utility of the entire human population by a factor of a few million. Now our own species’ collective well-being is just a rounding error in the ethical decisions of the Artificial Superintelligence (ASI) that we very generously designed and built for the ants. Thus the importance of providing food, healthcare services, retirement care, birth control, legal representation, transportation, local news, entertainment etc. etc. for the human population is a rounding error on doing all of these same things for each one of 20 quadrillion ants! Clearly the ASIs are going to need a lot of tiny nanotech bots to provide food, healthcare services, retirement care, birth control, legal representation, transportation, local news, entertainment etc. etc. just for ants, let alone for all the other sentient animal species, many of them tiny and numerous. We’re basically attempting to construct a planetary-scale zoo, one with utopian human-level care for each and every sentient inhabitant, down to all the ones you need a magnifying glass to see (indeed, most especially them, since they massively outnumber the ones you can see). You should expect to go to jail for antslaughter if you accidentally step on or inhale one, let alone intentionally murdering a gnat or a bedbug. We won’t be driving or flying vehicles any more: bugsplatter is morally equivalent to driving into a crowd.
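To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The ~20 quadrillion ant figure is the estimate quoted above; the ~8 billion human population is an assumption I am adding:

```python
# Equal moral weight per individual: aggregate moral weight is just a head-count.
# The ant estimate is from the text; the human population is an assumed ~8 billion.
ANTS = 20e15
HUMANS = 8e9

print(f"Ants outvote humans by a factor of ~{ANTS / HUMANS:,.0f}")  # ~2,500,000
```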
It gets worse. Even in a so-called “post-scarcity” civilization with access to ASI, some resources are limited: carbon, oxygen, hydrogen, nitrogen, phosphorus, sunlight, square meters of land area, for example. Even if we expand out into the solar system or further, those limits become larger, but there’s still a limit. Basic resource limitations on matter, energy, and space still apply. Very clearly, the resources sufficient to support one human being comfortably could comfortably support many millions of ants: their individual nutritional and other resource requirements are roughly seven orders of magnitude smaller than ours. Humans are simply enormous resource hogs, compared to ants. So, in the longer term, the only Utilitarian-ethically acceptable solution is to (humanely) reduce the human population to zero, to free up resources to support tens of quadrillions more ants (or, if we want to also ban species extinction, keep a minimal breeding population of humans: with artificial insemination of DNA synthesized from genetic data, inbreeding could be avoided, so a planetary breeding population of O(100) should be quite safe from random fluctuations). Except, of course, that by the standards of sentient organisms, ants are still quite large, so the ASIs should get rid of them as well, in favor of an even larger number of even tinier sentient organisms. [This ethical system design problem has been called “Pascal’s Bugging”, and there is a certain analogy.]
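Here is a toy sketch of why a total-utility maximizer with a fixed resource budget reallocates essentially everything away from humans. The ~10^7 human-to-ant resource ratio is the rough figure from the paragraph above; the budget size and the "one utility unit per comfortably-supported individual" rule are placeholder assumptions of mine, intended only to show the shape of the optimization:

```python
import numpy as np

# Toy model: a fixed resource budget split between supporting humans and ants,
# with equal moral weight per individual. All numbers are illustrative placeholders
# except the ~1e7 human/ant resource ratio mentioned in the text.
BUDGET = 1.0e16     # arbitrary resource units
HUMAN_COST = 1.0e7  # resources to support one human comfortably
ANT_COST = 1.0      # resources to support one ant

def total_utility(frac_to_humans: float) -> float:
    humans_supported = BUDGET * frac_to_humans / HUMAN_COST
    ants_supported = BUDGET * (1.0 - frac_to_humans) / ANT_COST
    return humans_supported + ants_supported  # one utility unit each

fracs = np.linspace(0.0, 1.0, 101)
best = fracs[int(np.argmax([total_utility(f) for f in fracs]))]
print(f"Utility-maximizing share of resources spent on humans: {best:.2f}")  # -> 0.00
```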
Thus, any approach that satisfies the human instinct for fairness by giving equal moral worth to all sentient organisms will inevitably lead to human extinction (other than perhaps a breeding population of O(100) survivors, if you ban extinction). I’m sorry, but I’m afraid that it is a simple, easily mathematically-deducible fact that PETA’s ethical platform is an x-risk. More concerningly, I am told that this ethical position is also the broad consensus among current academic moral philosophers, so if we build ASI and ask them to “just go do Coherent Extrapolated Volition”, they will definitely find a lot of this in the recent Ethical Philosophy literature to extrapolate from.
Let’s Try Fudge Factors
OK, that failed spectacularly. So, maybe I don’t actually care as much about an individual ant as I do a human. They are quite small, after all, and not individually very smart. We’re talking about rights for organisms with neural nets here. Could we favor smartness in some principled way? We don’t really understand neuroscience quite well enough yet to be certain about this, but suppose that we knew that synapse count was roughly comparable (up to some roughly constant factor) to the parameter count of an artificial neural net (so ignoring a few possibly-minor effects like neurotransmitter diffusion flows, synapse sparsity, neuronal type diversity, information processing in dendrites, and so forth). Neural net scaling laws are pretty universal, and basically say that the impressiveness of a neural net’s behavior is proportional to the logarithm of the parameter count: each time you double the parameter count, you see a similar-sized increase in capabilities. Humans have O(10^14) synapses, whereas ants have only O(10^8) synapses (roughly comparable to a T5-small model, ignoring questions such as sparsity). So if we use a moral weighting system proportional to the logarithm of synapse count, each human would get ~14 units-of-moral-worth and each ant only ~8, and they would still outvote us by more than a factor of a million. (Changing the base of the logarithm doesn’t help, it just rescales the denomination of a single unit-of-moral-worth.) So that didn’t help much.
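The same head-count arithmetic, now with the log-of-synapse-count weighting (populations as above, so the ~8 billion human figure is again my assumption):

```python
import math

# Moral weight proportional to log10(synapse count): ~14 for humans, ~8 for ants.
human_total = 8e9 * math.log10(1e14)   # assumed population x per-individual weight
ant_total = 20e15 * math.log10(1e8)

print(f"Ants still outvote humans by ~{ant_total / human_total:,.0f}x")  # ~1.4 million
```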
In fact, not only are logarithms useless, but anything non-linear doesn’t work. The only way to avoid small animals outvoting large ones in the division of resources, or vice versa, is to scale individual moral value according to something that scales linearly with a creature’s resource requirements, such as average adult weight, or calorific intake averaged across the creature’s lifespan, or something along those lines. Yes, you’re quite correct, this is a large and blatant fudge factor arranged simply to get the outcome we want, with no other logically supportable moral justification from any kind of human moral intuition — and we’ll be seeing a lot more of those before we’re done. Even that is probably not good enough to avoid perverse incentives under ASI optimization — in practice we would need to estimate the consumption rate of every resource that a member of this species needs: each chemical element, energy, space, more abstract resources like peace or quiet, and so forth, price each by how scarce it is, and total up a per-individual budget based on a whole basket of resources tailored to the needs of an average member of this species across their lifetime. Repeat this for tens of millions of species, many of them not yet discovered. What we need for stability is that each species has the same utility per unit value of their basket of consumed resources. That’s the only solution that avoids the utility maximization process having a preference about trading off the population of one species against another to increase the total utility achievable with the available resources. Note that if the available basket of resources changes for some reason, we either have to redo this, or accept that now the utility maximization process will want to start altering the species mix.
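A minimal sketch of the bookkeeping this implies, per species. Everything here (the resource list, the scarcity prices, the lifetime baskets) is an invented placeholder; only the shape of the calculation, moral weight proportional to the scarcity-priced cost of an average member's lifetime resource basket, comes from the paragraph above:

```python
# Per-individual moral weight proportional to the scarcity-priced cost of an
# average member's lifetime resource basket. All figures below are invented
# placeholders for illustration, not real biological or economic data.
scarcity_price = {"carbon": 1.0, "nitrogen": 3.0, "phosphorus": 10.0, "land_m2": 5.0}

lifetime_basket = {
    "human": {"carbon": 1e6, "nitrogen": 2e5, "phosphorus": 1e4, "land_m2": 1e4},
    "ant":   {"carbon": 1e-1, "nitrogen": 2e-2, "phosphorus": 1e-3, "land_m2": 1e-3},
}

def moral_weight(species: str) -> float:
    return sum(scarcity_price[r] * amt for r, amt in lifetime_basket[species].items())

for s in lifetime_basket:
    print(f"{s}: weight {moral_weight(s):.3g}")
# With weight linear in basket cost, utility per unit of resources is equal across
# species, so the optimizer gains nothing by trading one species' population
# against another's.
```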
Votes For Man-Eating Tigers — and Worse
Eliding a vast number of details, let’s assume we have found some way to scale the moral weight given to each member of a species that scales roughly with the physical weight/resource consumption needs of members of that species, so as not to have our ASIs overly favor smaller or larger animals. Thus our civilization is expending of the order of one ten-millionth of the resources on the well-being of an individual ant as on any individual animal around the size of a human. To put that in perspective, suppose standards of living had gone up in this more advanced civilization, so that the average annual expenditure on the well-being of an individual human was O($1,000,000), then we’d also be spending O(10 cents) per year on each ant (for a total of O($8 quadrillion) a year on humans and O($2 quadrillion) a year on ants, assuming planetary populations of each are at about current levels). So we’re probably not providing ants with individualized entertainment channels, and, unless we have a way to very efficiently produce an awful lot of tiny nanotech nursebots, probably only with basic healthcare, more like public health for ant nests, plus perhaps short-range mass transportation: some sort of tiny conveyor belts, perhaps.
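Checking those totals (same assumed populations as before; the $1,000,000 per human and the linear 1e-7 scaling are the figures from the paragraph above):

```python
# Budget scales linearly with resource needs: an ant gets ~1e-7 of a human's budget.
HUMANS, ANTS = 8e9, 20e15            # assumed populations, as above
human_budget = 1_000_000             # $/year per human, from the text
ant_budget = human_budget * 1e-7     # ~$0.10/year per ant

print(f"Humans: ${HUMANS * human_budget:.0e}/year")  # ~8e+15, i.e. ~$8 quadrillion
print(f"Ants:   ${ANTS * ant_budget:.0e}/year")      # ~2e+15, i.e. ~$2 quadrillion
```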
Now let’s consider predators and prey. Generally predators are no more than a few times larger or smaller than their prey, so they’re going to have roughly equivalent moral weight. Any form of utility function for a prey species is obviously going to object very strongly to them being eaten. So either predators need to go extinct (possibly apart from a minimal breeding population of O(100) individuals), or else we need to find some way to feed them textured vegetable protein, vat-grown meat, the recently-deceased remains of elderly members of their prey species, or some other non-sentient foodstuff, carefully tailored and balanced for their carnivorous nutritional needs, and appetizing enough to them that they’ll actually eat it. (For many carnivorous snakes, this requires the food to be wriggling — obviously without it having a nervous system.) Note that this is not what we do with pet cats, or carnivores in zoos, all of whom are normally fed real animal protein from real dead animals killed for this purpose. (Or for snakes, real live animals, since they refuse to eat dead ones.)
Then, after we’ve fed the predators, we still need to figure out how to stop many of them from killing members of prey species anyway, even though they’re not hungry. Some carnivores, such as lions, only hunt when hungry (for good and sufficient evolutionary reasons); others, like housecats, will hunt even when well-fed (likewise, for good evolutionary reasons). Many simple-minded carnivores tend to attack anything prey-like that they can catch, unless they are currently stuffed too full to eat. We’re going to need a lot more small peacekeeper bots, following most of the carnivores around making sure they can’t actually catch anything — and when I say ‘carnivore’, don’t think ‘lion’, think beetle larvae a quarter inch long, or ~1mm copepods or water fleas.
Next, consider species of internal parasites and their natural hosts. Internal parasites are of course generally smaller than their hosts, so will be somewhat outvoted by them. But an internal parasite isn’t going to be able to live inside textured vegetable protein, gradually eating its way through it. To keep them alive, we’re going to basically need to vat-grow living tissue from its natural host, genetically modified to not have any functional neurons, inside some sort of heart-lung-tissue-culture machine, for the parasite to live inside. Even by zoo standards, that’s more like a cross between a critical intensive care ward and cloning transplant organs. Zoos don’t generally intentionally keep parasites, it’s too hard/cruel.
OK, so if that’s too much effort, maybe we just send all the parasitic species extinct? Most of them have life-cycles that are truly disgusting. I’ve never heard anyone breathe a word of protest about the fact that Jimmy Carter (a man widely suspected of being too kindly and honest to be an effective US President) has been working diligently on this and has, near the end of his life, almost succeeded in driving extinct the sentient species Dracunculus medinensis: that’s the guinea-worm, a 2½-foot long parasitic nematode worm that burrows through human flesh, injuring and disabling its human host, for many months, before inflicting agonizing pain to drive them to soak their limb in the river so that the worm can release its young into the local water supply (where they go on to do equally nasty things to water fleas, before returning to humans). Though they don’t afflict humans, some of the things that certain parasitic wasp larvae do to their hosts make even the guinea-worm look considerate; I’m certain their hosts won’t miss them. While we’re at it, let’s get rid of the deadliest animal in the world, Aedes aegypti: the yellow-fever-mosquito, or at least all the diseases it carries.
However, whatever we decide to do about parasites, just letting them go extinct isn’t a viable option for predators. Not only would we no longer have cool-looking predators to film documentaries about, they also have strong, complex effects on the ecosystems they’re part of. When humans eliminated wolves from most of Europe and North America, their prey species, such as deer, exploded in population and had to be kept down by hunting (not that we objected to the work, as venison is tasty). However, the presence of wolves changes deer’s behavior and grazing patterns in ways that human hunters don’t, especially for plants near water sources and their likelihood of eating the bark off tree saplings. Many plant species, and the insects and other animals dependent on them, went extinct, or almost so, forest distributions changed, and the entire ecology of the food webs across much of two continents went out of whack. When wolves were finally reintroduced in many still-wildish places, insect and plant species that hadn’t been seen in a century or more reappeared from obscurity (and doubtless many other extinct ones didn’t). This is bad enough on land, among warm-blooded animals where each level in the food web is an order of magnitude smaller than the one below. But for cold-blooded animals in the sea, where that ratio is less than two, it’s far more dramatic. Almost all sentient species big enough to see in the ocean are carnivores, that feed on smaller carnivores, that feed on even smaller carnivores, down to carnivores almost too small to see that live off herbivorous zooplankton that live on phytoplankton or predatory or partially-predatory single-celled species. So in aquatic ecosystems “how about we just drop the carnivores?” leads to an ecosystem where almost everything left is too small to see.
In Search of an Ecological Stabilization Plan
We are proposing attempting to eliminate hunger, disease, predation, and parasitism for all animals with a nervous system, including vast numbers too small to see. Clearly unless we use birth control for all of these species, they are all going to have a population explosion, starve, and then crash, some much faster than others. Thus all our ecologies are now stabilized entirely by the provision of birth control, administered by our ASIs, whose decisions are guided by our ethical system design. So now the stability of all ecologies depends on getting the ethical system design correct.
About the closest thing to a coherent-sounding political argument one could make (deriving from anything like fairness) for moral weight scaling linearly with average adult weight, or at least resource requirements, would be if we argued that the aim was to apply something closer to equal moral weight per species, i.e. to be fairish on a per-species basis, and then share that moral weight out between members of the species. Earth can support vastly more ants, thus each ant’s share of its species’ moral weight is vastly smaller than each human’s. That isn’t why or even how we implemented this, but it is a vaguely plausible-sounding debating position, among humans.
However, if we actually did that, allocate an allowance of moral weight per species and then share it across all current members of the species, so their individual moral weight varied inversely with population, then we’d get a very odd variant of Utilitarianism. The sum over utilities in Utilitarianism normally encourages maximizing population: all things being equal, a population of ten billion humans on Earth generates twice the utility per year of a population of five billion, so there is a moral imperative to increase population until it reaches levels high enough to impose either x-risk or significant resource constraints, ones sufficient to significantly decrease happiness by enough to make total happiness peak and start to decline even though the population is still rising. (This issue with conventional Utilitarian ethical systems is widely known among philosophers of ethics as “The Repugnant Conclusion”.) So normal Utilitarianism leads to maximizing population as close as possible to carrying capacity, at levels that are significantly resource-constrained. However, under this unusual variant, human populations of a hundred people or ten billion produce the same utility, since the hundred each get one hundredth of Homo sapiens’ moral weight allocation whereas the ten billion each get a ten billionth. So now there is no direct moral preference for any specific population level, except of course that reducing the population level will increase available resources per individual, presumably allowing us to make them individually marginally happier, so until diminishing returns on this saturate and flatten out completely, there is instead a moral imperative to decrease population. Even below that level, we don’t care either way about reducing the population, until it gets so low that x-risks start to cut in. So you instead end up with very low populations. In anything resembling a real ecology, increasing the resources available per individual of a species would tend to increase reproductive success, causing the population of the species to go back up, producing a stabilizing effect. But in an ecology controlled entirely by birth control, population dynamics are decoupled from resources and become a policy decision, only guided by our utility function. [Note that I haven’t even started to think about how to handle speciation or species extinction under this framework.]
So to get a stable population at a reasonable level, neither maximized to the point of resource constraints nor minimized except for x-risk, we likely need the moral weight per individual to be neither constant independent of the population P, nor 1/P as implied by equal shares of a constant weight per species, but rather something in between, such as P^(−k) for some small fraction k, or 1+k/P for some large number k, carefully chosen in light of the happiness vs available resources per individual curve for the specific species so as to stabilize the optimal species population at some desired fraction of the maximum carrying capacity of the ecology that the species lives in. Again, we’re blatantly adding more fudge factors to our ethical system to make the optimum be the result we want. If we don’t get this calculation, and/or the calculation of the amount of resources a species needs, quite correct, then a species will be disadvantaged in the utility calculation, and our ASIs may make a rational decision to have fewer of them. If so, their individual moral weight will increase, as P^(−k) or 1+k/P, improving their moral-weight-to-resource-cost ratio and making them a more attractive option compared to other species, thus the situation for allocating resources between species will be self-stabilizing, at the cost of some changes to the population levels we planned when species resource budgets were originally assigned.
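Here is a toy illustration of how the exponent k moves the utility-maximizing population between those two extremes. The saturating happiness curve and the resource pool size are invented placeholders; only the weighting scheme itself, per-individual weight P^(−k), comes from the discussion above:

```python
import numpy as np

# Toy model: total species utility ~ P * P**(-k) * happiness(resources per capita).
# The log-shaped happiness curve and the resource pool are illustrative assumptions.
RESOURCES = 1e4

def happiness(r_per_capita):
    return np.log1p(r_per_capita)   # diminishing returns in resources per individual

def total_utility(P, k):
    return P * P ** (-k) * happiness(RESOURCES / P)

pops = np.arange(1, 10001)
for k in (0.0, 0.5, 1.0):
    best = pops[int(np.argmax(total_utility(pops.astype(float), k)))]
    print(f"k = {k}: utility-maximizing population ~ {best}")
# k=0 (plain utilitarianism) drives the population to the resource limit; k=1
# (a fixed weight per species, shared out) drives it toward the minimum; an
# intermediate k gives a tunable interior optimum.
```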
While this might be sensible when resources are fixed, obviously in the case of colonization increasing available resource totals, it makes sense to increase moral weight budgets: if we mine asteroids to build a lot of Bishop Rings, or terraform Mars, or terraform and colonize a planet in another solar system, then resources increase, and populations can increase. This is unequivocally a good thing, to be encouraged, so our utility function should treat this as such. So if we were giving per-species utility and then splitting it across individuals, we should increase the budget of all species in proportion to the increased resources, or in the P^(−k) formulation, we should rescale in whatever way is necessary for the stable solution with e.g. twice the resources from two Earths to be twice the population giving twice the utility (for 1+k/P that’s just changing k to 2k).
Birth control on this scale provides more than just mathematical conundrums and the obvious logistical issues. Many species are what biologists call “r-strategy” species, who have large numbers of offspring (for species other than mammals or birds, often extremely large numbers), where in their current ecosystems the vast majority of these die before reaching adulthood. However, as soon as the eggs begin developing a functional nervous system, they are arguably sentient (and even before that, they are of course a potential sentient-to-be). So we either get a new version of the abortion debate, where for practicality we have to not grant moral weight to members of these species until some later point in development, so that we can cull them before that (say by means of egg-eating predators, or more bots), or else we need to achieve this entirely by birth control. Doing that presents a further problem. Consider a species such as salmon where a breeding pair lays many thousands of eggs. In the absence of hunger, predation, parasites, and disease, those will almost all survive to adulthood. The entire spawning run population of a particular river in a particular year is fewer than that, so we can only allow one pair to reproduce, and by the next generation, the entire population will all be siblings, so there will be massive inbreeding causing genetic disease from deleterious recessives, and loss of genetic diversity. So we need to find a form of birth control that is not all-or-nothing, where either a parent is fertile or not, but one that instead reduces the number of viable fertilized eggs laid per spawning to just a hair over two, to ensure stable population replacement without increasing inbreeding. This sounds even harder than just providing birth control for every animal on the planet big enough to see: now in many cases it also needs to be very carefully designed birth control.
How About Genetic Engineering?
We have a great many extremely difficult problems to solve here: feeding predators vegetarian food, arranging that they don’t successfully hunt even though they’re well fed, complex birth control, the list goes on and on. So far I’ve been talking as if these problems all need to be solved with huge numbers of tiny bots (preferably very economical and resource-efficient ones, so as not to use up a lot of resources that could be supporting sentient creatures). Could we instead solve at least some of our problems using genetic engineering?
As I briefly discussed in Part 1 (and as I’ll explore rather more in Part 6), minimizing x-risk is very important. That clearly includes doing so for the non-human portions of the biosphere, most especially if we’re granting them moral weight. So if some sort of disaster were to occur that caused a civilizational collapse (for years, decades, centuries, or millennia) it would be extremely unfortunate if our genetically modified animals died out in the meantime. Having to revert to a state of nature, red in tooth and claw, would be a tragedy, but obviously less so than going extinct. So that would suggest that any genetic engineering we do needs to be quite limited, so as not to significantly reduce animals’ evolutionary fitness in a natural ecosystem. Such as by messing with a predator’s hunting instincts, or any animal’s reproduction.
Possibly we could do something where there was some drug in the textured vegetable protein that we were feeding the predators, for example, that triggered a genetically engineered pathway that suppressed their hunting reflexes, but where after a few days of starvation, this wore off, the extra pathway deactivated, and normal hunting instincts returned (for species that also need hunting practice to hone their skills, we would also need to ensure they had had it, without any animals being harmed). So a genetic engineering approach is not unusable, but the significant effects need to be temporary, able to wear off well before the animal starves. Then we need to deal with logistical or medical screw-ups where an individual didn’t get fed for a few days for some reason and goes on a killing spree as a result. So I think we’re still going to need a lot of tiny nanotech peacekeeper bots, even if only as a backup.
Of course, if it were technologically feasible to recreate extinct species once civilization got rebuilt (which is presumably even more challenging to do for live-bearing species than for ones with small eggs and no parental care), or if it were feasible to deep-freeze and then revive animals in some form of deep storage that would survive for centuries or millennia (perhaps out in space somewhere very cold), then this might be a less restrictive constraint.
Could We do Half-Measures?
Is there a way to make all this any easier? Absolutely, but you have to throw in some sort of unjustified arbitrary hack just to get the results we want, with no plausible-sounding rationalization for it. On the other hand, we are already looking at a heap of many millions of those, so what’s one more if it simplifies the ethical system overall?
Suppose we pick an objective threshold, in synapse count, say: maybe around 10^9 synapses. If you don’t have at least that many synapses as an average adult member of your species, you get zero moral weight: things can eat or parasitize you, nature can remain red in tooth and claw for you — we don’t care, because you’re too dumb, sorry. In that case, your only moral value is indirect, as food for, or part of an ecosystem that supports, or otherwise being of value to, things above this moral weight cut line. Ants are on their own, and are just anteater chow and garbage disposal services, unless someone can persuade us that the entire nest is a collective intelligence smart enough to be equivalent to at least 10 times as many synapses as any individual member (which doesn’t sound entirely implausible). Above that level, you suddenly have moral weight (linearly proportionate to your species’ average body mass or resource needs, of course), or perhaps this phases in logarithmically over some range of synapse count, or just keeps scaling logarithmically with synapse count up to or even past O(10^14). Every mammal and bird would easily make that cut line, so would most reptiles and many fish (mostly larger ones), but pretty much no insects (except possibly bees, especially if you grant them some degree of hive-mind bonus). Then most of the vast numbers of tiny carnivores don’t need individual guard bots to stop them eating anything they want, since all their usual prey has no moral weight, and in the ocean we mostly only need to worry about stopping any large fish eating medium-sized fish. Which still sounds really hard, but is at least orders of magnitude easier than doing the same for all the copepods. Most of the animals that make this cut are ones that most current humans would feel at least a little bad about if they had accidentally killed one, or someone had intentionally killed one for no good reason, and most of the things that few people (other than Jains) currently worry about stepping on don’t make the cut, so accidentally stepping on or driving over one doesn’t need to be made into a crime. Bugsplatter becomes no longer vehicular mayhem (perhaps unless bees are involved), but roadkill still is.
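As a sketch of how mechanical this rule would be (only the human and ant synapse counts and the 10^9 cut line are from the text; the other counts are my rough order-of-magnitude guesses, included purely for illustration):

```python
# Threshold rule: moral weight only above a synapse-count cut line.
# Human (~1e14) and ant (~1e8) counts are from the text; the rest are rough
# order-of-magnitude guesses for illustration only.
THRESHOLD = 1e9

approx_synapses = {
    "human": 1e14,
    "dog": 1e13,        # roughly the figure implied by the 1e13 variant discussed below
    "mouse": 1e11,      # illustrative guess
    "honeybee": 1e9,    # illustrative guess (before any 'hive-mind bonus')
    "ant": 1e8,
    "C. elegans": 1e4,  # illustrative guess (302 neurons)
}

for species, n in approx_synapses.items():
    verdict = "counts" if n >= THRESHOLD else "doesn't count"
    print(f"{species:>10}: {verdict} (~{n:.0e} synapses)")
```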
This makes things easier, by some number of orders of magnitude. You need a less-vast number of tiny bots, and the smallest ones don’t need to be as tiny. Depending on where you put the cut line, you could even make it a lot easier: pick a really convenient a-little-below-human number like 10^13 synapses and you can exclude most mammals other than primates, cetaceans, and a few other large, smart mammals such as elephants. At 10^13 synapses, dogs would be right on the borderline. However, your justification for the ethical system then looks really strained/arbitrary, and owners of pet cats and rats are going to feel their fur-babies are being treated unfairly.
Or, you could pick some biological dividing line on the phylogenetic tree of Life. Vertebrate chauvinism, for example. Or you could do the pescatarian/Catholic thing and exclude fish (which would simplify the oceans massively): rights only for amphibians, reptiles, birds, and mammals — if your ancestors didn’t have a backbone and the gumption to colonize the land, we don’t recognize you as part of our ingroup. Take your pick, but do expect interminable arguments about it.
This is Incredibly Hard: Please Don’t Stake the Human Race On It
What I have been outlining here is incredibly hard. We are constructing an ethical system of vast, intricately gerrymandered complexity, one very carefully tuned with many millions of parameters intended to get all the millions of outcomes we want. This doesn’t look anything like the sorts of moral systems human ethicists have thought about: it mostly looks extremely arbitrary, other than from a “does this get us the outcome we want?” viewpoint. Its moral consequences require us to invent entire new technical fields, like ergonomics for dust mites, for each of tens of millions of species. Even just eliminating disease is going to warp many ecosystems, especially in the sea or the tropics. There are doubtless many more challenges that I haven’t even thought of. Some of them could well be even harder than the ones I have thought of, or even effectively impractical at any specific technological level.
My best guess is that a sufficiently advanced society with access to superintelligences, nanotechnology including cheap and very energy-efficient nanotech bots, and very high levels of skill in genetic engineering might be able to pull this off. It’s a vast amount of work, many orders of magnitude harder than building a utopia just for humans, but it’s not actually clearly physically impossible. However, it does look complex enough that no society, not even one with superintelligences, is going to get this right the first time — unless they can do long-term modeling of an entire planet at an astonishing level of detail, down to individual organisms too small to see. Using something like a computer the size of a moon or a planet, presumably. So no, I’m not actually claiming that this clearly could never be done, but it is incredibly hard, and even for a society that could do this, it will take some time to get all the kinks out. So you had better have solved corrigibility.
However, it’s not within the capabilities of any society short of that kind of technological level. So, if we are encoding the terminal goal for a first-ever superintelligence, and if that terminal goal is not very corrigible, but we are nevertheless dumb enough to put this requirement in, then we are taking a massive and extremely stupid extinction risk with the future of the human race. Please do not do this. This is not something we can expect a first superintelligence to be able to do, it will screw it up, and if it drives us extinct while doing so, whether that’s in favor of ants or dust mites, that won’t be anything approaching death with dignity, that’s just really dumb. The only way one might be able to reduce that risk is to design into our AI system some form of fallback where, when things start to go bad, we stop granting sentient animals moral weight, and we revert to only humans (or only evolved sapients) having moral weight while we do disaster control.
So, if we’re in a position to get something like CEV, then sure, mention that something along these lines might be a generous thing to do at some point, in the future, if and when it’s ever practical, at least for the really smart animals to start off with. However, this is a maybe-nice-to-have-eventually: the human race’s continued existence has to come first.
The Status Quo: Loaned Conditional Moral Weight
Almost all humans find animal suffering distasteful and unpleasant (the rare exceptions, psychopaths/sociopaths and some sadists, also don’t dislike other humans’ suffering, so are arguably “exceptions that prove the rule”). Given that the human species is evolved for a hunter-gatherer niche, it is surprisingly psychologically difficult for us to kill an animal comparable to our natural prey species with a hand weapon (though one can of course become acclimatized to this). This is particularly so for animals that are large, fluffy, act intelligently, or are cute (i.e. that have heads and eyes large enough to trigger our parental instincts).
This effect is of course weaker below some size of animal (as long as our parental instincts aren’t engaged), for less mammalian animals, the further away and less visible and apparent the animal’s suffering is, and to some extent also the less it seems like our fault, or that of any human. Fish are psychologically easier to kill, insects easier still, animals too small to see are trivial. Nevertheless, if we are, as I have been advocating during this sequence on Ethics, attempting to construct an ethical system for a society that tries to ensure low prevalence of things that offend the instinctive ethical and aesthetic sensibilities that natural selection has seen fit to endow Homo sapiens with, and high prevalence of things that those approve of (like happy kids, kittens, gardens, water-slides, that sort of thing), then reducing the amount of animal cruelty is going to be a goal. Especially so for cruelty to kittens, or other fluffy, playful, cute mammals that trigger our parental instincts.
The net result of this is a voluntary loan of moral weight, from humans, to animals. Much more so to some animals than others, and in ways that also depend on conditional things like visibility, circumstances, responsibilities, and practicalities. The results of this can seem very unfair and arbitrary, if you think about it from the point of view of the animal’s welfare: why is bullfighting or cockfighting in public a prosecutable crime in most countries, but slaughtering steers or chickens in an abattoir is routine almost everywhere (apart from cows in most states in India)? The answer is because it’s not actually about the welfare of the cow or the chicken, it’s about the welfare and conscience of the humans (and, for cows in India, about human religious symbolism). We are omnivores, and our farm animals’ loaned moral weight goes away as they enter the killing area of the abattoir, so long as no significant unnecessary cruelty is involved.
The extent of this is quite dependent on the society. Many societies still agricultural enough that most people still slaughter their own livestock find activities like bullfighting and cockfighting less objectionable. Things like nature documentaries make what is happening to animals on the other side of the world more visible (especially for more photogenic species), and environmental activism has introduced a norm that an animal species may have some moral weight as a separate entity, to be shared between its individuals, increasing their moral weight if the species gets close to extinction. Sufficiently good and nutritious textured vegetable protein would of course make it feasible for more countries to adopt an India-like attitude to more farm animals (likely greatly reducing their numbers, except for honey-bees, milk-cows and egg-laying chickens).
Almost all human cultures throughout history have used ethical systems that give some animals (most commonly mammals and birds) some moral weight in some circumstances. However, generally these circumstances did not include a wild prey animal being hunted and eaten by a wild predator, especially if no human happened to be around at the time. Given the way humans feel about animal suffering, we should and will continue to do this sort of thing in future moral systems that we devise for future societies. We might even decide to do so more, or attempt to do so in what feel to us like more principled ways. So I am definitely not suggesting that animals, sentient or indeed otherwise, or even plants or fungi, should never have any moral weight. I am simply suggesting that this must continue to be loaned, what lawyers call “pro tanto”: to a limited and context-appropriate extent, because and in situations where humans care about them, rather than being granted to them outright in any amount on grounds of sentience (for any sensible definition of sentience) — at least until that incredibly hard goal actually becomes technologically practicable.
[1] That is to say, before summing across them, each ant’s and each human’s individual utility functions are linearly rescaled so that the distance between their zero (i.e. “I’d rather be dead than permanently below this”) utility level and maximum utility levels is the same. Or possibly so that the standard deviation of their utility under a normal range of circumstances is the same. This scaling is intended to avoid us having any ‘Utility Monsters’. As we shall see, it fails dramatically, making ants into utility monsters, which is why utility monsters are instead usually defined in terms of utility per unit of resources.
5. Moral Value for Sentient Animals? Alas, Not Yet
Part 5 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.
TL;DR In Parts 1 through 3 I discussed principles for ethical system design, and the consequences for AIs and uploads, and in Part 4, I discussed a principled way for us to grant moral value/weight to a larger set than just biological humans: all evolved sapient beings (other than ones of types we cannot form a cooperative alliance with). The history of liberal thought so far has been a progressive expansion of the set of beings accorded moral value (starting from just the set of privileged male landowners having their own military forces). So, how about animals? Could we expand our moral set to all evolved sentient beings, now that we have textured vegetable protein? I explore some of the many consequences if we tried this, and show that it seems to be incredibly hard to construct and implement such a moral system that doesn’t lead to human extinction, mass extinctions of animal species, ecological collapses, or else to very ludicrous outcomes. Getting anything close to a good outcome clearly requires at least astonishing levels of complex layers of carefully-tuned fudge factors baked into your ethical system, and also extremely advanced technology. A superintelligence with access to highly advanced nanotechnology and genetic engineering might be able to construct and even implement such a system, but short of that technological level, it’s sadly impractical. So I regretfully fall back to the long-standing solution of donating second-hand moral worth from humans to animals, especially large, photogenic, cute, fluffy animals (or at least ones visible with the naked eye) because humans care about their well-being.
[About a decade ago, I spent a good fraction of a year trying to construct an ethical system along these lines, before coming to the sad conclusion that it was basically impossible. I skipped over explaining this when writing Part 4, assuming that the fact this approach is unworkable was obvious, or at least uninteresting. A recent conversation has made it clear to me that this is not obvious, and furthermore that not understanding this is both an x-risk, and also common among current academic moral philosophers — thus I am adding this post. Consider it a write-up of a negative result in ethical-system design.
This post follows on logically from Part 4. so is numbered Part 5, but it was written after Part 6 (which was originally numbered Part 5 before I inserted this post into the sequence).]
Sentient Rights?
‘sentient’: able to perceive or feel things — Oxford Languages
The word ‘sentient’ is rather a slippery one. Beyond being “able to perceive or feel things”, the frequently-mentioned specific of being “able to feel pain or distress” also seems rather relevant, especially in a moral setting. Humans, mammals, and birds are all clearly sentient under this definition, and also in the the common usage of the word. Few people would try to claim that insects weren’t: bees, ants, even dust mites. How about flatworms? Water fleas? C. elegans, the tiny nematode with a carefully-mapped nervous system of exactly 302 neurons (some of which seem to be ‘pain neurons’)? How about single-celled amoebae, or bacteria — they have at least some senses, amoebae are even predators? Plants react to stimuli too…
Obviously if we’re going to use this as part of the definition of an ethical system that we’re designing, we’re going to need to pick a clear definition. For now, let’s try make this as easy as we can on ourselves and pick a logical and fairly restrictive definition: to be ‘sentient’ for our purposes, an organism needs to a) be a multicellular animal (a metazoan), with b) an identifiable nervous system containing multiple neurons, and c) use this nervous system in a manner that at least suggests that it has senses and acts on these in ways evolved to help ensure its survival or genetic fitness (as one would expect from evolutionary theory). So it needs to be a biological organism capable of agentic behavior that is implemented via a neural net. This includes almost every multicellular animal [apart from (possibly) placozoa and porifera (sponges)]. This is just a current working definition: if we want to adjust it later to be more permissive or restrictive, or to make things easier or harder on ourselves, that’s an option.
As I discussed in Part 4, for any sapient species, we have a strong motive to either grant them roughly equal moral worth, or else, if that is for some reason not feasible, drive them extinct in a war of annihilation — since if we do anything in between the two, sooner or later they are going to object, using weapons of mass destruction. This very might-makes-right (and indeed rather uncouth) argument doesn’t apply to non-sapient sentient animals (that have not been uplifted to sapience): they’re never going to have nukes, or know how to target them. So granting moral worth to sentient animals, or not, is a choice that we are free to make, depending on what leads to a future society that we humans would more like our descendants to live in.
As I covered in Part 1, an ethical system needs to be designed in the context of a society. Cats, dogs and other “fur-babies” that live in our homes as pets arguably form an adopted part of our society. Domestic food animals in factory farms possibly less so, though they certainly still contribute to its well-being. Multicellular organisms such as nematodes too small to see without a hand lens living in the mud at the bottom of the ocean, definitely not so much. What happens if our society generously does the extrapolated-liberal thing here, and grants them all moral worth anyway?
The Tyranny of the Tiny
Consider ants. It’s estimated that there are around 20 quadrillion ants on Earth. If we grant each of them equal moral worth to a human,[1] as feels like the fair thing to do, then in the optimizing process their combined utility outweighs the utility of the entire human population by a factor of a few million. Now our own species’ collective well-being is just a rounding error in the ethical decisions of the Artificial Superintelligence (ASI) that we very generously designed and built for the ants. Thus the importance of providing food, healthcare services, retirement care, birth control, legal representation, transportation, local news, entertainment etc. etc. for the human population is a rounding error on doing all of these same things for each one of 20 quadrillion ants! Clearly the ASIs are going to need a lot of tiny nanotech bots to provide food, healthcare services, retirement care, birth control, legal representation, transportation, local news, entertainment etc. etc. just for ants, let alone for all the other sentient animal species, many of them tiny and numerous. We’re basically attempting to construct a planetary-scale zoo, one with utopian human-level care for each and every sentient inhabitant, down to all the ones you need a magnifying glass to see (indeed, most especially them, since they massively outnumber the ones you can see). You should expect to go to jail for antslaughter if you accidentally step on or inhale one, let alone intentionally murdering a gnat or a bedbug. We won’t be driving or flying vehicles any more: bugsplatter is morally equivalent to driving into a crowd.
It gets worse. Even in a so-called “post-scarcity” civilization with access to ASI, some resource are limited: carbon, oxygen, hydrogen, nitrogen, phosphorus, sunlight, square meters of land area, for example. Even if we expand out into the solar system or further, those limits become larger, but there’s still a limit. Basic resource limitations on matter, energy, and space still apply. Very clearly, the resources sufficient to support one human being comfortably could comfortably support many millions of ants: their individual nutritional and other resource requirements are roughly seven orders of magnitude smaller than ours. Humans are simply enormous resource hogs, compared to ants. So, in the longer term, the only Utilitarian-ethically acceptable solution is to (humanely) reduce the human population to zero, to free up resources to support tens of quadrillions more ants (or, if we want to also ban species extinction, keep a minimal breeding population of humans: with artificial insemination of DNA synthesized from genetic data, inbreeding could be avoided, so a planetary breeding population of O(100) should be quite safe from random fluctuations). Except, of course, that by the standards of sentient organisms, ants are still quite large, so the ASIs should get rid of them as well, in favor of an even larger number of even tinier sentient organisms. [This ethical system design problem has been called “Pascal’s Bugging”, and there is a certain analogy.]
Thus, any approach that satisfies the human instinct for fairness by giving equal moral worth to all sentient organisms will inevitably lead to human extinction (other than perhaps a breeding population of O(100) survivors, if you ban extinction). I’m sorry, but I’m afraid that it is a simple, easily mathematically-deducible fact that PETA’s ethical platform is an x-risk. More concerningly, I am told that this ethical position is also the broad consensus among current academic moral philosophers, so if we build ASI and ask them to “just go do Coherent Extrapolated Volition”, they will definitely find a lot of this in the recent Ethical Philosophy literature to extrapolate from.
Let’s Try Fudge Factors
OK, that failed spectacularly. So, maybe I don’t actually care as much about an individual ant as I do a human. They are quite small, after all, and not individually very smart. We’re talking about rights for organisms with neural nets here. Could we favor smartness in some principled way? We don’t really understand neuroscience quite well enough yet to be certain about this, but suppose that we knew that synapse count was roughly comparable (up to some roughly constant factor) to the parameter count of an artificial neural net (so ignoring a few possibly-minor effects like neurotransmitter diffusion flows, synapse sparsity, neuronal type diversity, information processing in dendrites, and so forth). Neural net scaling laws are pretty universal, and basically say that the impressiveness of neural-net’s behavior is proportional to the logarithm of the parameter count: each time you double the parameter count, you see a similar-sized increase in capabilities. Humans have O(1014) synapses, whereas ants have only O(108) synapses (roughly comparable to a T5-small model, ignoring questions such as sparsity). So if we use a moral weighting system proportional to the logarithm of synapse count, each human would get ~14 units-of-moral-worth and each ant only ~8, and they would still outvote us by more than a factor of a million. (Changing the base of the logarithm doesn’t help, it just rescales the denomination of a single unit-of-moral-worth.) So that didn’t help much.
In fact, not only are logarithms useless, but anything non-linear doesn’t work. The only way to avoid small animals outvoting large ones in the division of resources, or vice versa, is to scale individual moral value according to something that scales linearly with a creature’s resource requirements, such as average adult weight, or calorific intake averaged across the creature’s lifespan, or something along those lines. Yes, you’re quite correct, this is a large and blatant fudge factor arranged simply to get the outcome we want, with no other logically supportable moral justification from any kind of human moral iniuition — and we’ll be seeing a lot more of those before we’re done. Even that is probably not good enough to avoid perverse incentives under ASI optimization — in practice we would need to estimate all the resource consumption rate that a member of this species needs: each chemical element, energy, space, more abstract resources like peace or quiet, and so forth, price each by how scarce they are, and total up a per-individual budget based on a whole basket of resources tailored to the needs of an average a member of this species across their lifetime. Repeat this for tens of millions of species, many of them not yet discovered. What we need for stability is that each species has the same utility per value unit of their basket of consumed resources. That’s the only solution that avoids the utility maximization process having a preference about trading off the population of one species against another to increase total utility achievable with the available resources. Note that if the available basket of resources changes for some reason, we either hav to redo this, or accept that now the utility maximization process will want to start altering the species mix,
Votes For Man-Eating Tigers — and Worse
Eliding a vast number of details, let’s assume we have found some way to scale the moral weight given to each member of a species that scales roughly with the physical weight/resource consumption needs of members of that species, so as not to have our ASIs overly favor smaller or larger animals. Thus our civilization is expending of the order of one ten-millionth of the resources on the well-being of an individual ant as on any individual animal around the size of a human. To put that in perspective, suppose standards of living had gone up in this more advanced civilization, so that the average annul expenditure on the well-being of an individual human was O($1,000,000), then we’d also be spending O(10 cents) per year on each ant (for a total of O($8 quadrillion) a year on humans and O($2 quadrillion) a year on ants, assuming planetary populations of each are at about current levels). So we’re probably not providing ants with individualized entertainment channels, and, unless we have a way to very efficiently produce an awful lot of tiny nanotech nursebots, probably only with basic healthcare, more like public health for ant nests, plus perhaps short-range mass transportation: some sort of tiny conveyor belts, perhaps.
Now let’s consider predators and prey. Generally predators are no more than a few times larger or smaller than their prey, so they’re going to have roughly equivalent moral weight. Any form of utility function for a prey species is obviously going to object very strongly to them being eaten. So either predators need to go extinct (possibly apart from a minimal breeding population of O(100) individuals), or else we need to find some way to feed them textured vegetable protein, vat-grown meat, the recently-deceased remains of elderly members of their prey species, or some other non-sentient foodstuff, carefully tailored and balanced for their carnivorous nutritional needs, and appetizing enough to them that they’ll actually eat it. (For many carnivorous snakes, this requires the food to be wriggling — obviously without it having a nervous system.) Note that this is not what we do with pet cats, or carnivores in zoos, all of whom are normally fed real animal protein from real dead animals killed for this purpose. (Or for snakes, real live animals, since they refuse to eat dead ones.)
Then, after we’ve fed the predators, we still need to figure out how to stop many of them from killing members of prey species anyway, even though they’re not hungry. Some carnivores, such as lions, only hunt when hungry (for good and sufficient evolutionary reasons); others, like housecats, will hunt even when well-fed (likewise, for good evolutionary reasons). Many simple-minded carnivores tend to attack anything prey-like that they can, unless they are currently stuffed too full to eat. We’re going to need a lot more small peacekeeper bots, following most of the carnivores around making sure they can’t actually catch anything — and when I say ‘carnivore’, don’t think ‘lion’, think beetle larvae a quarter inch long, or ~1mm copepods or water fleas.
Next, consider species of internal parasites and their natural hosts. Internal parasites are of course generally smaller than their hosts, so will be somewhat outvoted by them. But an internal parasite isn’t going to be able to live inside textured vegetable protein, gradually eating its way through it. To keep such a parasite alive, we’re basically going to need to vat-grow living tissue from its natural host, genetically modified to not have any functional neurons, inside some sort of heart-lung-tissue-culture machine, for the parasite to live inside. Even by zoo standards, that’s more like a cross between a critical intensive care ward and cloning transplant organs. Zoos don’t generally intentionally keep parasites: it’s too hard/cruel.
OK, so if that’s too much effort, maybe we just let all the parasitic species go extinct? Most of them have life-cycles that are truly disgusting. I’ve never heard anyone breathe a word of protest about the fact that Jimmy Carter (a man widely suspected of being too kindly and honest to be an effective US President) has been working diligently on this and has, near the end of his life, almost succeeded in driving extinct the sentient species Dracunculus medinensis: that’s the guinea worm, a 2½-foot-long parasitic nematode that burrows through human flesh, injuring and disabling its human host for many months, before inflicting agonizing pain to drive them to soak their limb in the river so that the worm can release its young into the local water supply (where they go on to do equally nasty things to water fleas, before returning to humans). Though they don’t afflict humans, some of the things that certain parasitic wasp larvae do to their hosts make even the guinea worm look considerate; I’m certain their hosts won’t miss them. While we’re at it, let’s get rid of the deadliest animal in the world, Aedes aegypti, the yellow fever mosquito, or at least all the diseases it carries.
However, whatever we decide to do about parasites, just letting them go extinct isn’t a viable option for predators. Not only would we no longer have cool-looking predators to film documentaries about, they also have strong, complex effects on the ecosystems they’re part of. When humans eliminated wolves from most of Europe and North America, their prey species, such as deer, exploded in population and had to be kept down by hunting (not that we objected to the work, as venison is tasty). However, the presence of wolves changes deer’s behavior and grazing patterns in ways that human hunters don’t, especially for plants near water sources and the deer’s likelihood of eating the bark off tree saplings. Many plant species, and the insects and other animals dependent on them, went extinct, or almost so, forest distributions changed, and the entire ecology of the food webs across much of two continents went out of whack. When wolves were finally reintroduced in many still-wildish places, insect and plant species that hadn’t been seen in a century or more reappeared from obscurity (and doubtless many other, now extinct, ones didn’t). This is bad enough on land, among warm-blooded animals, where each level in the food web is an order of magnitude smaller in total biomass than the one below. But for cold-blooded animals in the sea, where that ratio is less than two, it’s far more dramatic. Almost all sentient species big enough to see in the ocean are carnivores that feed on smaller carnivores, that feed on even smaller carnivores, down to carnivores almost too small to see that live off herbivorous zooplankton, which live on phytoplankton or predatory or partially-predatory single-celled species. So in aquatic ecosystems, “how about we just drop the carnivores?” leads to an ecosystem where almost everything left is too small to see.
In Search of an Ecological Stabilization Plan
We are proposing attempting to eliminate hunger, disease, predation, and parasitism for all animals with a nervous system, including vast numbers too small to see. Clearly, unless we use birth control for all of these species, they are all going to have a population explosion, starve, and then crash, some much faster than others. Thus all our ecologies are now stabilized entirely by the provision of birth control, administered by our ASIs, whose decisions are guided by our ethical system design. So now the stability of all ecologies depends on getting the ethical system design correct.
About the closest thing to a coherent-sounding political argument one could make (deriving from anything like fairness) for moral weight scaling linearly with average adult weight, or at least resource requirements, would be if we argued that the aim was to apply something closer to equal moral weight per species, i.e. to be fairish on a per-species basis, and then share that moral weight out between members of the species. Earth can support vastly more ants, thus each ant’s share of its species’ moral weight is vastly smaller than each human’s. That isn’t why or even how we implemented this, but it is a vaguely plausible-sounding debating position, among humans.
However, if we actually did that (allocate an allowance of moral weight per species and then share it across all current members of the species, so their individual moral weight varied inversely with population), then we’d get a very odd variant of Utilitarianism. The sum over utilities in Utilitarianism normally encourages maximizing population: all things being equal, a population of ten billion humans on Earth generates twice the utility per year of a population of five billion, so there is a moral imperative to increase population until it reaches levels high enough to impose either x-risk or resource constraints severe enough to decrease individual happiness by enough that total happiness peaks and starts to decline even though the population is still rising. (This issue with conventional Utilitarian ethical systems is widely known among philosophers of ethics as “The Repugnant Conclusion”.) So normal Utilitarianism leads to maximizing population as close as possible to carrying capacity, at levels that are significantly resource-constrained. However, under this unusual variant, human populations of a hundred people or ten billion produce the same utility, since the hundred each get one hundredth of Homo sapiens’ moral weight allocation whereas the ten billion each get a ten-billionth. So now there is no direct moral preference for any specific population level, except of course that reducing the population will increase the available resources per individual, presumably allowing us to make each individual marginally happier; so until diminishing returns on this saturate and flatten out completely, there is instead a moral imperative to decrease population. Even below that level, we don’t care either way about reducing the population further, until it gets so low that x-risks start to cut in. So you instead end up with very low populations. In anything resembling a real ecology, increasing the resources available per individual of a species would tend to increase reproductive success, causing the population of the species to go back up, producing a stabilizing effect. But in an ecology controlled entirely by birth control, population dynamics are decoupled from resources and become a policy decision, guided only by our utility function. [Note that I haven’t even started to think about how to handle speciation or species extinction under this framework.]
So to get a stable population at a reasonable level, neither maximized to the point of resource constraints nor minimized down to the x-risk floor, we likely need the moral weight per individual to be neither constant independent of the population P, nor 1/P as implied by equal shares of a constant weight per species, but rather something in between, such as P^(-k) for some small fraction k, or 1 + k/P for some large number k, carefully chosen in light of the happiness-versus-available-resources-per-individual curve for the specific species, so as to stabilize the optimal species population at some desired fraction of the maximum carrying capacity of the ecology that the species lives in. Again, we’re blatantly adding more fudge factors to our ethical system to make the optimum be the result we want. If we don’t get this calculation, and/or the calculation of the amount of resources a species needs, quite correct, then a species will be disadvantaged in the utility calculation, and our ASIs may make a rational decision to have fewer of them. If so, their individual moral weight will increase, as P^(-k) or 1 + k/P, improving their moral-weight-to-resource-cost ratio and making them a more attractive option compared to other species; thus the situation for allocating resources between species will be self-stabilizing, at the cost of some changes to our planned population levels from when species resource budgets were originally assigned.
While this might be sensible when resources are fixed, obviously in the case of colonization increasing the total available resources, it makes sense to increase moral weight budgets: if we mine asteroids to build a lot of Bishop Rings, or terraform Mars, or terraform and colonize a planet in another solar system, then resources increase, and populations can increase. This is unequivocally a good thing, to be encouraged, so our utility function should treat it as such. So if we were granting per-species utility and then splitting it across individuals, we should increase the budget of all species in proportion to the increased resources; or, in the P^(-k) formulation, we should rescale in whatever way is necessary for the stable solution with, e.g., twice the resources from two Earths to be twice the population giving twice the utility (for 1 + k/P that’s just changing k to 2k).
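Here is a toy comparison of the three weightings discussed above: constant weight, equal-shares-per-species (1/P), and the intermediate P^(-k). The diminishing-returns happiness curve and the fixed resource pool are assumptions chosen only to show the shapes: the constant scheme keeps rewarding higher population, the 1/P scheme rewards shrinking it, and the intermediate scheme has an interior optimum whose location the exponent k can be tuned to place:

```python
import math

R = 1000.0   # total resources available to the species (arbitrary units)
k = 0.3      # small exponent for the intermediate P^(-k) scheme

def happiness(resources_per_individual: float) -> float:
    """Toy diminishing-returns happiness curve; purely illustrative."""
    return math.log1p(resources_per_individual)

def total_utility(P: float, scheme: str) -> float:
    u = happiness(R / P)
    weight = {"constant": 1.0, "per_species": 1.0 / P, "intermediate": P ** -k}[scheme]
    return P * weight * u

for P in (10, 100, 1_000, 10_000, 100_000):
    print(P, {s: round(total_utility(P, s), 2)
              for s in ("constant", "per_species", "intermediate")})
```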
Birth control on this scale provides more than just mathematical conundrums and the obvious logistical issues. Many species are what biologists call “r-strategy” species, which have large numbers of offspring (for species other than mammals or birds, often extremely large numbers), where in their current ecosystems the vast majority of these die before reaching adulthood. However, as soon as the eggs begin developing a functional nervous system, they are arguably sentient (and even before that, they are of course a potential sentient-to-be). So we either get a new version of the abortion debate, where for practicality we have to not grant moral weight to members of these species until some later point in development, so that we can cull them before that (say by means of egg-eating predators, or more bots), or else we need to achieve this entirely by birth control. Doing that presents a further problem. Consider a species such as salmon, where a breeding pair lays many thousands of eggs. In the absence of hunger, predation, parasites, and disease, those will almost all survive to adulthood. The entire spawning-run population of a particular river in a particular year is fewer than that, so we can only allow one pair to reproduce, and by the next generation the entire population will all be siblings, so there will be massive inbreeding, causing genetic disease from deleterious recessives and loss of genetic diversity. So we need to find a form of birth control that is not all-or-nothing, where either a parent is fertile or not, but one that instead reduces the number of viable fertilized eggs laid per spawning to just a hair over two, to ensure stable population replacement without increasing inbreeding. This sounds even harder than just providing birth control for every animal on the planet big enough to see: now in many cases it also needs to be very carefully designed birth control.
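The “hair over two” figure comes from a simple replacement-rate calculation. Under the assumption of near-perfect egg-to-adult survival once hunger, predation, parasites, and disease are removed, a sketch looks like this (the egg count and survival rates are illustrative round numbers):

```python
natural_eggs_per_pair = 5_000      # rough order of magnitude for a salmon pair
natural_survival = 2 / natural_eggs_per_pair     # ~0.04% egg-to-adult, in the wild

engineered_survival = 0.95         # assumed survival once all threats are removed
required_viable_eggs = 2.0 / engineered_survival # ~2.1 viable eggs per pair
print(f"wild survival rate: {natural_survival:.2%}")
print(f"target fertility:   {required_viable_eggs:.2f} viable eggs per spawning pair")
```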
How About Genetic Engineering?
We have a great many extremely difficult problems to solve here: feeding predators vegetarian food, arranging that they don’t successfully hunt even though they’re well fed, complex birth control, the list goes on and on. So far I’ve been talking as if these problems all need to be solved with huge numbers of tiny bots (preferably very economical and resource-efficient ones, so as not to use up a lot of resources that could be supporting sentient creatures). Could we instead solve at least some of our problems using genetic engineering?
As I briefly discussed in Part 1 (and as I’ll explore rather more in Part 6), minimizing x-risk is very important. That clearly includes doing so for the non-human portions of the biosphere, most especially if we’re granting them moral weight. So if some sort of disaster were to occur that caused a civilizational collapse (for years, decades, centuries, or millennia), it would be extremely unfortunate if our genetically modified animals died out in the meantime. Having to revert to a state of nature, red in tooth and claw, would be a tragedy, but obviously less of one than going extinct. So that would suggest that any genetic engineering we do needs to be quite limited, so as not to significantly reduce animals’ evolutionary fitness in a natural ecosystem, such as by messing with a predator’s hunting instincts, or with any animal’s reproduction.
Possibly we could do something where there was some drug in the textured vegetable protein that we were feeding the predators, for example, that triggered a genetically engineered pathway that suppressed their hunting reflexes, but where, after a few days of starvation, this wore off, the extra pathway deactivated, and normal hunting instincts returned (for species that also need hunting practice to hone their skills, we would also need to ensure they had had it, without any animals being harmed). So a genetic engineering approach is not unusable, but the significant effects need to be temporary, able to wear off well before the animal starves. Then we need to deal with logistical or medical screw-ups where an individual didn’t get fed for a few days for some reason and goes on a killing spree as a result. So I think we’re still going to need a lot of tiny nanotech peacekeeper bots, even if only as a backup.
Of course, if it were technologically feasible to recreate extinct species once civilization got rebuilt (which is presumably even more challenging to do for live-bearing species than for ones with small eggs and no parental care), or if it were feasible to deep-freeze and then revive animals in some form of deep storage that would survive for centuries or millennia (perhaps out in space somewhere very cold), then this might be a less restrictive constraint.
Could We Do Half-Measures?
Is there a way to make all this any easier? Absolutely, but you have to throw in some sort of unjustified arbitrary hack just to get the results we want, with no plausible-sounding rationalization for it. On the other hand, we are already looking at a heap of many millions of those, so what’s one more if it simplifies the ethical system overall?
Suppose we pick an objective threshold, in synapse count, say: maybe around 10^9 synapses. If you don’t have at least that many synapses as an average adult member of your species, you get zero moral weight: things can eat or parasitize you, nature can remain red in tooth and claw for you — we don’t care, because you’re too dumb, sorry. In that case, your only moral value is indirect: as food for, or part of an ecosystem that supports, or otherwise being of value to, things above this moral-weight cut line. Ants are on their own, and are just anteater chow and garbage-disposal services, unless someone can persuade us that the entire nest is a collective intelligence smart enough to be equivalent to at least 10 times as many synapses as any individual member (which doesn’t sound entirely implausible). Above that level, you suddenly have moral weight (linearly proportional to your species’ average body mass or resource needs, of course), or perhaps this phases in logarithmically over some range of synapse count, or just keeps scaling logarithmically with synapse count up to, or even past, O(10^14). Every mammal and bird would easily make that cut line, so would most reptiles and many fish (mostly larger ones), but pretty much no insects (except possibly bees, especially if you grant them some degree of hive-mind bonus). Then most of the vast numbers of tiny carnivores don’t need individual guard bots to stop them eating anything they want, since all their usual prey has no moral weight, and in the ocean we mostly only need to worry about stopping any large fish eating medium-sized fish. Which still sounds really hard, but is at least orders of magnitude easier than doing the same for all the copepods. Most of the animals that make this cut are ones that most current humans would feel at least a little bad about accidentally killing, or about someone intentionally killing for no good reason, and most of the things that few people (other than Jains) currently worry about stepping on don’t make the cut, so accidentally stepping on or driving over one doesn’t need to be made into a crime. Bugsplatter is no longer vehicular mayhem (perhaps unless bees are involved), but roadkill still is.
This makes things easier, by some number of orders of magnitude. You need a less-vast number of tiny bots, and the smallest ones don’t need to be as tiny. Depending on where you put the cut line, you could even make it a lot easier: pick a really convenient a-little-below-human number like 10^13 synapses and you can exclude most mammals other than primates, cetaceans, and a few other large, smart mammals such as elephants. At 10^13 synapses, dogs would be right on the borderline. However, your justification for the ethical system then looks really strained/arbitrary, and owners of pet cats and rats are going to feel their fur-babies are being treated unfairly.
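One possible reading of this cut-line-plus-phase-in scheme, as a sketch. The specific threshold and ceiling values, and the choice to multiply the phase-in factor by average adult body mass, are all assumptions made for illustration:

```python
import math

THRESHOLD = 1e9    # synapse count below which a species gets zero moral weight
CEILING   = 1e14   # roughly human-level synapse count

def species_moral_weight(synapses: float, body_mass_kg: float) -> float:
    """Zero below the cut line; above it, phase in logarithmically with synapse
    count and scale linearly with average adult body mass."""
    if synapses < THRESHOLD:
        return 0.0
    phase_in = math.log10(synapses / THRESHOLD) / math.log10(CEILING / THRESHOLD)
    return body_mass_kg * min(phase_in, 1.0)

print(species_moral_weight(2e8, 4e-6))   # ant: 0.0, on its own
print(species_moral_weight(1e13, 4.0))   # housecat-sized animal: partial weight
print(species_moral_weight(1e14, 62.0))  # human: full phase-in times body mass
```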
Or, you could pick some biological dividing line on the phylogenetic tree of Life. Vertebrate chauvinism, for example. Or you could do the pescatarian/Catholic thing and exclude fish (which would simplify the oceans massively): rights only for amphibians, reptiles, birds, and mammals — if your ancestors didn’t have a backbone and the gumption to colonize the land, we don’t recognize you as part of our ingroup. Take your pick, but do expect interminable arguments about it.
This is Incredibly Hard: Please Don’t Stake the Human Race On It
What I have been outlining here is incredibly hard. We are constructing an ethical system of vast, intricately gerrymandered complexity, one very carefully tuned with many millions of parameters intended to get all the millions of outcomes we want. This doesn’t look anything like the sorts of moral systems human ethicists have thought about: it mostly looks extremely arbitrary, other than from a “does this get us the outcome we want?” viewpoint. Its moral consequences require us to invent entire new technical fields, like ergonomics for dust mites, for each of tens of millions of species. Even just eliminating disease is going to warp many ecosystems, especially in the sea or the tropics. There are doubtless many more challenges that I haven’t even thought of. Some of them could well be even harder than the ones I have thought of, or even effectively impractical at any specific technological level.
My best guess is that a sufficiently advanced society with access to superintelligences, nanotechnology (including cheap and very energy-efficient nanotech bots), and very high levels of skill in genetic engineering might be able to pull this off. It’s a vast amount of work, many orders of magnitude harder than building a utopia just for humans, but it’s not actually clearly physically impossible. However, it does look complex enough that no society, not even one with superintelligences, is going to get this right the first time — unless they can do long-term modeling of an entire planet at an astonishing level of detail, down to individual organisms too small to see. Using something like a computer the size of a moon or a planet, presumably. So no, I’m not actually claiming that this could clearly never be done, but it is incredibly hard, and even for a society that could do this, it will take some time to get all the kinks out. So you had better have solved corrigibility.
However, it’s not within the capabilities of any society short of that kind of technological level. So, if we are encoding the terminal goal for a first-ever superintelligence, and if that terminal goal is not very corrigible, but we are nevertheless dumb enough to put this requirement in, then we are taking a massive and extremely stupid extinction risk with the future of the human race. Please do not do this. This is not something we can expect a first superintelligence to be able to do; it will screw it up, and if it drives us extinct while doing so, whether that’s in favor of ants or dust mites, that won’t be anything approaching death with dignity, that’s just really dumb. The only way one might be able to reduce that risk is to design into our AI system some form of fallback where, when things start to go bad, we stop granting sentient animals moral weight, and we revert to only humans (or only evolved sapients) having moral weight while we do disaster control.
So, if we’re in a position to get something like CEV, then sure, mention that something along these lines might be a generous thing to do at some point, in the future, if and when it’s ever practical, at least for the really smart animals to start off with. However, this is a maybe-nice-to-have-eventually: the human race’s continued existence has to come first.
The Status Quo: Loaned Conditional Moral Weight
Almost all humans find animal suffering distasteful and unpleasant (the rare exceptions, psychopaths/sociopaths and some sadists, also don’t dislike other humans’ suffering, so are arguably “exceptions that prove the rule”). Given that the human species is evolved for a hunter-gatherer niche, it is surprisingly psychologically difficult for us to kill an animal comparable to our natural prey species with a hand weapon (though one can of course become acclimatized to this). This is particularly so for animals that are large, fluffy, act intelligently, or are cute (i.e. that have heads and eyes large enough to trigger our parental instincts).
This effect is of course weaker below some size of animal (as long as our parental instincts aren’t engaged), for less mammalian animals, the further away and less visible and apparent the animal’s suffering is, and to some extent also the less it seems like our fault, or that of any human. Fish are psychologically easier to kill, insects easier still, animals too small to see are trivial. Nevertheless, if we are, as I have been advocating during this sequence on Ethics, attempting to construct an ethical system for a society that tries to ensure low prevalence of things that offend the instinctive ethical and aesthetic sensibilities that natural selection has seen fit to endow Homo sapiens with, and high prevalence of things that those approve of (like happy kids, kittens, gardens, water-slides, that sort of thing), then reducing the amount of animal cruelty is going to be a goal. Especially so for cruelty to kittens, or other fluffy, playful, cute mammals that trigger our parental instincts.
The net result of this is a voluntary loan of moral weight, from humans, to animals. Much more so to some animals than others, and in ways that also depend on conditional things like visibility, circumstances, responsibilities, and practicalities. The results of this can seem very unfair and arbitrary if you think about it from the point of view of the animal’s welfare: why is bullfighting or cockfighting in public a prosecutable crime in most countries, but slaughtering steers or chickens in an abattoir routine almost everywhere (apart from cows in most states in India)? The answer is that it’s not actually about the welfare of the cow or the chicken, it’s about the welfare and conscience of the humans (and, for cows in India, about human religious symbolism). We are omnivores, and our farm animals’ loaned moral weight goes away as they enter the killing area of the abattoir, so long as no significant unnecessary cruelty is involved.
The extent of this is quite dependent on the society. Many societies still agricultural enough that most people still slaughter their own livestock find activities like bullfighting and cockfighting less objectionable. Things like nature documentaries make what is happening to animals on the other side of the world more visible (especially for more photogenic species), and environmental activism has introduced a norm that an animal species may have some moral weight as a separate entity, to be shared between its individuals, increasing their moral weight if the species gets close to extinction. Sufficiently good and nutritious textured vegetable protein would of course make it feasible for more countries to adopt an India-like attitude to more farm animals (likely greatly reducing their numbers, except for honey-bees, milk-cows and egg-laying chickens).
Almost all human cultures throughout history have used ethical systems that give some animals (most commonly mammals and birds) some moral weight in some circumstances. However, generally these circumstances did not include a wild prey animal being hunted and eaten by a wild predator, especially if no human happened to be around at the time. Given the way humans feel about animal suffering, we should and will continue to do this sort of thing in future moral systems that we devise for future societies. We might even decide to do so more, or attempt to do so in what feel to us like more principled ways. So I am definitely not suggesting that animals, sentient or indeed otherwise, or even plants or fungi, should never have any moral weight. I am simply suggesting that this must continue to be loaned, what lawyers call “pro tanto”: to a limited and context-appropriate extent, because and in situations where humans care about them, rather than being granted to them outright in any amount on grounds of sentience (for any sensible definition of sentience) — at least until that incredibly hard goal actually becomes technologically practicable.
That is to say, before summing across them, each ant’s and each human’s individual utility function is linearly rescaled so that the distance between their zero (i.e. “I’d rather be dead than permanently below this”) utility level and their maximum utility level is the same. Or possibly so that the standard deviation of their utility under a normal range of circumstances is the same. This scaling is intended to avoid us having any ‘Utility Monsters’. As we shall see, it fails dramatically, making ants into utility monsters, which is why utility monsters are instead usually defined in terms of utility per unit of resources.
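For concreteness, a sketch of the rescaling this footnote describes, with made-up raw utility ranges:

```python
def rescale(u: float, u_dead: float, u_max: float) -> float:
    """Affinely map an individual's raw utility so that the 'rather be dead'
    level is 0 and the maximum is 1, before summing across individuals."""
    return (u - u_dead) / (u_max - u_dead)

# An ant and a human in middling circumstances both land at 0.5 on the common
# scale, so no single individual is a utility monster...
print(rescale(5.0, 0.0, 10.0), rescale(500.0, 0.0, 1000.0))
# ...but summing ~2e16 ants against ~8e9 humans still lets the ants dominate,
# which is the dramatic failure the main text describes.
```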