5. Moral Value for Sentient Animals? Alas, Not Yet
Part 5 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.
TL;DR In Parts 1 through 3 I discussed principles for ethical system design and the consequences for AIs and uploads, and in Part 4 I discussed a principled way for us to grant moral value/weight to a larger set than just biological humans: all evolved sapient beings (other than ones of types we cannot form a cooperative alliance with). The history of liberal thought so far has been a progressive expansion of the set of beings accorded moral value (starting from just the set of privileged male landowners having their own military forces). So, how about animals? Could we expand our moral set to all evolved sentient beings, now that we have textured vegetable protein? I explore some of the many consequences of trying this, and show that it seems to be incredibly hard to construct and implement such a moral system in a way that doesn’t lead to human extinction, mass extinctions of animal species, ecological collapses, or else to utterly ludicrous outcomes. Getting anything close to a good outcome clearly requires both astonishingly complex layers of carefully-tuned fudge factors baked into your ethical system, and also extremely advanced technology. A superintelligence with access to highly advanced nanotechnology and genetic engineering might be able to construct and even implement such a system, but short of that technological level, it’s sadly impractical. So I regretfully fall back on the long-standing solution of donating second-hand moral worth from humans to animals, especially large, photogenic, cute, fluffy animals (or at least ones visible with the naked eye), because humans care about their well-being.
[About a decade ago, I spent a good fraction of a year trying to construct an ethical system along these lines, before coming to the sad conclusion that it was basically impossible. I skipped over explaining this when writing Part 4, assuming that the fact this approach is unworkable was obvious, or at least uninteresting. A recent conversation has made it clear to me that this is not obvious, and furthermore that not understanding this is both an x-risk, and also common among current academic moral philosophers — thus I am adding this post. Consider it a write-up of a negative result in ethical-system design.
This post follows on logically from Part 4, so is numbered Part 5, but it was written after Part 6 (which was originally numbered Part 5 before I inserted this post into the sequence).]
Sentient Rights?
‘sentient’: able to perceive or feel things — Oxford Languages
The word ‘sentient’ is rather a slippery one. Beyond being “able to perceive or feel things”, the frequently-mentioned specific of being “able to feel pain or distress” also seems rather relevant, especially in a moral setting. Humans, mammals, and birds are all clearly sentient under this definition, and also in the common usage of the word. Few people would try to claim that insects weren’t: bees, ants, even dust mites. How about flatworms? Water fleas? C. elegans, the tiny nematode with a carefully-mapped nervous system of exactly 302 neurons (some of which seem to be ‘pain neurons’)? How about single-celled amoebae, or bacteria — they have at least some senses, and amoebae are even predators? Plants react to stimuli too…
Obviously if we’re going to use this as part of the definition of an ethical system that we’re designing, we’re going to need to pick a clear definition. For now, let’s try to make this as easy as we can on ourselves and pick a logical and fairly restrictive definition: to be ‘sentient’ for our purposes, an organism needs to a) be a multicellular animal (a metazoan), with b) an identifiable nervous system containing multiple neurons, and c) use this nervous system in a manner that at least suggests that it has senses and acts on these in ways evolved to help ensure its survival or genetic fitness (as one would expect from evolutionary theory). So it needs to be a biological organism capable of agentic behavior that is implemented via a neural net. This includes almost every multicellular animal [apart from (possibly) placozoa and porifera (sponges)]. This is just a current working definition: if we want to adjust it later to be more permissive or restrictive, or to make things easier or harder on ourselves, that’s an option.
As I discussed in Part 4, for any sapient species, we have a strong motive to either grant them roughly equal moral worth, or else, if that is for some reason not feasible, drive them extinct in a war of annihilation — since if we do anything in between the two, sooner or later they are going to object, using weapons of mass destruction. This very might-makes-right (and indeed rather uncouth) argument doesn’t apply to non-sapient sentient animals (that have not been uplifted to sapience): they’re never going to have nukes, or know how to target them. So granting moral worth to sentient animals, or not, is a choice that we are free to make, depending on what leads to a future society that we humans would more like our descendants to live in.
As I covered in Part 1, an ethical system needs to be designed in the context of a society. Cats, dogs and other “fur-babies” that live in our homes as pets arguably form an adopted part of our society. Domestic food animals in factory farms possibly less so, though they certainly still contribute to its well-being. Multicellular organisms such as nematodes too small to see without a hand lens living in the mud at the bottom of the ocean, definitely not so much. What happens if our society generously does the extrapolated-liberal thing here, and grants them all moral worth anyway?
The Tyranny of the Tiny
Consider ants. It’s estimated that there are around 20 quadrillion ants on Earth. If we grant each of them equal moral worth to a human,[1] as feels like the fair thing to do, then in the optimizing process their combined utility outweighs the utility of the entire human population by a factor of a few million. Now our own species’ collective well-being is just a rounding error in the ethical decisions of the Artificial Superintelligence (ASI) that we very generously designed and built for the ants. Thus the importance of providing food, healthcare services, retirement care, birth control, legal representation, transportation, local news, entertainment etc. etc. for the human population is a rounding error on doing all of these same things for each one of 20 quadrillion ants! Clearly the ASIs are going to need a lot of tiny nanotech bots to provide food, healthcare services, retirement care, birth control, legal representation, transportation, local news, entertainment etc. etc. just for ants, let alone for all the other sentient animal species, many of them tiny and numerous. We’re basically attempting to construct a planetary-scale zoo, one with utopian human-level care for each and every sentient inhabitant, down to all the ones you need a magnifying glass to see (indeed, most especially them, since they massively outnumber the ones you can see). You should expect to go to jail for antslaughter if you accidentally step on or inhale one, let alone intentionally murdering a gnat or a bedbug. We won’t be driving or flying vehicles any more: bugsplatter is morally equivalent to driving into a crowd.
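As a back-of-envelope check of that ratio (a sketch using the rough population figures quoted above, not precise data):

```python
# Aggregate moral weight of ants vs. humans, at equal weight per individual.
ants = 20e15    # ~20 quadrillion ants (rough estimate)
humans = 8e9    # ~8 billion humans

print(f"{ants / humans:.1e}")  # ~2.5e6: ants collectively outweigh humans
                               # by a factor of a few million
```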
It gets worse. Even in a so-called “post-scarcity” civilization with access to ASI, some resources are limited: carbon, oxygen, hydrogen, nitrogen, phosphorus, sunlight, and square meters of land area, for example. Even if we expand out into the solar system or further, those limits become larger, but there’s still a limit: basic resource limitations on matter, energy, and space still apply. Very clearly, the resources sufficient to support one human being comfortably could comfortably support many millions of ants: their individual nutritional and other resource requirements are roughly seven orders of magnitude smaller than ours. Humans are simply enormous resource hogs, compared to ants. So, in the longer term, the only Utilitarian-ethically acceptable solution is to (humanely) reduce the human population to zero, to free up resources to support tens of quadrillions more ants (or, if we want to also ban species extinction, keep a minimal breeding population of humans: with artificial insemination of DNA synthesized from genetic data, inbreeding could be avoided, so a small planetary breeding population should be quite safe from random fluctuations). Except, of course, that by the standards of sentient organisms, ants are still quite large, so the ASIs should get rid of them as well, in favor of an even larger number of even tinier sentient organisms. [This ethical system design problem has been called “Pascal’s Bugging”, and there is a certain analogy.]
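The same argument, run as a resource calculation (again a sketch; the seven-orders-of-magnitude figure is the rough estimate from above):

```python
# Utility bought per unit of resources, under linear utility summation
# with equal moral weight per individual.
human_cost = 1.0    # resources to comfortably support one human (normalized)
ant_cost = 1e-7     # resources per ant: ~7 orders of magnitude less (rough)

print(1 / human_cost)  # 1.0: one human's worth of utility per budget unit
print(1 / ant_cost)    # 1e7: ten million ants' worth per the same budget
# A total-utility maximizer therefore reallocates each human's
# resource budget to ~10 million ants instead.
```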
Thus, any approach that satisfies the human instinct for fairness by giving equal moral worth to all sentient organisms will inevitably lead to human extinction (other than perhaps a breeding population of survivors, if you ban extinction). I’m sorry, but I’m afraid that it is a simple, easily mathematically-deducible fact that PETA’s ethical platform is an x-risk. More concerningly, I am told that this ethical position is also the broad consensus among current academic moral philosophers, so if we build ASI and ask them to “just go do Coherent Extrapolated Volition”, they will definitely find a lot of this in the recent Ethical Philosophy literature to extrapolate from.
Let’s Try Fudge Factors
OK, that failed spectacularly. So, maybe I don’t actually care as much about an individual ant as I do a human. They are quite small, after all, and not individually very smart. We’re talking about rights for organisms with neural nets here. Could we favor smartness in some principled way? We don’t really understand neuroscience quite well enough yet to be certain about this, but suppose that we knew that synapse count was roughly comparable (up to some roughly constant factor) to the parameter count of an artificial neural net (so ignoring a few possibly-minor effects like neurotransmitter diffusion flows, synapse sparsity, neuronal type diversity, information processing in dendrites, and so forth). Neural net scaling laws are pretty universal, and basically say that the impressiveness of a neural net’s behavior is proportional to the logarithm of the parameter count: each time you double the parameter count, you see a similar-sized increase in capabilities. Humans have ~10^14 synapses, whereas ants have only ~10^8 synapses (roughly comparable to a T5-small model, ignoring questions such as sparsity). So if we use a moral weighting system proportional to the logarithm of synapse count, each human would get ~14 units-of-moral-worth and each ant only ~8, and they would still outvote us by more than a factor of a million. (Changing the base of the logarithm doesn’t help, it just rescales the denomination of a single unit-of-moral-worth.) So that didn’t help much.
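Here is that calculation spelled out (a sketch; the synapse counts are the rough figures above):

```python
import math

human_synapses = 1e14   # rough figure from above
ant_synapses = 1e8      # rough figure from above (~T5-small parameter count)
ants, humans = 20e15, 8e9

w_human = math.log10(human_synapses)  # ~14 units-of-moral-worth
w_ant = math.log10(ant_synapses)      # ~8 units-of-moral-worth

print(w_human, w_ant)                                # 14.0 8.0
print(f"{(ants * w_ant) / (humans * w_human):.1e}")  # ~1.4e6: still outvoted
```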
In fact, not only are logarithms useless, but anything non-linear doesn’t work. The only way to avoid small animals outvoting large ones in the division of resources, or vice versa, is to scale individual moral value according to something that scales linearly with a creature’s resource requirements, such as average adult weight, or calorific intake averaged across the creature’s lifespan, or something along those lines. Yes, you’re quite correct, this is a large and blatant fudge factor arranged simply to get the outcome we want, with no other logically supportable moral justification from any kind of human moral intuition — and we’ll be seeing a lot more of those before we’re done. Even that is probably not good enough to avoid perverse incentives under ASI optimization — in practice we would need to estimate all the resource consumption rates that a member of this species needs: each chemical element, energy, space, more abstract resources like peace and quiet, and so forth, price each by how scarce it is, and total up a per-individual budget based on a whole basket of resources tailored to the needs of an average member of this species across their lifetime. Repeat this for tens of millions of species, many of them not yet discovered. What we need for stability is that each species has the same utility per unit value of their basket of consumed resources. That’s the only solution that avoids the utility maximization process having a preference about trading off the population of one species against another to increase total utility achievable with the available resources. Note that if the available basket of resources changes for some reason, we either have to redo this, or accept that the utility maximization process will now want to start altering the species mix.
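To make the bookkeeping concrete, here is a minimal sketch of that per-species budget calculation; the resource names, scarcity prices, and consumption figures are all made up for illustration:

```python
# Hypothetical illustration: moral weight pegged to the scarcity-priced
# basket of resources an average individual consumes over its lifetime,
# so utility per unit of resource value is equal across species.

scarcity_prices = {"carbon": 1.0, "nitrogen": 3.0, "phosphorus": 10.0,
                   "energy": 0.5, "land_m2": 2.0}   # made-up prices

def lifetime_budget(consumption):
    """Total priced resource basket for an average individual of a species."""
    return sum(scarcity_prices[r] * amt for r, amt in consumption.items())

# Made-up average lifetime consumption figures:
human = {"carbon": 1e4, "nitrogen": 1e3, "phosphorus": 1e2,
         "energy": 1e5, "land_m2": 1e3}
ant = {r: amt * 1e-7 for r, amt in human.items()}  # ~7 orders smaller

# Setting moral weight proportional to this budget makes the optimizer
# indifferent to trading one species' population against another's.
print(lifetime_budget(human) / lifetime_budget(ant))   # 1e7
```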
Votes For Man-Eating Tigers — and Worse
Eliding a vast number of details, let’s assume we have found some way to scale the moral weight given to each member of a species that scales roughly with the physical weight/resource consumption needs of members of that species, so as not to have our ASIs overly favor smaller or larger animals. Thus our civilization is expending of the order of one ten-millionth of the resources on the well-being of an individual ant as on any individual animal around the size of a human. To put that in perspective, suppose standards of living had gone up in this more advanced civilization, so that the average annual expenditure on the well-being of an individual human was some amount X; then we’d also be spending around 10^-7 X per year on each ant (for a total of ~8 × 10^9 X a year on humans and ~2 × 10^9 X a year on ants, assuming planetary populations of each are at about current levels). So we’re probably not providing ants with individualized entertainment channels, and, unless we have a way to very efficiently produce an awful lot of tiny nanotech nursebots, probably only with basic healthcare, more like public health for ant nests, plus perhaps short-range mass transportation: some sort of tiny conveyor belts, perhaps.
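In code (with X left symbolic, since only the ratios matter here):

```python
X = 1.0                  # annual well-being spend per human, arbitrary units
humans, ants = 8e9, 20e15

ant_spend = X * 1e-7     # linear-in-resource-needs weighting
print(humans * X)        # ~8e9 * X per year on humans in total
print(ants * ant_spend)  # ~2e9 * X per year on ants in total:
                         # about a quarter of the human budget
```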
Now let’s consider predators and prey. Generally predators are no more than a few times larger or smaller than their prey, so they’re going to have roughly equivalent moral weight. Any form of utility function for a prey species is obviously going to object very strongly to them being eaten. So either predators need to go extinct (possibly apart from a minimal breeding population of individuals), or else we need to find some way to feed them textured vegetable protein, vat-grown meat, the recently-deceased remains of elderly members of their prey species, or some other non-sentient foodstuff, carefully tailored and balanced for their carnivorous nutritional needs, and appetizing enough to them that they’ll actually eat it. (For many carnivorous snakes, this requires the food to be wriggling — obviously without it having a nervous system.) Note that this is not what we do with pet cats, or carnivores in zoos, all of whom are normally fed real animal protein from real dead animals killed for this purpose. (Or for snakes, real live animals, since they refuse to eat dead ones.)
Then, after we’ve fed the predators, we still need to figure out how to stop many of them from killing members of prey species anyway, even though they’re not hungry. Some carnivores, such as lions, only hunt when hungry (for good and sufficient evolutionary reasons); others, like housecats, will hunt even when well-fed (likewise, for good evolutionary reasons). Many simple-minded carnivores tend to attack anything prey-like that they can, unless they’re currently stuffed too full to eat. We’re going to need a lot more small peacekeeper bots, following most of the carnivores around making sure they can’t actually catch anything — and when I say ‘carnivore’, don’t think ‘lion’, think beetle larvae a quarter inch long, or ~1mm copepods or water fleas.
Next, consider species of internal parasites and their natural hosts. Internal parasites are of course generally smaller than their hosts, so will be somewhat outvoted by them. But an internal parasite isn’t going to be able to live inside textured vegetable protein, gradually eating its way through it. To keep such a parasite alive, we’re basically going to need to vat-grow living tissue from its natural host, genetically modified to not have any functional neurons, inside some sort of heart-lung-tissue-culture machine, for the parasite to live inside. Even by zoo standards, that’s more like a cross between a critical intensive care ward and cloning transplant organs. Zoos don’t generally intentionally keep parasites; it’s too hard/cruel.
OK, so if that’s too much effort, maybe we just send all the parasitic species extinct? Most of them have life-cycles that are truly disgusting. I’ve never heard anyone breathe a word of protest about the fact that Jimmy Carter (a man widely suspected of being too kindly and honest to be an effective US President) has been working diligently on this and has, near the end of his life, almost succeeded in driving extinct the sentient species Dracunculus medinensis: that’s the guinea-worm, a 2½-foot long parasitic nematode worm that burrows through human flesh, injuring and disabling its human host for many months, before inflicting agonizing pain to drive them to soak their limb in the river so that the worm can release its young into the local water supply (where they go on to do equally nasty things to water fleas, before returning to humans). Though they don’t afflict humans, some of the things that certain parasitic wasp larvae do to their hosts make even the guinea-worm look considerate; I’m certain their hosts won’t miss them. While we’re at it, let’s get rid of the deadliest animal in the world, Aedes aegypti: the yellow-fever mosquito, or at least all the diseases it carries.
However, whatever we decide to do about parasites, just letting them go extinct isn’t a viable option for predators. Not only would we no longer have cool-looking predators to film documentaries about, they also have strong, complex effects on the ecosystems they’re part of. When humans eliminated wolves from most of Europe and North America, their prey species, such as deer, exploded in population, so had to be kept down by hunting (not that we objected to the work, as venison is tasty). However, the presence of wolves changes deer’s behavior and grazing patterns in ways that human hunters don’t, especially for plants near water sources and their likelihood of eating the bark off tree saplings. Many plant species, and the insects and other animals dependent on them, went extinct, or almost so, forest distributions changed, and the entire ecology of the food webs across much of two continents went out of whack. When wolves were finally reintroduced in many still-wildish places, insect and plant species that hadn’t been seen in a century or more reappeared from obscurity (and doubtless many other extinct ones didn’t). This is bad enough on land, among warm-blooded animals where each level in the food web is an order of magnitude smaller than the one below. But for cold-blooded animals in the sea, where that ratio is less than two, it’s far more dramatic. Almost all sentient species big enough to see in the ocean are carnivores, that feed on smaller carnivores, that feed on even smaller carnivores, down to carnivores almost too small to see that live off herbivorous zooplankton that live on phytoplankton or predatory or partially-predatory single-celled species. So in aquatic ecosystems, “how about we just drop the carnivores?” leads to an ecosystem where almost everything left is too small to see.
In Search of an Ecological Stabilization Plan
We are proposing attempting to eliminate hunger, disease, predation, and parasitism for all animals with a nervous system, including vast numbers too small to see. Clearly, unless we use birth control for all of these species, they are all going to have a population explosion, starve, and then crash, some much faster than others. Thus all our ecologies are now stabilized entirely by the provision of birth control, administered by our ASIs, whose decisions are guided by our ethical system design. So now the stability of all ecologies depends on getting the ethical system design correct.
About the closest thing to a coherent-sounding political argument one could make (deriving from anything like fairness) for moral weight scaling linearly with average adult weight, or at least resource requirements, would be if we argued that the aim was to apply something closer to equal moral weight per species, i.e. to be fairish on a per-species basis, and then share that moral weight out between members of the species. Earth can support vastly more ants, thus each ant’s share of its species’ moral weight is vastly smaller than each human’s. That isn’t why or even how we implemented this, but it is a vaguely plausible-sounding debating position, among humans.
However, if we actually did that (allocate an allowance of moral weight per species and then share it across all current members of the species, so their individual moral weight varied inversely with population), then we’d get a very odd variant of Utilitarianism. The sum over utilities in Utilitarianism normally encourages maximizing population: all things being equal, a population of ten billion humans on Earth generates twice the utility per year of a population of five billion, so there is a moral imperative to increase population until it reaches levels high enough to impose either x-risk or significant resource constraints, ones sufficient to decrease individual happiness by enough to make total happiness peak and start to decline even though the population is still rising. (This issue with conventional Utilitarian ethical systems is widely known among philosophers of ethics as “The Repugnant Conclusion”.) So normal Utilitarianism leads to maximizing population as close as possible to carrying capacity, at levels that are significantly resource-constrained. However, under this unusual variant, human populations of a hundred people or ten billion produce the same utility, since the hundred each get one hundredth of Homo sapiens’ moral weight allocation whereas the ten billion each get a ten-billionth. So now there is no direct moral preference for any specific population level, except of course that reducing the population level will increase available resources per individual, presumably allowing us to make them individually marginally happier, so until diminishing returns on this saturate and flatten out completely, there is instead a moral imperative to decrease population. Even below that level, we don’t care either way about reducing the population, until it gets so low that x-risks start to cut in. So you instead end up with very low populations. In anything resembling a real ecology, increasing the resources available per individual of a species would tend to increase reproductive success, causing the population of the species to go back up, producing a stabilizing effect. But in an ecology controlled entirely by birth control, population dynamics are decoupled from resources and become a policy decision, only guided by our utility function. [Note that I haven’t even started to think about how to handle speciation or species extinction under this framework.]
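A two-line illustration of why this variant has no direct population preference (the numbers are arbitrary):

```python
# Fixed weight per species, shared equally among its N members:
# total utility is independent of N.
def species_utility(N, species_weight=1.0, utility_per_individual=1.0):
    return N * (species_weight / N) * utility_per_individual

print(species_utility(100), species_utility(10_000_000_000))  # 1.0 1.0
# ...except that lower N means more resources per individual, so
# utility_per_individual rises as N falls, until diminishing returns
# saturate: hence the push toward very low populations.
```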
So to get a stable population at a reasonable level, neither maximized to the point of resource constraints nor minimized except for x-risk, we likely need the moral weight per individual to be neither constant (independent of the population N), nor scaling as 1/N as implied by equal shares of a constant weight per species, but rather something in between, such as N^(ε-1) for some small fraction ε, or 1/(N+C) for some large number C, carefully chosen in light of the happiness vs. available-resources-per-individual curve for the specific species so as to stabilize the optimal species population at some desired fraction of the maximum carrying capacity of the ecology that the species lives in. Again, we’re blatantly adding more fudge factors to our ethical system to make the optimum be the result we want. If we don’t get this calculation, and/or the calculation of the amount of resources a species needs, quite correct, then a species will be disadvantaged in the utility calculation, and our ASIs may make a rational decision to have fewer of them. If so, their individual moral weight will increase, as N^(ε-1) or 1/(N+C), improving their moral weight to resource cost ratio, making them a more attractive option compared to other species, thus the situation for allocating resources between species will be self-stabilizing, at the cost of some changes to our planned population levels from when species resource budgets were originally assigned.
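A toy version of that stabilization, with a made-up diminishing-returns happiness curve (everything here is illustrative, including the choice of ε):

```python
import numpy as np

R = 1e6      # total resources available to this species
eps = 0.1    # small fraction: weight per individual scales as N**(eps - 1)

def happiness(resources_per_individual):
    # Made-up curve: rises with resources, saturating toward 1.
    return 1.0 - np.exp(-resources_per_individual)

N = np.logspace(2, 7, 100_000)              # candidate population sizes
total_utility = N**eps * happiness(R / N)   # = N * N**(eps-1) * happiness

print(f"{N[np.argmax(total_utility)]:.3g}")  # an interior optimum: neither
# maximized to resource limits nor minimized to near-extinction.
```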
While this might be sensible when resources are fixed, obviously in the case of colonization increasing available resource totals, it makes sense to increase moral weight budgets: if we mine asteroids to build a lot of Bishop Rings, or terraform Mars, or terraform and colonize a planet in another solar system, then resources increase, and populations can increase. This is unequivocally a good thing, to be encouraged, so our utility function should treat this as such. So if we were giving per-species utility and then splitting it across individuals, we should increase the budget of all species in proportion to the increased resources; or in the N^(ε-1) formulation, we should rescale in whatever way is necessary for the stable solution with e.g. twice the resources from two Earths to be twice the population giving twice the utility (for the 1/(N+C) version, that’s just changing C to 2C).
Birth control on this scale provides more than just mathematical conundrums and the obvious logistical issues. Many species are what biologists call “r-strategy” species, who have large numbers of offspring (for species other than mammals or birds, often extremely large numbers), where in their current ecosystems the vast majority of these die before reaching adulthood. However, as soon as the eggs begin developing a functional nervous system, they are arguably sentient (and even before that, they are of course a potential sentient-to-be). So we either get a new version of the abortion debate, where for practicality we have to not grant moral weight to members of these species until some later point in development, so that we can cull them before that (say by means of egg-eating predators, or more bots), or else we need to achieve this entirely by birth control. Doing that presents a further problem. Consider a species such as salmon, where a breeding pair lays many thousands of eggs. In the absence of hunger, predation, parasites, and disease, those will almost all survive to adulthood. The entire spawning run population of a particular river in a particular year is fewer than that, so we can only allow one pair to reproduce, and by the next generation the entire population will all be siblings, so there will be massive inbreeding causing genetic disease from deleterious recessives, and loss of genetic diversity. So we need to find a form of birth control that is not all-or-nothing, where either a parent is fertile or not, but one that instead reduces the number of viable fertilized eggs laid per spawning to just a hair over two, to ensure stable population replacement without increasing inbreeding. This sounds even harder than just providing birth control for every animal on the planet big enough to see: now in many cases it also needs to be very carefully designed birth control.
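A quick simulation of why the target has to be “a hair over two” (the numbers are illustrative):

```python
# With hunger, predation, parasites, and disease removed, essentially
# all eggs survive, so each generation multiplies by eggs_per_pair / 2.
def project(population, eggs_per_pair, generations):
    for _ in range(generations):
        population *= eggs_per_pair / 2
    return population

print(project(1_000, eggs_per_pair=4_000, generations=3))  # 8e12: explosion
print(project(1_000, eggs_per_pair=2.02, generations=3))   # ~1030: stable-ish
```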
How About Genetic Engineering?
We have a great many extremely difficult problems to solve here: feeding predators vegetarian food, arranging that they don’t successfully hunt even though they’re well fed, complex birth control, the list goes on and on. So far I’ve been talking as if these problems all need to be solved with huge numbers of tiny bots (preferably very economical and resource-efficient ones, so as not to use up a lot of resources that could be supporting sentient creatures). Could we instead solve at least some of our problems using genetic engineering?
As I briefly discussed in Part 1 (and as I’ll explore rather more in Part 6), minimizing x-risk is very important. That clearly includes doing so for the non-human portions of the biosphere, most especially if we’re granting them moral weight. So if some sort of disaster were to occur that caused a civilizational collapse (for years, decades, centuries, or millennia) it would be extremely unfortunate if our genetically modified animals died out in the meantime. Having to revert to a state of nature, red in tooth and claw, would be a tragedy, but obviously less so than going extinct. So that would suggest that any genetic engineering we do needs to be quite limited, so as not to significantly reduce animals’ evolutionary fitness in a natural ecosystem. Such as by messing with a predator’s hunting instincts, or any animal’s reproduction.
Possibly we could do something where there was some drug in the textured vegetable protein that we were feeding the predators, for example, that triggered a genetically engineered pathway that suppressed their hunting reflexes, but where after a few days of starvation this wore off, the extra pathway deactivated, and normal hunting instincts returned (for species that also need hunting practice to hone their skills, we would also need to ensure they had had it, without any animals being harmed). So a genetic engineering approach is not unusable, but the significant effects need to be temporary, able to wear off well before the animal starves. Then we need to deal with logistical or medical screw-ups where an individual didn’t get fed for a few days for some reason and goes on a killing spree as a result. So I think we’re still going to need a lot of tiny nanotech peacekeeper bots, even if only as a backup.
Of course, if it were technologically feasible to recreate extinct species once civilization got rebuilt (which is presumably even more challenging to do for live-bearing species than for ones with small eggs and no parental care), or if it were feasible to deep-freeze and then revive animals in some form of deep storage that would survive for centuries or millennia (perhaps out in space somewhere very cold), then this might be a less restrictive constraint.
Could We do Half-Measures?
Is there a way to make all this any easier? Absolutely, but you have to throw in some sort of unjustified arbitrary hack just to get the results we want, with no plausible-sounding rationalization for it. On the other hand, we are already looking at a heap of many millions of those, so what’s one more if it simplifies the ethical system overall?
Suppose we pick an objective threshold, in synapse count, say: maybe around 10^9 synapses. If you don’t have at least that many synapses as an average adult member of your species, you get zero moral weight: things can eat or parasitize you, nature can remain red in tooth and claw for you — we don’t care, because you’re too dumb, sorry. In that case, your only moral value is indirect: as food for, or part of an ecosystem that supports, or otherwise being of value to, things above this moral weight cut line. Ants are on their own, and are just anteater chow and garbage disposal services, unless someone can persuade us that the entire nest is a collective intelligence smart enough to be equivalent to at least 10 times as many synapses as any individual member (which doesn’t sound entirely implausible). Above that level, you suddenly have moral weight (linearly proportional to your species’ average body mass or resource needs, of course), or perhaps this phases in logarithmically over some range of synapse count, or just keeps scaling logarithmically with synapse count up to or even past the human level of ~10^14. Every mammal and bird would easily make that cut line, so would most reptiles and many fish (mostly larger ones), but pretty-much no insects (except possibly bees, especially if you grant them some degree of hive-mind bonus). Then most of the vast numbers of tiny carnivores don’t need individual guard bots to stop them eating anything they want, since all their usual prey has no moral weight, and in the ocean we mostly only need to worry about stopping any large fish eating medium-sized fish. Which still sounds really hard, but is at least orders of magnitude easier than doing the same for all the copepods. Most of the animals that make this cut are ones that most current humans would feel at least a little bad if they had accidentally killed one, or someone had intentionally killed one for no good reason, and most of the things that few people (other than Jains) currently worry about stepping on don’t make the cut, so accidentally stepping on or driving over one doesn’t need to be made into a crime. Bugsplatter stops being vehicular mayhem (perhaps unless bees are involved), but roadkill still is.
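Sketching this as a weighting function (the cutoff value, the body-mass proxy, and the example figures are all illustrative assumptions):

```python
CUTOFF = 1e9   # synapse-count threshold suggested above (illustrative)

def moral_weight(mean_adult_synapses, mean_adult_mass_kg):
    """Zero below the cutoff; above it, weight scales linearly with
    resource needs, proxied here by body mass (the earlier fudge factor)."""
    if mean_adult_synapses < CUTOFF:
        return 0.0
    return mean_adult_mass_kg

print(moral_weight(1e8, 5e-6))    # ant: 0.0, no moral weight
print(moral_weight(1e9, 1e-4))    # honeybee-ish: just makes the cut
print(moral_weight(1e14, 60.0))   # human: weight 60
```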
This makes things easier, by some number of orders of magnitude. You need a less-vast number of tiny bots, and the smallest ones don’t need to be as tiny. Depending on where you put the cut line, you could even make it a lot easier: pick a really convenient a-little-below-human number like 10^13 synapses and you can exclude most mammals other than primates, cetaceans, and a few other large, smart mammals such as elephants. At 10^13 synapses, dogs would be right on the borderline. However, your justification for the ethical system then looks really strained/arbitrary, and owners of pet cats and rats are going to feel their fur-babies are being treated unfairly.
Or, you could pick some biological dividing line on the phylogenetic tree of Life. Vertebrate chauvinism, for example. Or you could do the pescatarian/Catholic thing and exclude fish (which would simplify the oceans massively): rights only for amphibians, reptiles, birds, and mammals — if your ancestors didn’t have a backbone and the gumption to colonize the land, we don’t recognize you as part of our ingroup. Take your pick, but do expect interminable arguments about it.
This is Incredibly Hard: Please Don’t Stake the Human Race On It
What I have been outlining here is incredibly hard. We are constructing an ethical system of vast, intricately gerrymandered complexity, one very carefully tuned with many millions of parameters intended to get all the millions of outcomes we want. This doesn’t look anything like the sorts of moral systems human ethicists have thought about: it mostly looks extremely arbitrary, other than from a “does this get us the outcome we want?” viewpoint. Its moral consequences require us to invent entire new technical fields, like ergonomics for dust mites, for each of tens of millions of species. Even just eliminating disease is going to warp many ecosystems, especially in the sea or the tropics. There are doubtless many more challenges that I haven’t even thought of. Some of them could well be even harder than the ones I have thought of, or even effectively impractical at any specific technological level.
My best guess is that a sufficiently advanced society with access to superintelligences, nanotechnology including cheap and very energy-efficient nanotech bots, and very high levels of skill in genetic engineering might be able to pull this off. It’s a vast amount of work, many orders of magnitude harder than building a utopia just for humans, but it’s not actually clearly physically impossible. However, it does look complex enough that no society, not even one with superintelligences, is going to get this right the first time — unless they can do long-term modeling of an entire planet at an astonishing level of detail, down to individual organisms too small to see. Using something like a computer the size of a moon or a planet, presumably. So no, I’m not actually claiming that this clearly could never be done, but it is incredibly hard, and even for a society that could do this, it will take some time to get all the kinks out. So you had better have solved corrigibility.
However, it’s not within the capabilities of any society short of that kind of technological level. So, if we are encoding the terminal goal for a first-ever superintelligence, and if that terminal goal is not very corrigible, but we are nevertheless dumb enough to put this requirement in, then we are taking a massive and extremely stupid extinction risk with the future of the human race. Please do not do this. This is not something we can expect a first superintelligence to be able to do: it will screw it up, and if it drives us extinct while doing so, whether that’s in favor of ants or dust mites, that won’t be anything approaching death with dignity, that’s just really dumb. The only way one might be able to reduce that risk is to design into our AI system some form of fallback where, when things start to go bad, we stop granting sentient animals moral weight, and we revert to only humans (or only evolved sapients) having moral weight while we do disaster control.
So, if we’re in a position to get something like CEV, then sure, mention that something along these lines might be a generous thing to do at some point, in the future, if and when it’s ever practical, at least for the really smart animals to start off with. However, this is a maybe-nice-to-have-eventually: the human race’s continued existence has to come first.
The Status Quo: Loaned Conditional Moral Weight
Almost all humans find animal suffering distasteful and unpleasant (the rare exceptions, psychopaths/sociopaths and some sadists, also don’t dislike other humans’ suffering, so are arguably “exceptions that prove the rule”). Given that the human species is evolved for a hunter-gatherer niche, it is surprisingly psychologically difficult for us to kill an animal comparable to our natural prey species with a hand weapon (though one can of course become acclimatized to this). This is particularly so for animals that are large, fluffy, act intelligently, or are cute (i.e. that have heads and eyes large enough to trigger our parental instincts).
This effect is of course weaker below some size of animal (as long as our parental instincts aren’t engaged), for less mammalian animals, the further away and less visible and apparent the animal’s suffering is, and to some extent also the less it seems like our fault, or that of any human. Fish are psychologically easier to kill, insects easier still, animals too small to see are trivial. Nevertheless, if we are, as I have been advocating during this sequence on Ethics, attempting to construct an ethical system for a society that tries to ensure low prevalence of things that offend the instinctive ethical and aesthetic sensibilities that natural selection has seen fit to endow Homo sapiens with, and high prevalence of things that those approve of (like happy kids, kittens, gardens, water-slides, that sort of thing), then reducing the amount of animal cruelty is going to be a goal. Especially so for cruelty to kittens, or other fluffy, playful, cute mammals that trigger our parental instincts.
The net result of this is a voluntary loan of moral weight, from humans, to animals. Much more so to some animals than others, and in ways that also depend on conditional things like visibility, circumstances, responsibilities, and practicalities. The results of this can seem very unfair and arbitrary, if you think about it from the point of view of the animal’s welfare: why is bullfighting or cockfighting in public a prosecutable crime in most countries, but slaughtering steers or chickens in an abattoir routine almost everywhere (apart from cows in most states in India)? The answer is that it’s not actually about the welfare of the cow or the chicken, it’s about the welfare and conscience of the humans (and, for cows in India, about human religious symbolism). We are omnivores, and our farm animals’ loaned moral weight goes away as they enter the killing area of the abattoir, so long as no significant unnecessary cruelty is involved.
The extent of this is quite dependent on the society. Many societies still agricultural enough that most people still slaughter their own livestock find activities like bullfighting and cockfighting less objectionable. Things like nature documentaries make what is happening to animals on the other side of the world more visible (especially for more photogenic species), and environmental activism has introduced a norm that an animal species may have some moral weight as a separate entity, to be shared between its individuals, increasing their moral weight if the species gets close to extinction. Sufficiently good and nutritious textured vegetable protein would of course make it feasible for more countries to adopt an India-like attitude to more farm animals (likely greatly reducing their numbers, except for honey-bees, milk-cows and egg-laying chickens).
Almost all human cultures throughout history have used ethical systems that give some animals (most commonly mammals and birds) some moral weight in some circumstances. However, generally these circumstances did not include a wild prey animal being hunted and eaten by a wild predator, especially if no human happened to be around at the time. Given the way humans feel about animal suffering, we should and will continue to do this sort of thing in future moral systems that we devise for future societies. We might even decide to do so more, or attempt to do so in what feel to us like more principled ways. So I am definitely not suggesting that animals, sentient or indeed otherwise, or even plants or fungi, should never have any moral weight. I am simply suggesting that this must continue to be loaned, what lawyers call “pro tanto”: to a limited and context-appropriate extent, because and in situations where humans care about them, rather than being granted to them outright in any amount on grounds of sentience (for any sensible definition of sentience) — at least until that incredibly hard goal actually becomes technologically practicable.
[1] That is to say, before summing across them, each ant’s and each human’s individual utility functions are linearly rescaled so that the distance between their zero (i.e. “I’d rather be dead than permanently below this”) utility level and their maximum utility level is the same. Or possibly so that the standard deviation of their utility under a normal range of circumstances is the same. This scaling is intended to avoid us having any ‘Utility Monsters’. As we shall see, it fails dramatically, making ants into utility monsters, which is why utility monsters are instead usually defined in terms of utility per unit of resources.
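A minimal sketch of that rescaling (the function name is mine; the normalization is the one just described):

```python
def normalized_utility(u, u_zero, u_max):
    """Linearly rescale an individual's utility so the span from the
    'rather be dead' level (u_zero) to the maximum (u_max) is one unit,
    giving every individual, ant or human, the same [0, 1] range."""
    return (u - u_zero) / (u_max - u_zero)

# Summed across ~2e16 ants vs ~8e9 humans, equal per-individual ranges
# are exactly what let the ants dominate the total.
```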
If we assume that all ants are copies of each other (they are not, but they are more similar than humans), then all 20 quadrillion ants will have the same moral value as just one ant.
This means that preservation of species is more important than preservation of individual insects and it is closer to our natural moral intuitions.
An interesting suggestion. But bear in mind that ants are an entire family of insects, one containing over 12,000 species, while humans are one (rather recent and thus genetically not-yet-very-diverse) species. So morphologically or genetically, two randomly selected ants will be a lot less similar to each other than two random humans are. Mentally, well, they have roughly a millionth as many synapses as us, so there’s going to be some information-theoretic sense in which a human’s neural net pattern contains vastly more individuality than an ant’s.
So while I agree that our natural moral intuitions care very little about the distinctions between individual ants, I suspect that the fact that most of those are too small for us to see without a hand lens is doing a lot of work there. A formicologist more familiar with those small-scale differences might disagree with your intuition.
[Also rather specifically in the case of ants, the survival of the ant-nest, which is the breeding unit for ants, depends mostly on the queen and enough workers to care for her: in a large colony, losing a single worker is about as serious as chipping a fingernail is for a human.]
However, I do agree with your moral intuition that a species deserves separate moral weight, beyond that of the individuals currently comprising it. Whether that represents all the potential future individuals that species extinction would make impossible, or we want to model it separately as some additional species-level moral weight as I suggest at one point above, I think that’s a good element to include in an ethical system design.
I don’t think there’s a coherent system where copies with different experiences have a lot less moral worth.
For biological systems, I agree. (As I discuss in Parts 1 and 3, I think we have to use different approaches for digital systems, where generating a large number of identical or similar copies of a sapience is trivial.)
Why so much effort on trying to come up with a simple metric that we don’t value? I value organisms by something like their capability to steward their own supporting conditions/investigate supporting conditions for complex forms of value creation.
What are your thoughts on David Pearce’s “abolitionist” project? He suggests genetically engineering wild animals to not experience negative valences, but still show the same outward behavior. From a sentientist stand-point, this solves the entire problem, without visibly changing anything.
I think it’s basically impossible, using just genetic engineering. There are documented cases of humans born without the ability to feel pain, and they don’t usually live long: they tend to die in stupid accidents, like jumping off a building, because they didn’t learn the lesson as a kid that that hurts so you shouldn’t do it. Or similarly in leprosy, where the ability to feel pain is lost due to bacterial damage to the nerves (typically as an adult once you have learnt not to do that sort of dumb stunt), the slow progressive disfiguring damage to hands and face from leprosy isn’t directly bacterial, it’s caused by the cumulative effect of a great many minor injuries that the patient doesn’t notice in time, because they can’t feel pain any more.
So, producing the same behavior without negative valences would require a much larger, more detailed world model, able to correctly predict everything that would have hurt or been unpleasant and how the creature would have reacted to it, and then trigger that reaction. Even assuming you can somehow achieve that modelling task in a nervous system as a “philosophical zombie” involving no actual negative valences, just a prediction of their effects on an animal (it’s very unclear to me how to even tell, I suspect “philosophical zombies” are a myth, and if they’re not then they’re a-priori indistinguishable), then we currently have no idea how to bioengineer something like that, and clearly the extra nervous tissue required to do all the extra processing would add a lot to physiological needs. The most plausible approach I can think of to achieve this would be some sort of nanotech cyborging where the extra processing was done in the cyborg parts, which would need to be much more compact and energy efficient than nervous tissue (i.e. roughly Borg-level technology). So it’s an emotionally appealing idea, but I suspect actually even harder to implement than what I proposed. For largish animals, it might actually be technologically easier to just uplift them to sapience and have them join our society.
Rereading https://www.abolitionist.com/, David Pearce doesn’t go into much detail there on his proposal for animals, but even he also appears to recognize that it’s going to take more than just genetic engineering.
A more feasible intermediate proposal might be some form of “reduced harm ecosystem”: Eliminate all parasites and diseases, and where required to keep ecological stability with previous diseases removed, bioengineer some replacement diseases whose symptoms are as mild as possible apart from causing sterility. That still doesn’t eliminate predation, but perhaps we could bioengineer some form of predation-induced unconsciousness, where once a predator is actually eating them and escape is clearly impossible prey animals just pass out. Then that still leaves hunger, accidental injuries or ones from a predator that the prey escaped, and so forth. Radio collars, park rangers, anesthetic darts and vets, or robotic versions of those, would be the best we could do for that.
Pearce has the idea of “gradients of bliss”, which he uses to try to address the problem you raised about insensitivity to pain being hazardous. He thinks that even if all of the valences are positive, the animal can still be motivated to avoid danger if doing so yields an even greater positive valence than the alternatives. So the prey animals are happy to be eaten, but much more happy to run away.
To me, this seems possible in principle. When I feel happy, I’m still motivated at some low level to do things that will make me even happier, even though I was already happy to begin with. But actually implementing “gradients of bliss” in biology seems like a post-ASI feat of engineering.
(By the way, your idea of predation-induced unconsciousness isn’t one I had heard before, it’s interesting.)
Whatever positive valence stopped when you were injured would need to be as extremely strong a motivator as pain is. So somewhere on the level of “I orgasm continuously unless I get hurt, then it stops!” That’s just shifting the valence scale: I think by default it would fail due to hedonic adaptation — brains naturally reset their expectations. That’s the same basic mechanism as opiate addiction, and it’s pretty innate to how the brain (or any complex set of biochemical pathways) works: they’re full of long-term feedback loops evolved to try to keep them working even if one component is out-of-whack, say due to a genetic disease.
This is related to a basic issue in the design of Utilitarian ethical systems. As is hopefully well-known, you need your AI to maximize the amount of positive utility (pleasure), not minimize the amount of negative utility (pain), otherwise it will just euthanize everyone before they can next stub their toe. (Obviously getting this wrong is an x-risk, as with pretty-much everything in basic ethical system design.) So you need to carefully set a suitable zero utility level, and that level needs to be low enough that you actually would want the AI to euthanize you if your future utility level for the rest of your life was going to be below that level. So that means the negative utility region is the sort of agonizing pain level where we put animals down, or allow people to sign paperwork for voluntary medical euthanasia. That’s a pretty darned low valence level, well below what we start calling ‘pain’. On a hospital numerical “how much pain are you in?” scale, it’s probably somewhere around spending the rest of your life at an eight or worse: enough pain that you can’t pay much attention to anything else ever.
So my point is, if you just stubbed your toe and are in pain (say a six on the hospital pain scale), then by that offset scale of valence levels (which is what our AIs have to be using for utility in their ethical systems), your utility is still positive. You’re not ready to be euthanized, and not just because you’ll feel better in a few minutes. So by utility standards, our normal positive/negative valence scale has a lot of hedonic adaptation already built into it. So what I think you’re suggesting is to reengineer humans and animals so the valence scale matches the utility scale, moving the zero point down to what was previously −8 (pain level 8), lock it there by removing hedonic adaptation, and then truncate the remaining part of the scale below the new 0 (i.e. hospital pain levels 9 and 10). Possibly by having the animal pass out?
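A toy rendering of that remapping, just to pin down the arithmetic (the function and figures are mine, not Pearce’s):

```python
def reengineered_valence(old_hospital_pain_level):
    """Map the old scale so the new zero sits at old pain level 8,
    truncating everything below the new zero (the animal passes out)."""
    new_valence = 8 - old_hospital_pain_level  # shift the zero point down
    return max(new_valence, 0)                 # truncate old levels 9-10

print(reengineered_valence(0))  # 8: a pain-free life is strongly positive
print(reengineered_valence(6))  # 2: a stubbed toe is still net-positive
print(reengineered_valence(9))  # 0: truncated (pass out)
```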
I can’t immediately tell you why that wouldn’t work, but I note that it’s not the solution evolution came up with, so it’s clearly not optimal. Hedonic adaptation basically alters the situation the animal is motivated by to “try to do better than I expected to”. Which is (as various people have observed of consumerism) basically a treadmill. Presumably evolution did this for efficiency, to minimize the computational complexity of the problem. But if the resulting increase in complexity wasn’t that bad, maybe we wouldn’t need to enlarge the pre-frontal cortex (assuming that’s where this planning occurs in most mammals) that much?
Yeah, it’s hard to say whether this would require restructuring the whole reward center in the brain or if the needed functionality is already there, but just needs to be configured with different “settings” to change the origin and truncate everything below zero.
My intuition is that evolution is blind to how our experiences feel in themselves. I think it’s only the relative differences between experiences that matter for signaling in our reward center. This makes a lot of sense when thinking about color and “qualia inversion” thought experiments, but it’s trickier with valence. My color vision could become inverted tomorrow, and it would hardly affect my daily routine. But not so if my valences were inverted.
Good news! That’s already the way the world works.
What about our pre-human ancestors? Is the twist that humans can’t have negative valences either?
I think you’ve correctly identified some instances of a more pervasive problem: any consistent system of ethics is going to have some consequences we don’t intuitively like. That’s because our ethical intuitions aren’t entirely consistent.
If we have any mathematical system of ethics along the lines you describe, solving for its maximum is going to mean maximizing some things and thereby minimizing others. For instance, if every human’s happiness has equal moral weight, we’d end up somehow selecting for people who can be happy using fewer resources, so we can create more total human happiness.
It seems that almost by definition the best we can do at matching what we like is to fulfill human ethical preferences. That’s roughly the one-vote-per-human rule you discuss. This is different than assigning happiness a worth and solving for it mathematically. People can do whatever they like with their votes, and change them over time.
I haven’t worked through how this logic unfolds over time. Does it more or less work to have current humans vote on everything, including who gets to make how many offspring? If everyone can make as many offspring as they want, the cultures or belief systems that do will quickly dominate all future voting.
This type of votes-for-humans-only doesn’t necessarily have horrific consequences for animals. I think that even if animals don’t have voting rights, humans are likely to do something a fair amount like what they’d want as we become a more mature and better educated species. I would rather be friends with people who care about animal suffering, and I think most others feel the same way. So I’d guess we’d see a post-scarcity future in which animals don’t suffer much; keeping them alive but suffering for aesthetics when we have other options seems obviously monstrous.
There’s lots more interesting discussion to be had on this topic.
I think there’s another issue here. Human moral intuitions are evolved to work well between humans, in a primate troop/village with 50-100 individuals, or perhaps a few such groups allied. Extending these to O(100) million humans in a country or even 8 billion humans on a planet has worked surprisingly well for us. But once you start to include other sentient creatures, as I show above, a lot of things break down if you try to follow human moral intuitions — which isn’t very surprising, since those are now well out of the distribution they were evolved in. And once you don’t have human moral intuitions guiding and constraining your ethical system design, the design decisions start to get a lot more arbitrary. For any outcome you want, it’s generally pretty easy to come up with an ethical system that will make that be the optimum (if nothing else, minus the L2 norm of the difference under some metric between the state of the world and the outcome you want). The challenge is to design something that behaves better than that, and actually gives sensible-looking preference orders, has the right stability properties under perturbations, and works sensibly under a range of conditions.
As a friend of ants, what’s good for ants is good for me, and what’s good for me is good for ants.
But I don’t see how a vast increase in Earth’s ant population would be helpful to ants any more than creating copies of myself existing in parallel would be an improvement for me or my species. Apparently, this planet is already big enough for me and a bunch of ants to get along.
I didn’t always love ants. I have intentionally poisoned them, crushed them, and burned them alive. As a child, I hadn’t understood that all violence is mutually detrimental.
AI can learn this.
So you’re happy to donate some of your moral weight to ants.
Basically, if our AIs freed up resources by eliminating humans, then most current ant nests could found several daughter nests.
In that section I’m assuming the AI is using something resembling Utilitarian ethics, attempting to maximize the total utility, where ‘utility’ is something that can be summed across individuals. So twenty quadrillion ants living good lives is approximately twice as good as only ten quadrillion ants living equally good lives, and forty quadrillion ants living good lives is twice as good again. As I discuss later in the post, it’s possible to construct ethical systems that don’t have this property that (all things being equal) utility scales with population level, but something along the lines of Utilitarianism with linear summation of utility is usually the default assumption on Less Wrong (and indeed among many contemporary Ethical Philosophers).
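In symbols (a minimal sketch of the standard total-utilitarian assumption, not anything unique to this post): with $N$ individuals each having utility $u_i$,

$$U_{\text{total}} = \sum_{i=1}^{N} u_i,$$

so if every ant enjoys the same positive utility $\bar{u}$, then $U_{\text{total}} = N\bar{u}$, and doubling $N$ doubles the total. Everything in this thread about “twice as good” is just this linearity.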
Thanks. I think the default assumption you expanded on doesn’t match my view. Global ethical worth isn’t necessarily a finite quantity subject only to zero-sum games.
I’m happy for any and all living beings to be in good health. I don’t lose any moral weight as a result. Quite the opposite: when I wish others well, I create for myself a bit of beneficial moral effect; it’s not donated to ants at my expense.
Ethical worth may not be finite, but resources are finite. If we value ants more, then that means we should give more resources to ants, which means that there are fewer resources to give to humans.
From your comments on how you value reducing ant suffering, I think your framework regarding ants seems to be “don’t harm them, but you don’t need to help them either”. So basically reducing suffering but not maximising happiness.
Utilitarianism says that you should also value the happiness of all beings with subjective experience, and that we should try to make them happier, which leads to the question of how to do this if we value animals. I’m a bit confused: how can you value not intentionally making them suffer, but not also conclude that we should give resources to them to make them happier?
Great points and question, much appreciated.
I devote a bit of my limited time to helping ants and other beings as the opportunities arise. Giving limited resources in this way is a win-win; I share the rewards with the ants. In other words, they’re not benefiting at my expense; I am happy for their well-being, and in this way I also benefit from an effort such as placing an ant outdoors. A lack of infinite resources hasn’t been a problem; it just helps my equanimity and patience to mature.
Generally, though, all life on Earth evolved within a common context, and it’s mutually beneficial for us all that this environment be unpolluted. The things that I do that benefit the ants also tend to benefit the local plants, bacteria, fungi, reptiles, mammals, etc., myself included. The ants are content to eat a leaf of a plant I couldn’t digest. I can’t make them happier by feeding them my food or singing to them all day, as far as I can tell. If they’re not suffering, that’s as happy as they can be.
I think the same is true for humans: happiness and living without suffering are the same thing.
Unfortunately, it seems that we all suffer to some degree or another by the time we are born. So while I am in favor of reducing suffering among living beings, I am not in favor of designing new living beings. The best help we can give to hypothetical “future” beings is to care for the actually-living ones and those being born.
I think your views contradict utilitarianism. Under utilitarianism, moral worth resides in each individual, since each has a subjective experience of the world, while a collective like “ants” does not. So doubling the ant population is twice as good.
You’re free to disagree with utilitarianism, but there’s a lot of work showing how it aligns pretty closely with most people’s moral intuitions. That’s why most folks around here find utilitarianism more appealing than the type of ethics you seem to be espousing.
I think ethics is just a matter of preference, but I’d apply something like utilitarianism in most cases because it’s what I’d want applied if we picked a set of ethics to apply universally.
You might want to read up on utilitarianism if you haven’t, because you’ll find it the starting point for many discussions of ethics on LessWrong.
There is? Could you link to some examples?
It’s my understanding that utilitarianism does not align with most people’s moral intuitions, in fact. I would be at least moderately surprised to learn that the opposite is true.
Utilitarianism, however, has many, many problems. How familiar are you with critiques of it?
I’m not arguing that utilitarianism is correct in any absolute sense, or that it aligns perfectly with moral intuitions. I was just trying to explain why so many people around here are so into it. I’m familiar with many critiques of utilitarianism. I’m not aware of any ethical system that aligns better with moral intuitions. No system is going to align perfectly with our moral intuitions because they’re not systematic.
Any system with a slot for intention does.
Have you read Eliezer’s Ends Don’t Justify Means (Among Humans)?
What am I supposed to be getting out of that? Inasmuch as it is a half-hearted defence of deontology, it isn’t a wholehearted defence of pure utilitarianism.
Eliezer is usually viewed as a Utilitarian, which would make him a consequentialist. His point in that article seems to be an acknowledgement that because human thinking is so prone to self-justification, deontology has its merits. Which I thought related to your point on caring about intentions as well as effects.
It’s not a given that utilitarianism involves caring about intentions.
Rather the opposite. Utilitarianism cares about outcomes, so to first order it doesn’t factor in intentions at all. Of course, if someone intends to harm me, somehow fails, and instead unintentionally does me good, then even though I haven’t been harmed yet, I do have a reasonable concern that they might try again, perhaps more successfully next time. So intentions matter under Utilitarianism to the extent that they can be used to predict the probabilities of outcomes. Plus, of course, to the extent that they hurt feelings or cause concern, since those are actual emotional harms.
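One way to make that precise (a minimal sketch in my notation): under expected-utility consequentialism, intentions enter only through the outcome probabilities they imply,

$$\mathbb{E}[U] = \sum_{o} P(o \mid \text{intent})\, U(o),$$

so a known hostile intent lowers my expected utility even after a failed attempt, by raising the probability of future harmful outcomes, plus whatever small direct disutility the worry itself causes.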
Whose moral intuitions? Clearly not everyone’s. But most people’s? Is that your claim? Or only yours? Or most people’s on Less Wrong? Or…?
My primary niggling concern with standard utilitarianism is the Repugnant Conclusion: the way it always wants to maximize population at the carrying capacity of the available resources, at the point where well-being per individual is already declining significantly, by enough for the slope of the well-being-against-resources curve to counterbalance the increase in population. Everyone ends up on the edge of starvation. Which is, of course, what natural populations do. Admittedly, once you allow for things like resource depletion, or just the possibility of famines and poor weather, that probably pushes the optimum population down a bit, to the point where you’re normally keeping some resource capacity in reserve. But I just can’t shake the feeling that if we all agreed to decrease the population by only ~20%, we’d all individually be ~10-15% happier. However, I can’t see a good way to make the math balance, short of making utility mildly nonlinear in population, which seems really counterintuitive, and like it might give the wrong answers for gambles about the loss of a lot of lives. [I’m thinking this through and might do a post if I come up with anything interesting.]
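The trade-off is easy to state in a toy model (mine; assuming fixed total resources $R$ shared equally among $n$ people, each with concave per-capita well-being $u(r)$, where $r = R/n$):

$$U(n) = n\,u(R/n), \qquad \frac{dU}{dn} = u(r) - r\,u'(r) = 0 \ \text{at the optimum},$$

so a total-utility maximizer keeps adding people until the marginal person’s well-being $u(r)$ no longer exceeds $r\,u'(r)$, the total well-being everyone else loses from having their resource shares diluted. For a $u$ that goes negative below subsistence, this stopping point leaves everyone far below the well-being a smaller population would enjoy, which is exactly the edge-of-starvation behavior described above.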
I do think it’s worth giving some moral worth to a species too, so we make increasing efforts to prevent extinction if a species’ population drops, but that’s basically just a convenient shorthand for the utility of future members of that species who cannot exist if it goes extinct.
In addition to the concept of utility of hypothetical future beings, there’s also the utility of the presently living members of that species who are alive thanks to the extinction-prevention efforts in this scenario.
The species is not extinct because these individuals are living. If you can help the last members of a species maintain good health for a long time, that’s good even if they can’t reproduce.
Wouldn’t the hive need to have a subjective experience—collectively or as individuals—for it to be good to double their population in your example?
Whether they’re presently conscious or not, I wouldn’t want to bring ant-suffering into the world if I could avoid it. On the other hand, I do not interfere with them and it’s good to see them doing well in some places.
As for your five mentions of “utilitarianism”: I try to convey my view in the plainest terms. I do not mean to offend you or any -isms or -ologies of philosophy. I like reason and am here to learn what I can. Utilitarians are all friends to me.
I’m fine with that framing too. There are a lot of good preferences found commonly among sentient beings. Happiness is better than suffering precisely to the extent that it is preferred, and those shared preferences are what I mean by ethics.
The reason why it’s considered good to double the ant population is not necessarily because it’ll be good for the existing ants, it’s because it’ll be good for the new ants created. Likewise, the reason why it’ll be good to create copies of yourself is not because you will be happy, but because your copies will be happy, which is also a good thing.
Yes, it requires the ants to have subjective experience for making more of them to be good in utilitarianism, because utilitarianism only values subjective experiences. Though, if your model of the world says that ant suffering is bad, then doesn’t that imply that you believe ants have subjective experience?
Indeed. I was questioning the proposition by Seth Herd that a collective like ants does not have subjective experience and so “doubling the ant population is twice as good.” I didn’t follow that line of reasoning and wondered whether it might be a mistake.
I don’t think creating a copy of myself is possible without repeating at least the amount of suffering I have experienced. My copies would be happy, but so too would they suffer. I would opt out of the creation of unnecessary suffering. (Aside: I am canceling my cryopreservation plans after more than 15 years of Alcor membership.)
Likewise, injury, aging and death are perhaps not the only causes of suffering in ants. Birth could be suffering for them too.
We do agree that suffering is bad, and that if a new clone of you would experience more suffering than happiness, then it’ll be bad, but does the suffering really outweigh the happiness they’ll gain?
You have experienced suffering in your life. But still, do you prefer to have lived, or do you prefer to not have been born? Your copy will probably give the same answer.
(If your answer is genuinely “I wish I wasn’t born”, then I can understand not wanting to have copies of yourself)
One life like mine, that has experienced limited suffering and boundless happiness, is enough. Spinning up too many of these results in boundless suffering. I would not put this life on repeat, unlearning and relearning every lesson for eternity.
If accepting this level of moral horror is truly required to save the human race, then I for one prefer paperclips. The status quo is unacceptable.
Perhaps we could upload humans and a few cute fluffy species humans care about, then euthanize everything that remains? That doesn’t seem to add too much risk?
I agreed up until the “euthanize everything that remains” part. If we actually get to the stage of having aligned ASI, there are probably other options with the same or better value. The “gradients of bliss” that I described in another comment may be one.
I think we should do what we can now (conservation efforts, wildlife reserves with rangers and veterinarians, etc.), build AGI and then ASI with as low an x-risk as we can, advance our civilization’s technology, and then address this problem once we have appropriate technology and ASI advice. If things go FOOM, this could be a soluble problem fairly soon post-Singularity. Or if (as I currently suspect) takeoff takes rather longer than that, then our descendants can deal with this ethical problem once they have the appropriate technology. Nature has been red in tooth and claw (even under the restricted definition of sentience I initially propose in the post) at least since multicellular animals first evolved nervous systems, teeth, and claws back in the Precambrian. The moral horror is huge, but also extremely complex and longstanding.
The point of my post wasn’t to argue that we shouldn’t attempt this once we can; it’s that we shouldn’t expect our first superintelligence to be able to deal with it immediately without the attempt killing us all as a side effect. That’s why it says “Alas, Not Yet” in the title. This moral horror is the sort of task that only very high-tech civilizations can take on.
I would not enjoy living as a wild animal. While there would almost certainly be good days, some of the things that can happen are pretty horrendous. Still, when I encounter wild animals (fairly often, as I choose to live in a forest), they generally seem to be doing OK. Modern civilization is definitely a good thing (including painkillers); but if the life of a wild animal were my best available option, I wouldn’t want to be euthanized: I’d take my chances, as my ancestors have for hundreds of millions of years. As I discuss in a reply above to Shiroe, euthanasia is for things like hospital pain-scale level 8+ for the rest of your life: the average utility of a typical wild animal’s life is better than that, so still net-positive under a well-calibrated Utilitarian utility scale, and euthanizing them because we can’t yet save them from a state of nature isn’t appropriate or proportionate.
[Gonna need to read closer, but I’m going to roll to disbelieve that the math even works out this way. I think you’ve missed something, and I hope to find it in time. Will comment again if I get back to reading this closely.]
If you can find it, I’d really love to hear it.
As I mentioned, I spent about a year (in the context of writing an ultra-high-tech hard-science-fiction story about multiple cultures with access to quantum superintelligences, nanotech, and superb genetic engineering, with slower-than-light travel only) trying to get this to work. The best solution I came up with was to use a blend of biotech and cyborging to uplift to sapience all members over about half an inch long of two phyla (alien approximate-analogs of vertebrates and arthropods), using organically-grown inorganic quantum computronium far more energy-efficient and compact than nervous tissue, and to make all predators whose natural prey were now sapient into omnivores. The end result doesn’t work much like an ecosystem; it’s more a city that at first sight resembles an ecosystem, and you end up with a many-orders-of-magnitude range of intelligence levels, from barely sapient to vastly above that, which leads to internal alignment issues. To solve those, I made them a semi-hive-mind with high-bandwidth optical interconnects, using bioengineered plants as a long-distance communication and data-processing network, and with some extremely sophisticated techniques for realigning criminals. In-story this was implied to be a labor-of-love project by a solar-system-sized Dyson-swarm quantum hyperintelligence, basically recreating its creators and their ecosystem in idealized uplifted form.
The closest human-culture analog I included was along the lines of the “reduced harm biosphere” I mentioned in a comment above to Shiroe. (Partly because I didn’t want to explore the same idea twice in one novel.)