Against Augmentation of Intelligence, Human or Otherwise (An Anti-Natalist Argument)

“…genetic engineering by itself could result in a future of incredible prosperity with far less suffering than exists in the world today.” – GeneSmith, December 2023

“Human intelligence augmentation needs to be in the mix. In particular, you have to augment people at least to the point where they automatically acquire deep security mindset, just in virtue of being that smart. This is not a low bar.” – Yudkowsky, April 2023

Sam Harris: “…if I told you that we, over the course of the next thirty years, made astonishing progress on this front [of human intelligence augmentation], so that our generation looks like bumbling mediaeval characters compared to our grandchildren, how did we get there?”

Daniel Kahneman: “You don’t get there.” (March 2019)

Reproduction increases suffering and death without consent, by definition. This is simple addition, multiplication, an undeniable fact of biology—science long settled. “Smart” people have no excuse for denying this. It is a low bar. Such is optimism bias, though, cruel master of “smart” and “not-so-smart” alike.

Reproducing and engineering children (or machines, but let’s go ahead and focus on children) in order to reduce suffering and evade death is as inherently contradictory today as it has ever been. 1 + 1 people’s suffering and death > 1 person’s suffering and death > 0 people’s suffering and death. Perhaps we are put off by the simplicity of the error? Obvious mistakes can be harder to admit than subtle ones, after all. Whatever our supposed excuse for it, though, creating children for the express purpose of trying to evade such a plain mathematical truth (2 > 1 > 0) is not only hopelessly irrational, but thoughtlessly cruel. It’s wrong, as wrong as any other (lesser) form of abuse. As Kahneman has made clear, when we encounter a judgment with an irrefutably correct answer against which we can measure the reality of our cognitive bias, we have no option except to attempt to correct for the bias by interrogating the thought processes that lead to it. In this case, that means we stop birthing innocent children to distract from our present impending-extinction woes. We focus, instead, to start, on the woes, which is to say on existing children.
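
To spell the arithmetic out explicitly (a minimal sketch in my own notation, not the quoted authors’: assume, purely for illustration, that each person’s suffering-and-death burden is some positive quantity and that burdens sum across persons):

\[
S(n) \;=\; \sum_{i=1}^{n} s_i, \qquad s_i > 0
\;\;\Longrightarrow\;\;
S(n+1) \;=\; S(n) + s_{n+1} \;>\; S(n) \;>\; S(0) \;=\; 0 \quad \text{for all } n \ge 1.
\]

With unit burdens this is exactly the 2 > 1 > 0 above: every additional birth strictly increases the total, and only the case of no births leaves it at zero.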

Suppose, though, that ethical anti-natalism were the norm, that we could check this box and move down the list. Wouldn’t a focus on augmenting human intelligence and on genetic engineering for the final generations of humanity be a top priority for preparing for AGI, for easing suffering in general (“incurable depression”, for example), and for evading avoidable forms of painful death by curing diseases? Much as I’d like to sympathize with the struggle to gain sufficient investment in alignment and genetic engineering research, it’s difficult to ignore the fact that euthanasia technology (basic anesthesia, barbiturates, narcotics, oxygen deprivation) requires comparatively far less funding, and, if pursued in tandem with refraining from reproducing suffering and death without consent (childbirth), results in objectively less suffering and death, approaching zero. Running to the “CAVE” (Compassionate, Accessible, Voluntary Euthanasia) is not without its strategic problems. But, no, compared to focusing on those mostly basic administration problems, I don’t see how either genetic engineering in general or intelligence augmentation in particular could possibly be anywhere near the top of a well-formed list of humanity’s priorities.

In Terror Management Theory (TMT), a specialized study within the broader domain of cognitive bias research devoted specifically to death-anxiety bias, we have settled science regarding how our fear of death obscures our awareness of our own direct responsibility for multiplying suffering and death. Our lives don’t scale with our expectations, as that research demonstrates conclusively. We’re trapped inescapably in the dying body, on a dying planet, fueled by a dying sun, in a galaxy doomed to collision and dissolution if not to being devoured by a central black hole, in a universe doomed to entropic decay unto absolute heat death as far as anybody can tell, and this is by now news to no one. The interests of our genes are not aligned with the interests of our bodies, nor are the interests of our minds aligned with those of our apparently uncontrollable machines. The inevitability of our deaths as individuals is ultimately the same problem as the inevitability of the extinction of the species, which is ultimately the same problem as the heat death of the universe. It’s inherently depressing, unpleasant, bias-inducing. Where can we turn for refuge?

Not to childbirth, not with any good reason. This would be irrational and cruel, a multiplying of suffering and death without consent. However, far be it from me to discourage others’ therapeutic quietism, if they can find it in evaluating their own and others’ “intelligence”, or in pursuing gene research startups. To each their own (Panglossian end-times utilitarianism); that’s fine. Do we have any sure basis for believing augmenting intelligence won’t simply worsen already existing problems, given that we are ourselves already an evolutionary intelligence augmentation experiment that has successfully manifested the 6th Mass Global Extinction Event, to which we will most probably succumb sooner rather than later? Do we have any reason to believe our own cognitive biases are not informing how we measure the utility of “human intelligence” to begin with, leading to overconfidence bias? Do we have any reason to assume “intelligence” won’t be forever haunted by incompleteness paradoxes and diminishing returns on any actionable utility? No. But that’s fine. Good entertainment is hard to find. Still, the least we can do is leave children out of it—existent and, most importantly, non-existent. They deserve so much better than we will ever be capable of providing, so much more than we were ever going to be able to provide. We need to correct for bias and let go.