Artificial Programming of Human Needs: A Path to Degradation or a New Impetus for Development?
Viktor Argonov // Problems of Philosophy. 2008. No. 12. P. 22-37 // Translation from Russian. Original text available here.
Abstract
The development of biological sciences in the twentieth century clearly demonstrated that the positive and negative sensations and emotions of living organisms can be controlled by influencing the material structure of the nervous system. Today it seems quite probable that in the foreseeable future humanity will learn to artificially, at the physiological level, associate pleasant and unpleasant sensations and emotions with any stimuli and life situations, thus gaining the ability to artificially program their needs. This work analyzes the prospects for creating and using such technologies, their possible limitations, and social consequences. It is shown that the factor of striving for individual survival will apparently allow people to avoid the most dystopian consequences and preserve the incentive for development under various social models — from completely liberal (even permitting simple artificial stimulation of pleasure centers) to totalitarian, based on the forced programming of needs.
Introduction
There is a well-known thesis that in the course of natural evolutionary development, living organisms always changed, “adapting” to the environment, but humans became the first who learned to reshape the environment to suit themselves at a much greater speed. Questions of physiological and psychological self-improvement have concerned humanity since ancient times, but having achieved impressive results in mastering the surrounding nature, humans themselves remained an unconquered “bastion.” Only in our time has it become clear that the radical restructuring of the human organism using technical means is a matter of the foreseeable future. Recent successes in the fields of artificial intelligence, microelectronics, neurophysiology, and biotechnology (cloning of mammals, decoding the human genome, successful experiments in removing the limit on the number of divisions of human cells, etc.) convincingly demonstrate that humans can learn to purposefully transform not only their habitat but also themselves, combining both evolutionary strategies. A multi-fold extension of average life expectancy, cyborgization — which implies the creation of new systems for nutrition, reproduction, additional sense organs, limbs, “intelligence amplifiers,” devices for the electronic exchange of information between individuals, etc. — all this can give humans unprecedented new opportunities [1][2][3][4][5][6][7][8].
One such change may be associated with the development of technologies of artificial programming of needs (APN) — the purposeful programming of motivations of human actions. Needs are fundamental because they set the purposes of activity. All other biological and technological changes in humans can only provide the means to achieve these purposes. The formulation of the problem of purposefully forming purposes sounds paradoxical, almost tautological. By what criterion can this ultimate goal be chosen, especially if a person is programming themselves? Most futurists ignore this problem; some consider it immoral. Usually, the issue is viewed through the prism of only traditional methods of programming needs (upbringing, propaganda, other psychotechnologies of “consciousness manipulation,” chemical substances), the possibilities of which are significantly limited. However, it seems highly probable to us that new methods of APN will appear in the future, associated, in particular, with the direct, somatic reassignment of connections in the neural tissue of the brain, which will lead to significant changes in people’s lifestyles and the structure of society. The first truly famous work dedicated to the purposeful programming of human needs and its social consequences was A. Huxley’s novel Brave New World [9]. It shows how revolutionary the fruits of improving even just traditional programming methods could be. The theoretical possibilities of new APN methods, as we will see below, are generally almost limitless. It is all the more paradoxical that this problem has not formed its own special, coherent direction in futurology. One can identify works that discuss technologies for artificial stimulation of pleasure centers or genetic reprogramming of humans to rid them of suffering and/or increase the average comfort of life. 
There are two polar points of view — to consider such technologies a new drug that will lead to the degradation of humanity [10], or, conversely, to see in them a path to building a society of universal happiness [11][12]. Only isolated, standalone works such as [7] are devoted to the problems of APN in the full sense.
It is quite difficult to cover in one article both the fundamental and technical prerequisites of APN, as well as the prospects for the possible development of humanity under various social scenarios (in particular, considering the possibility of a liberal and a totalitarian approach to the use of technologies). A comprehensive examination of the APN problem would require a whole monograph, but we will try to briefly highlight its main aspects. Unlike authors who emphasize what humanity should strive for, we will try to assess what might actually happen, considering the prospects and dangers of this path.
1. Description of the Behavior of Living Beings in Terms of Comfort Maximization
A fundamental property of all animals, starting from a certain level of evolutionary development, is the distinction between pleasant and unpleasant sensations and emotions. They define actions and stimuli to be sought after and avoided; they define needs, the initial principles of any purposeful behavior. Pleasant and unpleasant sensations and emotions could theoretically be associated with any stimuli, but in all actually existing species (except, in part, humans), the set of correspondences (the needs matrix, NM) is defined in such a way as to promote the survival of the species and, indirectly, the development of the entire organic world. Obviously, an animal that derived pleasure from pain or felt fear of food would be unviable. As P. V. Simonov wrote, “it is precisely the dialectic of preservation and development that led to the formation in the process of evolution of two main varieties of emotions — negative and positive. The subject seeks to strengthen, prolong, and repeat a positive emotion, and to weaken, interrupt, and prevent a negative one” [13][14].
The behavioral strategy of an animal can be represented as a problem of maximizing a certain quantity q, which we will call the comfort of a state. Comfort is a measure of the pleasantness of a state, regardless of the specific factors that cause it. Comfort can be defined as the degree of a subject’s satisfaction with their current sensory state, assuming the possibility of its unlimited continuation. Discomfort, accordingly, is a state with negative comfort, which the organism seeks to interrupt. Comfort is not equivalent to purely “physical” pleasure; it is an integral characteristic of all sensations and emotions that can be regarded as positive and negative. Neurophysiologically, they are generally associated with various centers of the brain, but there is a subjective scale of priority between them. The possibility of objectively measuring q is problematic, but subjectively we can build a hierarchy of states according to their desirability.
In the simplest case, an organism seeks to maximize only the instantaneous, current value of comfort q. It looks for actions that can change comfort in the direction of increase and performs them as long as they yield the desired result. In fact, the organism seeks a local maximum of the function q in the space of its actions (the form of this function may change over time under the influence of external factors). Beings capable of predicting events and planning actions for some time T into the future are able to solve the problem of maximizing not instantaneous comfort, but its most probable average value q̄ over that time. If the forecasting horizon T depends on the subject’s actions, corresponding to the length of some known state (after which comfort is unknown), the subject seeks to prolong the state with a positive predicted value of q̄ and shorten the state with its negative value. Such a behavioral strategy can be described as the desire to maximize the product of average comfort q̄ by the forecasting time T. This quantity, which we will call utility U, is equal to the integral of instantaneous comfort over time

U = q̄T = ∫₀ᵀ q(t) dt,

where the current moment is taken as the zero time value. In particular, if T is fixed (does not depend on the subject’s actions), maximizing U simply means maximizing average comfort q̄.
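The utility-maximization principle just described can be sketched numerically. In the toy model below (all numbers are purely illustrative), comfort over the forecasting horizon is represented as a series of values per unit time step, and utility is approximated by its sum:

```python
# Toy illustration of the comfort/utility model: q(t) is comfort at
# time t, T the forecasting horizon. All numbers are hypothetical.

def utility(comfort_series):
    """U = integral of instantaneous comfort q(t) over the horizon,
    approximated here as a sum over unit time steps."""
    return sum(comfort_series)

def average_comfort(comfort_series):
    return sum(comfort_series) / len(comfort_series)

# Plan A: high immediate comfort over a short horizon.
plan_a = [5, 5, 5]        # q = 5 for T = 3
# Plan B: modest comfort over a long horizon.
plan_b = [2] * 10         # q = 2 for T = 10

# An agent maximizing only instantaneous comfort q prefers plan A;
# an agent maximizing utility U = q̄·T prefers plan B.
assert max(plan_a) > max(plan_b)
assert utility(plan_b) > utility(plan_a)
print(utility(plan_a), utility(plan_b))  # 15 20
```

The contrast between the two plans is exactly the difference between the strategies of beings with and without the ability to forecast.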
The desire to maximize utility can be interpreted as a willingness to sacrifice small immediate comfort for greater additional comfort in the future (S. Freud calls this, for humans, the reality principle as opposed to the purely animal pleasure principle [15]), but in practice, emotions provide feedback that makes the instantaneous value q dependent on the integral U. Thanks to this, a possible contradiction between maximizing q and maximizing U is fully or significantly eliminated. For example, an animal ignores food if it knows that danger is associated with it. In doing so, it sacrifices the pleasant sensations that food provides, but it does this not so much because of abstract knowledge of danger, but because of fear, which itself is an unpleasant emotion and provides such discomfort that the pleasure from food cannot compensate for it. The animal refuses food to get rid of the unpleasant emotion. Thus, the animal is able to care about the future (maximize U) by simply striving to maximize q. Fear, of course, arises only due to knowledge of danger, the ability to predict events, and this leads to an objective difference in the behavioral strategy of animals with T = 0 and T > 0.
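The feedback mechanism described above can be made concrete with a minimal sketch (the actions, payoffs, and penalty values are all hypothetical): fear translates a predicted future loss into present discomfort, so that a purely greedy q-maximizer ends up protecting its future.

```python
# Toy model of emotional feedback: knowledge of danger lowers the
# *instantaneous* comfort q via fear, so an agent maximizing only q
# behaves as if it were maximizing utility U. Numbers are hypothetical.

def q_without_feedback(action):
    return {"eat": 4, "flee": 0}[action]   # food is pleasant right now

def expected_future_loss(action):
    return {"eat": 10, "flee": 0}[action]  # predicted harm from danger

def q_with_feedback(action):
    # Fear converts the predicted future loss into present discomfort.
    return q_without_feedback(action) - expected_future_loss(action)

# Without feedback, greedy q-maximization chooses the dangerous action:
assert max(["eat", "flee"], key=q_without_feedback) == "eat"
# With fear as feedback, the same greedy rule protects the future:
assert max(["eat", "flee"], key=q_with_feedback) == "flee"
```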
The question of the applicability of the above to humans is the question of the validity of utilitarianism. The founder of utilitarian ideas (in a broad sense) was Epicurus, who believed that people should always strive for what they believe will bring them satisfaction and avoid what they believe will cause them suffering [16]. The founder of modern utilitarian philosophy was J. Bentham [17], whose ideas were later developed by J. S. Mill [18]. Since that time, the model of man as a being striving to maximize “good” has ceased to be a subject exclusively of philosophical thought; it gave a significant impetus to the development of sociology and became one of the cornerstones of economic theory [19][20][21][22]. But to this day, utilitarian ideas remain controversial. Traditionally, they are condemned as representing man as immoral, selfish, governed by animal instincts. However, the fairness of such accusations strongly depends on the specific meaning we assign to the words “pleasure,” “comfort,” “utility,” “good.” With the definition of comfort we use in this work, we only assert that a person, when behaving rationally, strives to act in such a way as to be satisfied with their actions and their consequences. The dialectic of the utilitarian approach is such that, by setting a goal higher than obtaining pleasure, a person thereby still strives for the pleasant and avoids the unpleasant, only new factors act as pleasant and unpleasant. In particular, the comfort state of one subject may increase due to their awareness of the fact of an increase in the comfort state of other subjects. This ability of altruists to make sacrifices for other people while remaining satisfied has not only philosophical but also neurophysiological [23][24] and evolutionary [25] justifications.
Be that as it may, the human striving for comfort has a number of significant differences from the behavior of other animals. An important feature of humans is the logical awareness of their ability to care about the future. The time T for forecasting and planning events is significantly longer for them than for other animals, and can be comparable to lifespan. Thanks to this, on a rational, not just instinctive, level, a person can raise the question of the value of life. In the traditional religious conception of an afterlife or predetermined reincarnation, the forecasting horizon is theoretically unlimited, and maximizing utility means, among other things (and often primarily), caring about the future life. But if death is the end of everything, or a transition to a fundamentally unpredictable state, the forecasting and planning time cannot exceed the upcoming biological lifespan T_max. If T = T_max, then, depending on the predicted value of average comfort q̄, a person faces the task of prolonging or shortening life (according to the same simple principle that the pleasant is what should be prolonged, and the unpleasant is what should be stopped or shortened). From this, a person gains two new possibilities: firstly, to care about survival when instincts do not require it (no real immediate danger); secondly, to go against the instinct of self-preservation if there are logical, non-affective, reasons for ending life (the upcoming life, if not sacrificed, appears to be physical or spiritual suffering). Thus, a rational approach leads a person to deny the unconditional necessity of survival, but with a positive q̄ it gives a new powerful incentive to preserve and prolong life. The need for survival is no longer independent; it turns out to be a function of the success in satisfying other needs. It is particularly important to note that we are talking here about individual survival, which only indirectly contributes to the survival of the species or population.
Another feature of humans is life in a rapidly changing environment. The rate of environmental change caused by human activity is incomparably higher than the rate of natural biological evolution, so basic biological needs do not have time to adapt to new realities. Thus, while for wild animals tasty food is almost always beneficial, for humans the relationship is often reversed. Many human food products do not exist in nature in a ready-made form, and a mechanism for adequately assessing their usefulness has not been developed for them. Sexual selection continues to be largely based on completely archaic criteria that do not correspond to the interests of psychological compatibility (e.g., appearance). The most striking example of the discrepancy between the pleasant and the useful is hard drugs, which combine a way to obtain the strongest pleasant sensations and mortal danger. Such discrepancies are possible in other animals with T > 0, but in humans, due to the longer forecasting time T, survival (and utility maximization) is particularly strongly “detached” from momentary pleasures. At the same time, the rapid change of environment creates prerequisites for disrupting the connection of survival not only with q, but also with U. Nevertheless, humans are a biologically very successful species. This is partly achieved due to their special attitude towards survival, but there is also another important factor — new, easily variable needs associated with higher nervous activity, capable of changing at the same speed as society and civilization develop. They can take various forms: creativity, socially useful labor, cognition of the world, morality, etc., but they are all united by the ability to vary easily both between different individuals and within one individual over a lifetime. It would be wrong to consider the listed spheres of activity as the exclusive prerogative of humans; in rudimentary form, they (e.g., creativity) also exist in other higher animals.
But the peculiarity of humans lies precisely in the variability of the needs matrix, in the absence of a single innate set of preferences for all individuals, and it is this that has allowed natural selection to maintain the connection between the survival of the population and the maximization of U by individuals.
2. Artificial Programming of Needs: Technical Issues
The existence in humans of easily variable “supra-biological” needs illustrates well that pleasant and unpleasant sensations and emotions are not always tied to specific events and stimuli. The same phenomenon or type of activity (a work of art, a scientific problem, a human action) can be pleasant for one person, unpleasant for another, and neutral for a third. Naturally, a person comes to the question of the possibility of purposefully establishing these connections, artificially programming needs. In society, the task of programming needs is performed by upbringing and ideology, but their possibilities, as we have already said, have known limitations. Is arbitrary programming of needs possible?
The task of artificial programming of needs (APN) is closely related to the task of controlling comfort. Control of comfort is carried out in the daily activities of living beings in any interaction with the outside world, with the aim of creating pleasant stimuli and removing unpleasant ones. But there are also methods of controlling comfort that imply a direct effect on nerve centers, for example, chemical (narcotic substances) or electrical. Electrical stimulation of pleasure centers is most famous from the experiments of J. Olds and P. Milner [26] in 1954. In these experiments, rats with electrodes implanted in their pleasure centers could stimulate them by pressing a button. When the rats understood that such a connection existed, they began to constantly close the contacts, losing interest in food and individuals of the opposite sex. Subsequently, C. Sem-Jacobsen and a number of other scientists conducted similar experiments on humans in a neurosurgical clinic. The studies showed that stimulation of similar brain areas caused feelings of joy, satisfaction, and erotic experiences.
Direct control of comfort is programming of needs only in the trivial sense that the appearance of a new pleasant stimulus leads to the emergence of a need to strive for it. By true programming of needs, we will understand not the creation of a new stimulus, but the establishment of connections between an existing stimulus and the sensation of comfort (connections in the needs matrix, NM). Such an approach, in accordance with cybernetic terminology, can be called algedonic [27].
The simplest method of direct, somatic reprogramming of needs is the surgical suppression or destruction of centers responsible for some pleasant or unpleasant sensations and emotions. Cases have long been known where a person, after a brain injury, for example, lost the ability to feel pain. Nowadays, surgical treatment of drug addiction is increasingly being practiced, where after stereotactic (based on high-precision intervention) suppression of a certain pleasure center, a person stops receiving pleasant sensations from harmful substances.
More complex APN tasks are associated with the problem of stimulus recognition. While this is not particularly difficult for chemical analyzers (taste, smell), and generally simple static images (simple pictures, individual sounds, elementary tactile sensations), it is much more complex for dynamic images, especially those recreated from information from several senses at once. It is easy to imagine how to make a person consider one food tasty and another not (for example, to program an attraction only to healthy food, if this can be determined by taste): it is necessary to study the taste signals entering the brain from different substances and change the principle by which the brain determines their pleasantness. One could also program a person to derive pleasure from physical labor and from active work in general; one could even (if needed for something) make pain sensations pleasant. But how to program the reactions of pleasure centers to complex, specialized types of activity, for example, to scientific work and creativity? This would require either extremely complex recognition of dynamic images (how, from visual and other sensations, to know that a person has made a scientific discovery?) or recognition of thoughts. In the latter case, the pleasure center would react not to external stimuli indicating the process or results of activity, but to the person’s thoughts about it. But here there is another difficulty, related to the fact that a person is capable of thinking about non-existent things (for example, mentally imagining scientific activity or its results that do not exist in practice).
In [7], V. Kosarev expresses the idea that APN technologies will develop simultaneously with artificial intelligence and cyborgization technologies. Cyborgization, as a result of which a person, including their brain, will become a hybrid of the biological and the technological, will allow transferring the APN problem from the field of pure neurophysiology to the field of computer science and control theory. This will make it possible to define the concepts of pleasant and unpleasant more strictly and to set the principle of utility maximization. Of course, a cyborg, like an ordinary person, must have subjective sensations, will, and emotions, so its creation will require a comprehensive study of the nature of consciousness, not limited to the realm of the pleasant and unpleasant. The cybernetic approach to regulating the behavior of systems for which pleasant and unpleasant, “reward” and “punishment” are defined (algedonic loops are created) was considered by one of the founders of modern control theory, S. Beer, in [27]. One can imagine an automatic system for artificial stimulation of pleasure centers, made in the form of a separate programmable device connected to the cyborg’s brain.
In any case, it seems to us that the difficulties of APN are only technical, and there are no fundamental limitations here. Theoretically, any conceivable NM may someday become realizable; but even if this does not happen, NMs will become artificially assignable within very wide limits. It is only a matter of time.
3. Practical Use of Programming of Needs and Its Possible Social Consequences
If we assume that artificial programming of needs (APN) has become possible, the question arises about the goals and consequences of its practical use.
To make a forecast of the possible development of society, it is necessary to consider two factors: the interests of individual people striving to be satisfied with life, and the interests of states, which theoretically can be quite arbitrary (depending on the moral values accepted in society, the personal views of statesmen, etc.), but in historical perspective are subject to a selection process in which some models turn out to be more viable and others die out.
Assuming that APN is technically publicly available, two extreme models of social structure in relation to it can be distinguished. The first model, which we will conditionally call liberal, is that each person is given the right to decide for themselves which stimuli to consider pleasant and unpleasant. The development of society in such a model will be determined by the personal interests of people, their individual approaches to programming their needs. The opposite of the liberal model is the totalitarian model, according to which all (or most) people must be programmed forcibly (or before birth) in accordance with the interests of society, the state, or specific people in power (a fictional description of one such variant is given in [9]).
3.1. The Liberal Model of APN
Let us first discuss the prospects and problems of the liberal model, as it is more fundamental and reductionistic.
Apparently, the majority of people in their programming will be driven by the desire to increase the comfort of life. But the choice of a specific method is ambiguous. The full realization of APN ideas means that the same sensations can be obtained from any chosen stimulus or type of activity. Any pleasures, including not only “bodily” enjoyments but also the deepest emotional, spiritual experiences, with appropriate programming, can be obtained from creativity, from socially useful labor, etc., as well as from simply pressing a button (by the method of artificial stimulation, AS). By what criterion should the needs matrices (NM) be chosen? From the point of view of modern values, creativity and labor are good, while pleasure from pressing a button is a surrogate and evil. But how can such a position be justified rationally? A person choosing their NM could raise counterarguments. Why is creativity needed in society, except to obtain those very emotions that can now be achieved in many other ways? What is the benefit to society if people in it already have a means to be happy? Pressing a button is, at least, technically the simplest way of direct comfort control, without any tricks with stimulus recognition, etc.
In the literature, dystopian forecasts of such development are common, where a person who receives the strongest feelings artificially becomes like the rat from the experiments described in [26] and loses interest in other activities; where society degrades, stops developing, or even perishes. An example of a fictional description of such a society is the story The Final Circle of Paradise by A. and B. Strugatsky [10]. And even in the mentioned work [7], where the idea of comfort control is generally viewed optimistically, the author emphasizes that “… the ‘pleasure’ center… must be reliably protected from the possibility of bypassing or ‘shirking’ the execution of necessary programs by directly affecting its ‘positive emotion’ centers,” i.e., he considers it necessary to introduce an artificial ban on AS. A similar helplessness in the face of the problem is found in M. Deering [6]: “After the Singularity a combination of nanotechnology and reverse engineering of the brain will give us the ability to experience any psychological state we choose at any time as much as we want without physical harm. Who will be able to resist the temptation to wirehead? And once experienced will the memory of the episode be addictively irresistible? Will it even be psychologically possible to turn it off the first time? We are all evolutionarily programmed to seek pleasure… This threat is perhaps the most serious of all the hazards associated with advanced technology… It might be a good idea to refrain from experimenting with your state of consciousness. … Do not alter the normal hardwired pleasure reward structures of your psyche.”
In our opinion, however, there is a simple natural mechanism that would not allow frightening forecasts to materialize in the liberal model. If a person strives to maximize not the instantaneous comfort q, but the integral U, then one most important factor remains for them in any situation — the factor of life expectancy. And if it becomes possible to easily ensure arbitrarily high (within the limits of technical capabilities) comfort of life (in everyday language, the “quality” of life), then the task of increasing its duration (its “quantity”) comes to the fore.
Given this, is artificial stimulation of pleasure centers by “pressing a button” really so good? Would it not have negative consequences for life expectancy? Like “traditional” drugs, methods of direct comfort control may pose a direct danger to health or cause physical dependence, where the danger is not the fact of AS itself, but the possible withdrawal from it. If these problems are solved and AS is easily accessible and harmless, an important question remains its compatibility with other activities. A person must eat, sleep, and ensure their safety. A well-known problem with conventional drugs is the person’s incapacity while intoxicated, loss of self-control, and reduced mental abilities. If AS has the same side effect, a person will be forced to periodically exit the state of euphoria and ensure their viability, while falling into a “natural” state with lower comfort. But such a lifestyle is no different from ordinary drug addiction; it is an extremely irrational choice both from the standpoint of maximizing the comfort of life (which can only be realized during periods of intoxication) and from the standpoint of survival (for which the person will have no sensory incentives). This path is even more unreasonable if the same sensations can, with appropriate programming, be obtained from ordinary labor, combining “business with pleasure” (which is impossible for ordinary drugs).
Consequently, only such a method of AS that is easily accessible, harmless, and does not interfere with other matters can find widespread practical application. If it is developed, simply fixing comfort at a certain stably high level, regardless of what a person does and what happens to them — constant artificial stimulation (CAS) — could come into wide practice. CAS should not cause sensory habituation (as happens with many ordinary stimuli, which over time cease to have an effect), otherwise the requirement of a constant q is not met. There is no particular problem in this, as shown by a number of modern neurochemical studies described in [11]. The specific implementation of CAS may vary. One could, for example, alternately stimulate several brain centers, creating a complex dynamic picture of sensations (a kind of “music of the feelings”) while maintaining a stably high comfort. In the distant future, CAS may have nothing to do with the vulgar archetype of the “rat pressing a button” — it could simply consist of genetically disabling discomfort mechanisms and maintaining high comfort without external intervention [11].
The use of CAS, for all its outward odiousness, would not entail dystopian social consequences. A person is happy (otherwise the condition of constant high comfort is not met) and interested in survival. In a certain sense, such a person is very socially convenient. They are not susceptible to drug addiction (including alcoholism), they do not require entertainment activities that do not contribute to survival; for them, the conflict between “pleasant and useful” simply does not exist. A consistently high level of “joy of life” would allow such a person to perform any socially significant work without laziness, as long as it is not associated with danger (such activities by this time could be fully mechanized). Nevertheless, in such a radical understanding, CAS has one significant drawback. If a person is equally satisfied in any situation, only the intellect can assess to what extent it contributes to survival. In a way, this is the loss of some important sense organ. And if in modern humans a negative intellectual assessment itself already causes a feeling of discomfort in advance (feedback), a person with CAS would not have that either. In some cases, rejecting sensory evaluation would be justified. For example, there are situations where severe pain or fear not only does not help to escape from danger but even hinders it. From a survival standpoint, it would be justified to simply receive logical information about the nature of the damage instead of excessively strong pain. But if a person does not feel pain at all (as we have said, such cases actually exist), they are more defenseless against dangers, may not notice damage, or simply treat the threat lightly. Perhaps the future person will assess the situation more adequately on an intellectual level; however, in the author’s opinion, sensory assessment will remain significant in the foreseeable future.
Thus, although CAS allows maximizing the average comfort q̄, it is not as effective for maximizing life expectancy T_max. To maximize the utility U, defined as the integral of q(t) over time, it is necessary to find a balance between survival and comfort. Obviously, an optimal NM should give pleasant sensations from actions that promote survival, and unpleasant (or less pleasant) sensations from actions that contradict or simply do not promote it. In order to maintain average comfort at a sufficiently high (though not constant) level, a person should not set only difficult-to-achieve tasks when programming. The program should stimulate any activity that promotes survival. The forecast for the development of society in this case will not differ significantly from the forecast when using CAS. Since the liberal model assumes that each person will be free in choosing their NM, it can be assumed that some people will still choose CAS. It is also possible that for some time CAS will dominate due to the technical complexity of more flexible APN schemes. Some people may also choose deliberately non-optimal and destructive matrices, including those that pose a danger to others. Choosing a socially dangerous NM is unwise from the point of view of maximizing U, since following it will meet with resistance from other people. However, in practice, not all people will be guided by utilitarian and pragmatic considerations; a person may strive for any ideals, including destructive ones. Therefore, considering that security will remain of great importance for most people, it can be assumed that the most dangerous NMs will be prohibited by law.
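The trade-off between "quality" and "quantity" of life can be illustrated with a toy calculation (all figures are invented for illustration, not predictions): each needs-matrix choice is scored by the utility U = q̄ × T it would yield.

```python
# Toy comparison of needs-matrix choices, scoring each strategy by
# utility U = average comfort q̄ × expected lifespan T.
# All figures are hypothetical illustrations, not predictions.

strategies = {
    # intermittent euphoria with incapacity, like ordinary drugs:
    "intermittent AS": {"q_avg": 3.0, "T": 20},
    # constant artificial stimulation, weak sensory feedback for danger:
    "CAS":             {"q_avg": 9.0, "T": 60},
    # balanced NM: pleasure tied to survival-promoting activity:
    "balanced NM":     {"q_avg": 8.0, "T": 80},
}

def utility(s):
    return s["q_avg"] * s["T"]

best = max(strategies, key=lambda name: utility(strategies[name]))
for name, s in strategies.items():
    print(f"{name}: U = {utility(s):.0f}")
print("preferred:", best)  # "balanced NM" under these assumed numbers
```

Under these assumed numbers, a balanced NM slightly edges out CAS despite a lower q̄, precisely because it buys additional lifespan — which is the argument of this subsection in miniature.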
The most radical change in society during the transition to the widespread use of APN or CAS will probably be the complete withering away of the entire modern entertainment industry. The only entertainment industry in such a society would be the development of new, more effective APN methods, including research to increase the maximum technically possible level of comfort. One should note, however, the considerable danger associated with the possibility of raising the maximum permissible bar not only for pleasant but also for unpleasant sensations. The use of such technologies for criminal and/or “state” purposes could make it possible to inflict unlimited, truly hellish suffering on a person [12:1]. This danger is so serious that the possibility of even isolated precedents of this kind calls into question the ethical justification of all developments in artificial comfort control. But here we can only hope for the creation of methods of effective individual protection against such abuses and for the general trend towards the rationalization (not necessarily even humanization) of humanity.
Basically, the activity of most people will be re-targeted towards maintaining and prolonging their own lives. As we have already said, in the absence of the problem of “quality” of life, the natural motivation for actions remains its “quantity.” There are many factors determining life expectancy. In addition to “traditional” directions related to health promotion, medical development, ensuring public safety, etc., new, “non-traditional” methods aimed at radically increasing average life expectancy should be widely developed. These are based on the fight against aging — a factor that currently sets the insurmountable upper limit of life expectancy. Already today, active searches for methods of radical life extension are underway in many countries around the world, and the specific scientific and philosophical aspects of the problem are widely discussed (see, for example, Russian Internet resources [4:1][5:1][8:1]).
The fight against aging can be carried out in various directions. Some methods imply identifying the mechanisms of “programmed” onset of old age and disabling them. The theory of programmed cell death [28], based on the work of L. Hayflick [29] and A. Olovnikov [30], is widely known, as are the first successful experiments on “immortalizing” human cells “in vitro” [31]. There are also hypotheses about the genetic mechanisms of programmed death of multicellular organisms.
Other methods of radical life extension may be based on restructuring the human organism to “bypass” the mechanisms of aging. This could be the replacement of aged organs with new ones (the possibilities of separately growing cloned human organs or of using organs of other animals are discussed), or cyborgization, which at the initial stage involves the creation of artificial organs and later a radical restructuring of the organism. At a certain stage, a person will still have a “natural” part subject to aging, but further development of biotechnologies should lead to the erasure of the boundary between the “living” and the “artificial.” Cyborgs will cease to be a mixture of biological and technological parts; they will be, in the full sense, living people, albeit with an artificial body. Unlimited life extension of a cyborg could be achieved, for example, by a modular scheme [2:1].
Arbitrary construction of the human body, in addition to getting rid of aging, will also provide protection from many dangerous factors, making a person resistant to extreme working conditions, less susceptible to injury, and capable of regenerating most damage. At the same time, of course, a person will still need to maintain their existence, technical serviceability, and food supply. Dangers from destructive actions of other people and global catastrophes will also remain.
The danger emanating from other people, apparently, could be significantly reduced in the course of natural development. As we have already noted, aggressive, destructive needs are not a preferred choice from a rational point of view. In conditions where the average life expectancy is very high and a person is free to fill life with joy, the vast majority of people will be cautious and disinclined to take risks. It is known that even in the modern world, those involved in extremist activities are usually people who are less well-off and less satisfied with life. Of course, some people will program themselves deliberately against generally accepted norms, or simply without thinking about the consequences and without drawing on the experience of others. Therefore, some narrowing of individual freedoms in APN seems inevitable — a legislative ban on identified destructive matrices or even a complete rejection of the liberal model.
The limited nature of resources (energy sources, for example) will remain a rational basis for conflicts between people. Most likely, commodity-money relations will also persist, although their role may not be as all-encompassing as in modern society, since many modern incentives for enrichment will disappear. In addition to resource distribution, money could be used to encourage socially useful NMs and to attract people to significant long-term projects. However, a radical increase in life expectancy will in itself create additional incentives for this. Apart from that, one should expect that society will become more individualistic, since, all other things being equal, it is more rational to program oneself for actions whose results depend only on oneself. One of the first victims of APN will be the currently existing archaic system of relations between the sexes, in which the happiness of one person strongly depends on the actions of another (sometimes, moreover, irrationally motivated). Most likely, the practice of uniting people into social groups and families will persist in the future, but mainly in cases where it is useful for survival. This, of course, does not mean that a person will “calculate” the consequences of this or that social action in each specific case; they will simply act in accordance with their tastes and preferences, determined by programming. In the process of natural historical development, knowledge about the consequences of using a particular NM will accumulate, and most people will choose those most effective for survival and for maximizing utility in general. It is not excluded that in the long term, the number of subjects will decrease while the capabilities of each expand. It can be assumed that in the distant future, people will be able to have several auxiliary terminal bodies, remotely controlled from one main one (biomarion technology [2:2]).
Perhaps a technology for merging subjects into one will appear in such a way that it does not mean the death of any of them.
The creative interests of people will undergo a significant reorientation. Scientific and technical creativity will be preserved and increase in importance; however, “pure” art, which has only aesthetic value, may lose its relevance as APN technologies develop (if a person can enjoy the ordinary everyday world around them, natural landscapes, the smell of grass, the rustle of leaves, just as they would the most ingenious work of art). This will affect not only so-called “mass,” “entertainment” culture, but everything whose sole purpose is to evoke certain feelings and emotions (being, ultimately, an indirect form of comfort control). Art containing cognitive or developmental elements may survive, but even that will be called into great question by the fundamental restructuring of the human intellect in the process of cyborgization. On the other hand, fundamentally new directions in art may appear, related to the development of APN methods, which could have both sensory (e.g., the aforementioned “music of the feelings” in CAS systems) and intellectual significance.
A long-term factor that will always limit an “unlimited happy life” is global cataclysms. These include not only earthquakes, floods, tsunamis, and the like, which over time may cease to be a problem; the broader task is the indefinite preservation of a habitat suitable for human life. This means not only protecting nature but also providing humans with energy sources and protection from cosmic dangers (meteorites, asteroids, nearby supernova explosions). In a few billion years, humanity will need to be saved from the death of the Sun. Perhaps this will require resettlement to another planetary system; perhaps humans will be able to prolong the Sun’s existence indefinitely; perhaps planetary systems will no longer be needed for life at all. In any case, the life of humanity and of individual people cannot be infinite: someday all conceivable sources of free energy in the universe will simply run out. However, the above reasoning gives a vivid illustration of the development potential that a total reorientation towards life extension could give humanity. Consideration of the life expectancy factor shows that APN is unlikely to cause the degradation of humanity. On the contrary, in the long term, it could accelerate progress by freeing humans from expending effort on ensuring momentary comfort.
3.2. The Totalitarian Model of APN and Its Interaction with the Liberal Model
The totalitarian model has a number of well-known advantages over the liberal one. It allows the immediate exclusion from consideration of obviously unwise NMs, as well as a number of AS variants. The totalitarian model makes it easy to organize the joint work of people on significant projects (e.g., research in the field of life extension) and to effectively ensure security.
At the same time, the negative sides of the totalitarian model of artificial programming of needs are equally obvious. The main problem is associated with a certain degree of arbitrariness in the declared goals of societal development. These goals may correspond to the survival tasks of each individual or the state as a whole, but they may also directly contradict them. In the totalitarian model, the question of the mechanism for selecting the applied needs matrices (NMs) is, in fact, a question of power. Mandatory NMs may be chosen democratically, by specialized expert councils, by limited power circles, or even solely by heads of state, who in this case receive virtually unlimited power over people. Theoretically, the totalitarian model of APN combined with authoritarian power could lead to the most horrific consequences. Such power enables the organization of any destructive projects, detrimental both to individuals and to states and humanity as a whole; it allows programming people as obedient slaves who derive pleasure from carrying out orders and suffer from not carrying them out. One can also imagine a situation where a leader, driven by certain ethical, aesthetic, religious, or general philosophical convictions, would program people exclusively for suffering. Any, even the most insane idea using the totalitarian model of APN could be implemented much more effectively than is possible in modern or historical totalitarian states.
To make assumptions about actual historical development, we should consider scenarios where the division into separate states persists on Earth and where the state is unified. In the first case, competition between states will inevitably persist, in which a selection of more viable models will occur. As now, the level of scientific and technological development will remain an extremely important factor in the self-preservation of a state and its political system. States that set as their main goal not self-preservation but some other goal that does not promote it (building an “ideal” society, ethnic purity of the nation, dominance of one religion, etc.) will, all else being equal, find themselves at a disadvantage and in the long term will be unviable. Totalitarian states oriented towards survival will have to find an NM whose forced use will ensure a high rate of scientific and technological development and internal stability of the political system. In principle, the task of state self-preservation may conflict with the task of maximizing utility, both for individuals and for all citizens collectively. Such states are to some extent analogous to colonies of highly developed eusocial insects, in which socially useful behavior is ensured not by violent coercion but by chemical programming of needs using pheromones from the dominant individual (the so-called “benevolent despotism” model) [32]. Such programming effectively ensures the survival of the population but is not always favorable for individuals. It is known, for example, that a side effect of such an organization in honeybees is the extremely short lifespan of workers. But the peculiarity of humans, as already mentioned, is that they are capable of rationally realizing their desire to maximize utility. And if the chosen scheme of mandatory programming turns out to be far from optimal in terms of maximizing utility, citizens will not be loyal to the regime.
A person can be programmed to derive pleasure from obviously dangerous or harmful activities, but on an intellectual level, they will still be interested in issues of life extension and in obtaining pleasure from a wider range of stimuli. The authorities can fight discontent with the most sophisticated methods — programming people to suffer from such thoughts, keeping the fundamental possibility of alternative NMs secret, artificially depriving people of the ability to assess reality critically — but all this will inevitably encounter strong resistance and slow down scientific and technological development (if only because the state will spend a great deal of effort on maintaining the regime). Therefore, the most viable states should be those in which the mandatory NM does not conflict too much with the task of maximizing utility. Then the totalitarian model of APN, optimized for the interests of the state, could have an advantage over the liberal one, optimized for the interests of individuals. On the other hand, any totalitarian model, even the most reasonable and humane, has a significant weakness compared to the liberal one — the imperfection of the mechanism for choosing the NM. The liberal model allows people to experiment on themselves and thus accumulate experience for the whole society. The totalitarian model, however, if not preceded by a liberal period, will rely only on theoretical assessments. It may be good for “canonizing” the best schemes developed under the liberal model, but it would still be a brake on development.
In historical development, competition between APN models will apparently occur, and it is difficult to predict in advance the advantage of one or the other. It can be assumed that even the most radical and unstable forms will appear from time to time in individual states. It is possible that, on the contrary, in the full sense, neither a purely totalitarian nor a purely liberal model will exist anywhere in practice. In the liberal model, as already mentioned, there may be a list of prohibited NMs. One can also imagine a moderately totalitarian model in which a person is given the right to choose between several legally approved permissible matrices. In historical perspective, convergence of models will apparently take place.
If there remains only one unified state on Earth, the factor of interstate competitiveness disappears. Scientific and technological development will cease to be necessary for the preservation of the state, will cease to be an end in itself, remaining only a means to prolong human life. The only conflict left will be the conflict between the survival of the individual and the survival of humanity. But these tasks are interdependent, and such a conflict is much milder than the conflict between individual survival and the strengthening of the state (to which, as historical experience shows, millions of lives can be sacrificed). Therefore, if the goals of a unified state are indeed limited to the survival of humanity (and, possibly, the elimination of conflicts between people), it is not so important whether it implements a liberal or totalitarian model of APN — in any case, the tasks of the state will almost always correspond to the interests of each individual person. Both models could prove sufficiently stable. Development may slow down, but the long-term forecast for the totalitarian model is similar to that for the liberal model.
At the same time, with the disappearance of competition between individual states, an important factor of instability for totalitarian political systems with “crazy” goals will also disappear. In conditions of rivalry, each state must develop and remain strong and competitive. For a unified state, the only restraining factor will be internal tension associated with citizen dissatisfaction, which, in the absence of competition between systems, may be suppressed much more easily. And yet, in the long term, the most odious scenarios seem unrealistic. The ruling elite will still pursue the goal of maximizing utility and, in particular, life expectancy, and for this, the development of science and technology is necessary.
The main features of the totalitarian model of APN in a unified state are well reflected in A. Huxley’s novel Brave New World [9:2]. The dominant NMs are optimized for the interests of state survival, while minimally conflicting with people’s interests (people are kept satisfied). Due to the lack of competition with other states, scientific and technological development slowed down — it, like a number of other phenomena, was sacrificed for stability. The author shows how the choice of the totalitarian path was predetermined by the specific available set of APN technologies: programming is carried out before birth and in early childhood, and further reprogramming is practically impossible.
At the same time, from a modern perspective, Huxley’s novel contains a number of obvious errors, because of which the described world largely acquires a dystopian tinge. These include, in particular, the absurdly exaggerated role of consumption. In modern society, the orientation towards consumption is a way of creating incentives for people to work and, consequently, of raising the rate of development and the competitiveness of the state. In the Brave New World, however, the factor of development speed is no longer decisive, and the strategy of orienting people towards consumption is suboptimal, if only because they can be programmed to work directly. Moreover, the author does not consider such factors as the automation of production, artificial intelligence, and, most importantly, people’s desire to prolong life. All people are programmed to obediently accept the inevitability of death, which is impossible while maintaining high comfort and sufficiently high intellectual abilities (at least in part of the population). The desire to prolong people’s lives would not allow society to slow down development too much.
Conclusion
The natural course of evolutionary development of the organic world established a close connection between the striving of individuals for comfort and the survival of species. Natural selection put all animal activity at the service of the survival task, although the behavior of the organisms themselves was motivated only by the pursuit of relatively short-term, or even entirely momentary, comfort.
Humans were able to realize the task of survival explicitly and thus separate it from the task of maximizing instantaneous and short-term comfort. For the vast majority of people in the past, the main task was survival, since staying alive posed a greater problem than maintaining average comfort at the modest but sufficient level provided by the traditional way of life.
Modern society, in comparison with traditional society or the animal world, finds itself in a unique state where most activity (at least in developed countries) has shifted from survival to increasing comfort. The prerequisites for this were, firstly, an increase in average life expectancy almost to the natural biological limit (not yet overcome), and secondly, the emergence of new, widely available methods for increasing comfort. Thus, life expectancy became much less dependent on human actions than comfort. The established hedonistic orientation of modern civilization is often condemned from ethical positions, but at the current level of development of productive forces, it has in practice shown its stability and greater competitiveness compared to alternative models (attempting to artificially limit the possibilities of maximizing comfort).
The emergence of methods for artificial programming of needs, along with a radical increase in life expectancy, may create in the foreseeable future prerequisites for another change in the value paradigm – a reorientation of all activity back towards survival. This will, in a certain sense, be a return “to nature,” to the animal state, but already at a qualitatively new stage of evolutionary development. This work aims to show that, contrary to popular opinion, this will not lead to the degradation of society but, on the contrary, will increase its desire for development. Society may become simpler, more one-sided from the point of view of modern humans, but, at the same time, paradoxically combine the ideals of hedonism (easily accessible maximum enjoyment of life), Marxism (happy labor for the common good), scientism (orientation towards scientific and technological development), and even religious-ethical concepts (maximum relief of human suffering, resolution of conflicts, liberation from “animal” passions). People may radically change their bodies, become cyborgs, learn to arbitrarily change their form, and use any conceivable energy sources. But whatever fantastic transformations happen to humans in the distant future, the key question will remain about needs, about the motivation of activity. And it is this that will determine the course of further human development.
Lem S. Summa Technologiae. Translated from Polish by A. G. Gromova, D. I. Iordanskii, R. I. Nudelman, B. N. Panovkin, L. R. Pliner, R. A. Trofimov, Yu. A. Yaroshevskii. Moscow: Mir, 1968. 607 p. (in Russian) [Original: Lem S. Summa Technologiae. Krakow: Wydawnictwo literackie, 1967]
Deering M. Rassvet singulyarnosti. [Electronic resource]. Translated from English by P. Vasilev. Available here (in Russian) // Original: Deering M. S. The Dawn of Singularity. Available here
Kosarev V. V. Kto budet zhit’ na zemle v XXI veke? [Who will live on Earth in the 21st century?]. Neva. 1997. No. 10. p. 135–149. Available here (in Russian)
Huxley A. O, divnyi novyi mir. Translated from English by O. Soroka, V. Babkov. St. Petersburg: Amfora, 1999. 541 p. (in Russian) [Original: Huxley A. Brave new world. London: Chatto & Windus, 1932. 306 p.]
Strugatskii A. N., Strugatskii B. N. Khishchnye veshchi veka [The Final Circle of Paradise]. In: Strugatskii A. N., Strugatskii B. N. Sobranie sochinenii [Collected Works]. Moscow: Tekst, 1992. Vol. 3. p. 413. (in Russian)
Bolonkin A. A. Nauka, dusha, rai i vysshii razum [Science, Soul, Paradise and the Supreme Intelligence]. [Electronic resource]. 2001. Available here and here (in Russian)
Freud S. Po tu storonu printsipa udovol’stviya. Moscow: Progress, 1992. 545 p. (in Russian) [Original: Freud S. Beyond the Pleasure Principle. London: Hogarth, 1920]
Dyshnik M., ed. Materialisty Drevnei Gretsii. Sobranie tekstov Geraklita, Demokrita i Epikura [The Materialists of Ancient Greece. A Collection of Texts by Heraclitus, Democritus, and Epicurus]. Moscow: Gospolitizdat, 1955. 238 p. (in Russian)
Bentham J. Vvedenie v osnovaniya nravstvennosti i zakonodatel’stva. Moscow: Rosspen, 1998. 415 p. (in Russian) [Original: Bentham J. An introduction to the principles of morals and legislation. London, 1789]
Mill J. S. Utilitarianizm. O svobode. 3rd ed. St. Petersburg: Perevoznikov, 1900. 236 p. (in Russian) [Original: Mill J. S. Utilitarianism. London, 1863]
Gossen H. H. Entwickelung der Gesetze des menschlichen Verkehrs, und der daraus fliessenden Regeln für menschliches Handeln [Development of the Laws of Human Intercourse and the Consequent Rules for Human Action]. Berlin: R. L. Prager, 1889. (in German) [Original: Gossen H. H. Entwickelung der Gesetze des menschlichen Verkehrs, und der daraus fliessenden Regeln für menschliches Handeln. Braunschweig: Vieweg, 1854]
Jevons W. S. Politicheskaya ekonomiya. St. Petersburg: Narodnaya pol’za, 1905. (in Russian) [Original: Jevons W. S. Theory of Political Economy. London: Macmillan, 1871]
Menger K. Osnovaniya politicheskoi ekonomii [Principles of Economics]. In: Avstriiskaya shkola v politicheskoi ekonomii: K. Menger, E. Böhm-Bawerk, F. Wieser [The Austrian School of Economics: K. Menger, E. Böhm-Bawerk, F. Wieser]. Moscow: Ekonomika, 1992. (in Russian) [Original: Menger C. Grundsätze der Volkswirtschaftslehre. Vienna: W. Braumüller, 1871]
Walras L. Elementy chistoi politicheskoi ekonomii ili Teoriya obshchestvennogo bogatstva [Elements of Pure Economics: or, The Theory of Social Wealth]. Translated from French by I. A. Egorov, A. V. Belyanin. Moscow: Izograf, 2000. 421 p. (in Russian) [Original: Walras L. Éléments d’économie politique pure; ou, Théorie de la richesse sociale. Lausanne: Corbaz, 1874]
Milner P. Fiziologicheskaya psikhologiya. Translated from English by O. S. Vinogradova. Moscow: Mir, 1973. 647 p. (in Russian) [Original: Milner P. M. Physiological psychology. New York: Holt, Rinehart & Winston Inc., 1970]
Beer S. Mozg firmy. Moscow: Radio i svyaz’, 1993. 192 p. (in Russian) [Original: Beer S. Brain of the Firm: a development in management cybernetics. New York: Herder and Herder, 1972. 319 p.]
Korshunov A. M., Preobrazhenskaya I. S. Programmirovannaya smert’ kletok (apoptoz) [Programmed cell death (apoptosis)]. Nevrologicheskii zhurnal [Neurological Journal]. 1998. Vol. 3, No. 1. p. 40–47. (in Russian)
Olovnikov A. M. Printsip marginotomii v matrichnom sinteze polinukleotidov [The principle of marginotomy in template synthesis of polynucleotides]. Doklady Akademii Nauk SSSR [Proceedings of the USSR Academy of Sciences]. 1971. Vol. 201, No. 6. p. 1496–1499. (in Russian)
Bodnar A. G., Ouellette M., Frolkis M., Holt S. E., Chiu C. P., Morin G. B., Harley C. B., Shay J. W., Lichtsteiner S., Wright W. E. Extension of life-span by introduction of telomerase into normal human cells. Science. 1998. Vol. 279. p. 349–352.
One such change may be associated with the development of technologies of artificial programming of needs (APN) — the purposeful programming of the motivations of human actions. Needs are fundamental because they set the purposes of activity. All other biological and technological changes in humans can only provide the means to achieve these purposes. The formulation of the problem of purposefully forming purposes sounds paradoxical, almost tautological. By what criterion can this ultimate goal be chosen, especially if a person is programming themselves? Most futurists ignore this problem; some consider it immoral. Usually, the issue is viewed through the prism of only traditional methods of programming needs (upbringing, propaganda, other psychotechnologies of “consciousness manipulation,” chemical substances), the possibilities of which are significantly limited. However, it seems highly probable to us that new methods of APN will appear in the future, associated, in particular, with the direct, somatic reassignment of connections in the neural tissue of the brain, which will lead to significant changes in people’s lifestyles and the structure of society. The first truly famous work dedicated to the purposeful programming of human needs and its social consequences was A. Huxley’s novel Brave New World [9]. It shows how revolutionary the fruits of improving even just traditional programming methods could be. The theoretical possibilities of new APN methods, as we will see below, are generally almost limitless. It is all the more paradoxical that this problem has not formed its own special, coherent direction in futurology. One can identify works that discuss technologies for artificial stimulation of pleasure centers or genetic reprogramming of humans to rid them of suffering and/or increase the average comfort of life.
There are two polar points of view — to consider such technologies a new drug that will lead to the degradation of humanity [10], or, conversely, to see in them a path to building a society of universal happiness [11][12]. In the full sense, only such individual, “isolated” works as [7:1] are devoted to the problems of APN.
It is quite difficult to cover in one article both the fundamental and technical prerequisites of APN, as well as the prospects for the possible development of humanity under various social scenarios (in particular, considering the possibility of a liberal and a totalitarian approach to the use of technologies). A comprehensive examination of the APN problem would require a whole monograph, but we will try to briefly highlight its main aspects. Unlike authors who emphasize what humanity should strive for, we will try to assess what might actually happen, considering the prospects and dangers of this path.
1. Description of the Behavior of Living Beings in Terms of Comfort Maximization
A fundamental property of all animals, starting from a certain level of evolutionary development, is the distinction between pleasant and unpleasant sensations and emotions. They define which actions and stimuli are to be sought and which avoided; they define needs, the initial principles of any purposeful behavior. Pleasant and unpleasant sensations and emotions could theoretically be associated with any stimuli, but in all actually existing species (except, in part, humans), the set of correspondences (the needs matrix, NM) is defined in such a way as to promote the survival of the species and, indirectly, the development of the entire organic world. Obviously, an animal that derived pleasure from pain or felt fear of food would be unviable. As P. V. Simonov wrote, “it is precisely the dialectic of preservation and development that led to the formation in the process of evolution of two main varieties of emotions — negative and positive. The subject seeks to strengthen, prolong, and repeat a positive emotion, and to weaken, interrupt, and prevent a negative one” [13][14].
The behavioral strategy of an animal can be represented as a problem of maximizing a certain quantity q, which we will call the comfort of a state. Comfort is a measure of the pleasantness of a state, regardless of the specific factors that cause it. Comfort can be defined as the degree of a subject’s satisfaction with their current sensory state, assuming the possibility of its unlimited continuation. Discomfort, accordingly, is a state with negative comfort, which the organism seeks to interrupt. Comfort is not equivalent to purely “physical” pleasure; it is an integral characteristic of all sensations and emotions that can be regarded as positive and negative. Neurophysiologically, these are generally associated with various centers of the brain, but there is a subjective scale of priority between them. The possibility of objectively measuring q is problematic, but subjectively we can build a hierarchy of states according to their desirability.
In the simplest case, an organism seeks to maximize only the instantaneous, current value of comfort q. It looks for actions that can increase comfort and performs them as long as they yield the desired result. In effect, the organism seeks a local maximum of the function q in the space of its actions (the form of this function may change over time under the influence of external factors). Beings capable of predicting events and planning actions for some time T into the future are able to solve the problem of maximizing not instantaneous comfort, but its most probable average value q̄ over that time. If the forecasting horizon depends on the subject’s actions, corresponding to the length of some known state (after which comfort is unknown), the subject seeks to prolong a state with a positive predicted value of q̄ and shorten a state with a negative one. Such a behavioral strategy can be described as the desire to maximize the product of average comfort and the forecasting time T. This quantity, which we will call utility, is equal to the integral of instantaneous comfort over time:

U = q̄·T = ∫₀ᵀ q(t) dt,

where the current moment is taken as the zero of time. In particular, if T is fixed (does not depend on the subject’s actions), maximizing U simply means maximizing average comfort.
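As a simple numerical illustration of the definition above, the sketch below (a toy model with invented comfort trajectories, not part of the original text) approximates U as a discrete sum of instantaneous comfort over the forecasting horizon, and compares a short, intense state with a long, moderate one:

```python
# Toy discretization of the utility model: U is approximated as a Riemann
# sum of instantaneous comfort q over time. Trajectories are invented.

def utility(comfort_trajectory, dt=1.0):
    """Approximate U = integral of q(t) dt as a sum over time steps."""
    return sum(q * dt for q in comfort_trajectory)

# Strategy A: brief intense pleasure, after which the horizon ends.
a = [10, 10, 10]        # q = 10 for 3 time units
# Strategy B: moderate comfort sustained over a much longer horizon.
b = [2] * 30            # q = 2 for 30 time units

print(utility(a))  # 30.0
print(utility(b))  # 60.0
```

Under these invented numbers, the longer trajectory wins even though its instantaneous comfort is five times lower, which is exactly the point of introducing U rather than q alone.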
The desire to maximize utility can be interpreted as a willingness to sacrifice a small immediate comfort for a greater additional comfort in the future (for humans, S. Freud calls this the reality principle, as opposed to the purely animal pleasure principle [15]), but in practice emotions provide feedback that makes the instantaneous value q dependent on the integral U. Thanks to this, a possible contradiction between maximizing q and maximizing U is fully or largely eliminated. For example, an animal ignores food if it knows that danger is associated with it. In doing so, it sacrifices the pleasant sensations that food provides, but it does this not so much because of abstract knowledge of the danger as because of fear, which is itself an unpleasant emotion and produces such discomfort that the pleasure from food cannot compensate for it. The animal refuses food in order to get rid of the unpleasant emotion. Thus, the animal is able to care about the future (maximize U) by simply striving to maximize q. Fear, of course, arises only due to knowledge of danger, the ability to predict events, and this leads to an objective difference in the behavioral strategies of animals with T = 0 and T > 0.
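The feedback just described can be caricatured in a few lines of code: if fear, triggered by mere knowledge of danger, is folded into instantaneous comfort with a penalty large enough to outweigh the pleasure of food, a purely greedy q-maximizer already behaves farsightedly. All numbers and action names below are illustrative assumptions, not anything from the original text:

```python
# Sketch: fear folds predicted future harm into *instantaneous* comfort q,
# so greedy maximization of q reproduces prudent, future-oriented behavior.

def instantaneous_comfort(action, knows_danger):
    pleasure = {"eat": 5, "flee": 0}[action]
    # Fear: an unpleasant emotion triggered by knowledge of danger, large
    # enough that the pleasure of food cannot compensate for it.
    fear = -8 if (action == "eat" and knows_danger) else 0
    return pleasure + fear

def choose(knows_danger):
    # Greedy search for the local maximum of q over the action space.
    return max(["eat", "flee"],
               key=lambda a: instantaneous_comfort(a, knows_danger))

print(choose(knows_danger=False))  # eat
print(choose(knows_danger=True))   # flee
```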
The question of the applicability of the above to humans is the question of the validity of utilitarianism. The founder of utilitarian ideas (in a broad sense) was Epicurus, who believed that people should always strive for what they believe will bring them satisfaction and avoid what they believe will cause them suffering [16]. The founder of modern utilitarian philosophy was J. Bentham [17], whose ideas were later developed by J. S. Mill [18]. Since that time, the model of man as a being striving to maximize “good” has ceased to be a subject exclusively of philosophical thought; it gave a significant impetus to the development of sociology and became one of the cornerstones of economic theory [19][20][21][22]. But to this day, utilitarian ideas remain controversial. Traditionally, they are condemned for representing man as immoral, selfish, governed by animal instincts. However, the fairness of such accusations strongly depends on the specific meaning we assign to the words “pleasure,” “comfort,” “utility,” “good.” With the definition of comfort we use in this work, we only assert that a person, when behaving rationally, strives to act in such a way as to be satisfied with their actions and their consequences. The dialectic of the utilitarian approach is such that, even by setting a goal higher than obtaining pleasure, a person still strives for the pleasant and avoids the unpleasant; only new factors come to act as the pleasant and the unpleasant. In particular, the comfort of one subject may increase simply through awareness that the comfort of other subjects has increased. This ability of altruists to make sacrifices for other people while remaining satisfied has not only philosophical but also neurophysiological [23][24] and evolutionary [25] justifications.
Be that as it may, the human striving for comfort has a number of significant differences from the behavior of other animals. An important feature of humans is the logical awareness of their ability to care about the future. The time for forecasting and planning events is significantly longer for them than for other animals and can be comparable to the lifespan itself. Thanks to this, a person can raise the question of the value of life on a rational, not merely instinctive, level. In the traditional religious conception of an afterlife or predetermined reincarnation, the forecasting horizon is theoretically unlimited, and maximizing utility means, among other things (and often primarily), caring about the future life. But if death is the end of everything, or a transition to a fundamentally unpredictable state, the forecasting and planning time T cannot exceed the remaining biological lifespan T_bio. If T = T_bio, then, depending on the predicted value of average comfort q̄, a person faces the task of prolonging or shortening life (according to the same simple principle that the pleasant is what should be prolonged, and the unpleasant is what should be stopped or shortened). From this, a person gains two new possibilities: first, to care about survival when instincts do not require it (no real immediate danger); second, to go against the instinct of self-preservation if there are logical, non-affective reasons for ending life (the upcoming life, if not sacrificed, promises physical or spiritual suffering). Thus, a rational approach leads a person to deny the unconditional necessity of survival, but with a positive q̄ it gives a new powerful incentive to preserve and prolong life. The need for survival is no longer independent; it turns out to be a function of the success in satisfying other needs. It is particularly important to note that we are talking here about individual survival, which only indirectly contributes to the survival of the species or population.
Another feature of humans is life in a rapidly changing environment. The rate of environmental change caused by human activity is incomparably higher than the rate of natural biological evolution, so basic biological needs do not have time to adapt to new realities. Thus, while for wild animals tasty food is almost always beneficial, for humans the relationship is often reversed. Many human food products do not exist in nature in ready-made form, and no mechanism for adequately assessing their usefulness has been developed. Sexual selection continues to be largely based on completely archaic criteria that do not correspond to the interests of psychological compatibility (e.g., appearance). The most striking example of the discrepancy between the pleasant and the useful is hard drugs, which combine the strongest pleasant sensations with mortal danger. Such discrepancies are possible in other animals with T > 0 as well, but in humans, due to the longer forecasting time T, survival (and utility maximization) is particularly strongly “detached” from momentary pleasures. At the same time, the rapid change of environment creates prerequisites for disrupting the connection of survival not only with q, but also with U. Nevertheless, humans are a biologically very successful species. This is partly achieved due to their special attitude towards survival, but there is also another important factor — new, easily variable needs associated with higher nervous activity, capable of changing at the same speed as society and civilization develop. They can take various forms: creativity, socially useful labor, cognition of the world, morality, etc., but they are all united by the ability to vary easily both between different individuals and within one individual over a lifetime. It would be wrong to consider the listed spheres of activity the exclusive prerogative of humans; in rudimentary form, some of them (e.g., creativity) also exist in other higher animals.
But the peculiarity of humans lies precisely in the variability of the needs matrix, in the absence of a single innate set of preferences for all individuals, and it is this that has allowed natural selection to maintain the connection between the survival of the population and the maximization of U by individuals.
2. Artificial Programming of Needs: Technical Issues
The existence in humans of easily variable “supra-biological” needs illustrates well that pleasant and unpleasant sensations and emotions are not always tied to specific events and stimuli. The same phenomenon or type of activity (a work of art, a scientific problem, a human action) can be pleasant for one person, unpleasant for another, and neutral for a third. Naturally, a person comes to the question of the possibility of purposefully establishing these connections, artificially programming needs. In society, the task of programming needs is performed by upbringing and ideology, but their possibilities, as we have already said, have known limitations. Is arbitrary programming of needs possible?
The task of artificial programming of needs (APN) is closely related to the task of controlling comfort. Control of comfort is carried out in the daily activities of living beings in any interaction with the outside world, with the aim of creating pleasant stimuli and removing unpleasant ones. But there are also methods of controlling comfort that imply a direct effect on nerve centers, for example, chemical (narcotic substances) or electrical. Electrical stimulation of pleasure centers is most famous from the experiments of J. Olds and P. Milner [26] in 1954. In these experiments, rats with electrodes implanted in their pleasure centers could stimulate them by pressing a button. When the rats understood that such a connection existed, they began to constantly close the contacts, losing interest in food and individuals of the opposite sex. Subsequently, C. Sem-Jacobsen and a number of other scientists conducted similar experiments on humans in a neurosurgical clinic. The studies showed that stimulation of similar brain areas caused feelings of joy, satisfaction, and erotic experiences.
Direct control of comfort is programming of needs only in the trivial sense that the appearance of a new pleasant stimulus leads to the emergence of a need to strive for it. By true programming of needs, we will understand not the creation of a new stimulus, but the establishment of connections between an existing stimulus and the sensation of comfort (connections in the needs matrix, NM). Such an approach, in accordance with cybernetic terminology, can be called algedonic [27] .
The simplest method of direct, somatic reprogramming of needs is the surgical suppression or destruction of centers responsible for some pleasant or unpleasant sensations and emotions. Cases have long been known where a person, after a brain injury, for example, lost the ability to feel pain. Nowadays, surgical treatment of drug addiction is increasingly being practiced, where after stereotactic (based on high-precision intervention) suppression of a certain pleasure center, a person stops receiving pleasant sensations from harmful substances.
More complex APN tasks are associated with the problem of stimulus recognition. While this is not particularly difficult for chemical analyzers (taste, smell), and generally simple static images (simple pictures, individual sounds, elementary tactile sensations), it is much more complex for dynamic images, especially those recreated from information from several senses at once. It is easy to imagine how to make a person consider one food tasty and another not (for example, to program an attraction only to healthy food, if this can be determined by taste): it is necessary to study the taste signals entering the brain from different substances and change the principle by which the brain determines their pleasantness. One could also program a person to derive pleasure from physical labor and from active work in general; one could even (if needed for something) make pain sensations pleasant. But how to program the reactions of pleasure centers to complex, specialized types of activity, for example, to scientific work and creativity? This would require either extremely complex recognition of dynamic images (how, from visual and other sensations, to know that a person has made a scientific discovery?) or recognition of thoughts. In the latter case, the pleasure center would react not to external stimuli indicating the process or results of activity, but to the person’s thoughts about it. But here there is another difficulty, related to the fact that a person is capable of thinking about non-existent things (for example, mentally imagining scientific activity or its results that do not exist in practice).
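One crude way to picture the needs matrix discussed above is as a lookup table from recognized stimuli to comfort values; true reprogramming of a need then amounts to rebinding an existing entry rather than creating a new stimulus. The stimuli and numeric values below are invented purely for illustration:

```python
# Toy "needs matrix" (NM): a mapping from recognized stimuli to comfort
# values. APN = rebinding entries of an existing stimulus, per the text's
# distinction from merely adding new pleasant stimuli.

needs_matrix = {
    "sweet_taste": +5,    # innate: tasty
    "bitter_taste": -3,   # innate: unpleasant
    "physical_labor": -1, # innate: mildly unpleasant
}

def comfort(stimulus):
    return needs_matrix.get(stimulus, 0)  # unrecognized stimuli are neutral

# APN step: rebind existing stimuli to new comfort values.
needs_matrix["sweet_taste"] = -2      # unhealthy food made unattractive
needs_matrix["physical_labor"] = +4   # labor made pleasant

print(comfort("physical_labor"))  # 4
```

The hard part the paragraph points to, recognizing *which* stimulus is present (especially dynamic, multi-sensory ones like "doing creative work"), is hidden here inside the dictionary key and is precisely what a real implementation would have to solve.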
In [7:2], V. Kosarev expresses the idea that APN technologies will develop simultaneously with artificial intelligence and cyborgization technologies. Cyborgization, as a result of which a person, including their brain, will become a hybrid of the biological and the technological, will allow transferring the APN problem from the field of pure neurophysiology to the field of computer science and control theory. This will make it possible to define the concepts of pleasant and unpleasant more strictly and to set the principle of utility maximization. Of course, a cyborg, like an ordinary person, must have subjective sensations, will, and emotions, so its creation will require a comprehensive study of the nature of consciousness, not limited to the realm of the pleasant and unpleasant. The cybernetic approach to regulating the behavior of systems for which pleasant and unpleasant, “reward” and “punishment” are defined (algedonic loops are created) was considered by one of the founders of modern control theory, S. Beer, in [27:1]. One can imagine an automatic system for artificial stimulation of pleasure centers, made in the form of a separate programmable device connected to the cyborg’s brain.
In any case, it seems to us that the difficulties of APN are only technical, and there are no fundamental limitations here. Theoretically, someday any conceivable NM might become possible, but even if this does not happen, their artificial assignment will become possible within very wide limits. It is only a matter of time.
3. Practical Use of Programming of Needs and Its Possible Social Consequences
If we assume that artificial programming of needs (APN) has become possible, the question arises about the goals and consequences of its practical use.
To make a forecast of the possible development of society, it is necessary to consider two factors: the interests of individual people striving to be satisfied with life, and the interests of states, which theoretically can be quite arbitrary (depending on the moral values accepted in society, the personal views of statesmen, etc.), but in historical perspective are subject to a selection process in which some models turn out to be more viable and others die out.
Assuming that APN is technically publicly available, two extreme models of social structure in relation to it can be distinguished. The first model, which we will conditionally call liberal, is that each person is given the right to decide for themselves which stimuli to consider pleasant and unpleasant. The development of society in such a model will be determined by the personal interests of people, their individual approaches to programming their needs. The opposite of the liberal model is the totalitarian model, according to which all (or most) people must be programmed forcibly (or before birth) in accordance with the interests of society, the state, or specific people in power (a fictional description of one such variant is given in [9:1]).
3.1. The Liberal Model of APN
Let us first discuss the prospects and problems of the liberal model, as it is more fundamental and reductionistic.
Apparently, the majority of people in their programming will be driven by the desire to increase the comfort of life. But the choice of a specific method is ambiguous. The full realization of APN ideas means that the same sensations can be obtained from any chosen stimulus or type of activity. Any pleasures, including not only “bodily” enjoyments but also the deepest emotional, spiritual experiences, with appropriate programming, can be obtained from creativity, from socially useful labor, etc., as well as from simply pressing a button (by the method of artificial stimulation, AS). By what criterion should the needs matrices (NM) be chosen? From the point of view of modern values, creativity and labor are good, while pleasure from pressing a button is a surrogate and evil. But how can such a position be justified rationally? A person choosing their NM could raise counterarguments. Why is creativity needed in society, except to obtain those very emotions that can now be achieved in many other ways? What is the benefit to society if people in it already have a means to be happy? Pressing a button is, at least, technically the simplest way of direct comfort control, without any tricks with stimulus recognition, etc.
In the literature, dystopian forecasts of such development are common: a person who receives the strongest feelings artificially becomes like the rat from the experiments described in [26:1] and loses interest in other activities; society degrades, stops developing, or even perishes. An example of a fictional description of such a society is the story The Final Circle of Paradise by A. and B. Strugatsky [10:1]. Even in the aforementioned work [7:3], where the idea of comfort control is generally viewed optimistically, the author emphasizes that ”… the ‘pleasure’ center… must be reliably protected from the possibility of bypassing or ‘shirking’ the execution of necessary programs by directly affecting its ‘positive emotion’ centers,” i.e., he considers it necessary to introduce an artificial ban on AS. M. Deering displays a similar helplessness in the face of the problem [6:1]: “After the Singularity a combination of nanotechnology and reverse engineering of the brain will give us the ability to experience any psychological state we choose at any time as much as we want without physical harm. Who will be able to resist the temptation to wirehead? And once experienced will the memory of the episode be addictively irresistible? Will it even be psychologically possible to turn it off the first time? We are all evolutionarily programmed to seek pleasure… This threat is perhaps the most serious of all the hazards associated with advanced technology… It might be a good idea to refrain from experimenting with your state of consciousness. … Do not alter the normal hardwired pleasure reward structures of your psyche.”
In our opinion, however, there is a simple natural mechanism that would prevent the frightening forecasts from materializing in the liberal model. If a person strives to maximize not the instantaneous comfort q but the integral U, one crucial factor remains for them in any situation — the factor of life expectancy. And if it becomes possible to easily ensure arbitrarily high (within the limits of technical capabilities) comfort of life (in everyday language, its “quality”), then the task of increasing its duration (its “quantity”) comes to the fore.
Given this, is artificial stimulation of pleasure centers by “pressing a button” really so good? Would it not have negative consequences for life expectancy? Like “traditional” drugs, methods of direct comfort control may pose a direct danger to health or cause physical dependence, where the danger is not the fact of AS itself, but the possible withdrawal from it. If these problems are solved and AS is easily accessible and harmless, an important question remains its compatibility with other activities. A person must eat, sleep, and ensure their safety. A well-known problem with conventional drugs is the person’s incapacity while intoxicated, loss of self-control, and reduced mental abilities. If AS has the same side effect, a person will be forced to periodically exit the state of euphoria and ensure their viability, while falling into a “natural” state with lower comfort. But such a lifestyle is no different from ordinary drug addiction; it is an extremely irrational choice both from the standpoint of maximizing the comfort of life (which can only be realized during periods of intoxication) and from the standpoint of survival (for which the person will have no sensory incentives). This path is even more unreasonable if the same sensations can, with appropriate programming, be obtained from ordinary labor, combining “business with pleasure” (which is impossible for ordinary drugs).
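The irrationality argument above can be made concrete with a back-of-the-envelope utility comparison. All figures below are invented assumptions: a regime of alternating euphoria and low-comfort sober maintenance, with a lifespan shortened by neglected survival needs, against an NM that attaches the same pleasure to ordinary useful activity:

```python
# Rough comparison of lifetime utility U = (average comfort) * (lifespan)
# for two regimes. All numbers are illustrative assumptions only.

def regime_utility(q_high, q_low, frac_high, lifespan):
    """Average comfort of alternating high/low phases, times lifespan."""
    return (q_high * frac_high + q_low * (1 - frac_high)) * lifespan

# Drug-like AS: euphoria half the time, uncomfortable maintenance the rest,
# and a shorter life because survival has no sensory incentives.
drug_like = regime_utility(q_high=10, q_low=-2, frac_high=0.5, lifespan=40)

# Work-programmed NM: the same peak sensations attached to useful activity,
# available all the time, with survival actively supported.
work_nm = regime_utility(q_high=10, q_low=10, frac_high=1.0, lifespan=80)

print(drug_like)  # 160.0
print(work_nm)    # 800.0
```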
Consequently, only a method of AS that is easily accessible, harmless, and does not interfere with other matters can find widespread practical application. If it is developed, simply fixing comfort at a certain stably high level, regardless of what a person does and what happens to them — constant artificial stimulation (CAS) — could come into wide practice. CAS should not cause sensory habituation (as happens with many ordinary stimuli, which over time cease to have an effect), otherwise the requirement of constant q is not met. There is no particular problem in this, as shown by a number of modern neurochemical studies described in [11:1]. The specific implementation of CAS may vary. One could, for example, alternately stimulate several brain centers, creating a complex dynamic picture of sensations (a kind of “music of the feelings”) while maintaining a stably high comfort. In the distant future, CAS may have nothing to do with the vulgar archetype of the “rat pressing a button” — it could simply consist of genetically disabling discomfort mechanisms and maintaining high comfort without external intervention [11:2].
The use of CAS, for all its outward odiousness, would not entail dystopian social consequences. A person is happy (otherwise the condition of constant high comfort is not met) and interested in survival. In a certain sense, such a person is very socially convenient. They are not susceptible to drug addiction (including alcoholism), they do not require entertainment activities that do not contribute to survival; for them, the conflict between “pleasant and useful” simply does not exist. A consistently high level of “joy of life” would allow such a person to perform any socially significant work without laziness, as long as it is not associated with danger (such activities by this time could be fully mechanized). Nevertheless, in such a radical understanding, CAS has one significant drawback. If a person is equally satisfied in any situation, only the intellect can assess to what extent it contributes to survival. In a way, this is the loss of some important sense organ. And if in modern humans a negative intellectual assessment itself already causes a feeling of discomfort in advance (feedback), a person with CAS would not have that either. In some cases, rejecting sensory evaluation would be justified. For example, there are situations where severe pain or fear not only does not help to escape from danger but even hinders it. From a survival standpoint, it would be justified to simply receive logical information about the nature of the damage instead of excessively strong pain. But if a person does not feel pain at all (as we have said, such cases actually exist), they are more defenseless against dangers, may not notice damage, or simply treat the threat lightly. Perhaps the future person will assess the situation more adequately on an intellectual level; however, in the author’s opinion, sensory assessment will remain significant in the foreseeable future.
Thus, although CAS allows maximizing the average comfort q̄, it is not as effective for maximizing life expectancy T. To maximize utility U, defined as the integral of comfort over time, it is necessary to find a balance between survival and comfort. Obviously, an optimal NM should give pleasant sensations from actions that promote survival, and unpleasant (or less pleasant) sensations from actions that contradict or simply do not promote it. In order to maintain average comfort at a sufficiently high (though not constant) level, a person should not, when programming themselves, set only difficult-to-achieve tasks; the program should stimulate any activity that promotes survival. The forecast for the development of society in this case will not differ significantly from the forecast under CAS. Since the liberal model assumes that each person is free in choosing their NM, it can be assumed that some people will still choose CAS. It is also possible that for some time CAS will dominate due to the technical complexity of more flexible APN schemes. Some people may also choose deliberately non-optimal and destructive matrices, including those that pose a danger to others. Choosing a socially dangerous NM is unwise from the point of view of maximizing U, since following it will meet with resistance from other people. However, in practice not all people will be guided by utilitarian and pragmatic considerations; a person may strive for any ideals, including destructive ones. Therefore, considering that security will remain of great importance for most people, it can be assumed that the most dangerous NMs will be prohibited by law.
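The balance described here can be sketched as a toy optimization problem under assumed (invented) functional forms: let c measure how far comfort is fixed "always on" (c = 1 being pure CAS), let average comfort grow with c while expected lifespan falls as sensory warning signals are muted, and maximize the product U = q̄·T:

```python
# Toy trade-off: average comfort rises with the degree of fixed stimulation c,
# while expected lifespan falls as warning signals are muted. Both functional
# forms are invented assumptions for illustration.

def average_comfort(c):
    return 1 + 9 * c                  # assumed: q̄ grows from 1 to 10

def expected_lifespan(c):
    return 100 * (1 - 0.5 * c ** 2)   # assumed: muted warnings shorten life

def utility(c):
    return average_comfort(c) * expected_lifespan(c)

# Grid search over c in [0, 1].
best = max((i / 100 for i in range(101)), key=utility)
print(best)  # an interior optimum: neither zero programming nor pure CAS
```

Under these assumptions the optimum is interior: some artificial elevation of comfort pays off, but pure CAS (c = 1) does not, mirroring the argument that an optimal NM keeps comfort high yet variable.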
The most radical change in society during the transition to the widespread use of APN or CAS will probably be the complete withering away of the entire modern entertainment industry. The only “entertainment industry” in such a society would be the development of new, more effective APN methods, including research aimed at increasing the maximum technically possible level of comfort. One should note, however, the considerable danger associated with the possibility of raising the maximum attainable intensity not only of pleasant but also of unpleasant sensations. The use of such technologies for criminal and/or “state” purposes could make it possible to inflict unlimited, truly hellish suffering on a person [12:1]. This danger is so serious that the possibility of even isolated precedents of this kind calls into question the ethical justification of all developments in artificial comfort control. But here we can only hope for the creation of methods of effective individual protection against such abuses and for the general trend towards the rationalization (not necessarily even humanization) of humanity.
In essence, the activity of most people will be re-targeted towards maintaining and prolonging their own lives. As we have already said, in the absence of the problem of the “quality” of life, the natural motivation for action is its “quantity.” There are many factors determining life expectancy. In addition to “traditional” directions related to health promotion, the development of medicine, ensuring public safety, etc., new, “non-traditional” methods aimed at radically increasing average life expectancy should be widely developed. These are based on the fight against aging — the factor that currently sets the insurmountable upper limit of life expectancy. Already today, active searches for methods of radical life extension are underway in many countries, and the scientific and philosophical aspects of the problem are widely discussed (see, for example, the Russian Internet resources [4:1][5:1][8:1]).
The fight against aging can be carried out in various directions. Some methods imply identifying the mechanisms of the “programmed” onset of old age and disabling them. The theory of programmed cell death [28], based on the work of L. Hayflick [29] and A. Olovnikov [30], is widely known, as are the first successful experiments on “immortalizing” human cells in vitro [31]. There are also hypotheses about the genetic mechanisms of the programmed death of multicellular organisms.
Other methods of radical life extension may be based on restructuring the human organism “bypassing” the mechanisms of aging. This could be the replacement of aged organs with new ones (both the separate growing of cloned human organs and the use of organs of other animals are discussed), and cyborgization, which at the initial stage involves the creation of artificial organs, and later a radical restructuring of the organism. At a certain stage, a person will still have a “natural” part subject to aging, but further development of biotechnologies should erase the boundary between the “living” and the “artificial.” Cyborgs will cease to be a mixture of biological and technological parts; they will be living people in the full sense, albeit with an artificial body. Unlimited life extension of a cyborg could be achieved, for example, by a modular scheme [2:1].
Arbitrary construction of the human body, in addition to getting rid of aging, will also provide protection from many dangerous factors, making a person resistant to extreme working conditions, less susceptible to injury, and capable of regenerating most damage. At the same time, of course, a person will still need to maintain their existence, technical serviceability, and food supply. Dangers from destructive actions of other people and global catastrophes will also remain.
The danger emanating from other people, apparently, could be significantly reduced in the course of natural development. As we have already noted, aggressive, destructive needs are not a preferred choice from a rational point of view. In conditions where average life expectancy is very high and a person is free to fill life with joy, the vast majority of people will be cautious and not inclined to take risks. Even in the modern world, it is mostly people who are less well-off and less satisfied with life who become involved in extremist activities. Of course, some people will program themselves deliberately against generally accepted standards, or simply without thinking about the consequences and without drawing on the experience of others. Therefore, some narrowing of individual freedom in APN seems inevitable — a legislative ban on matrices identified as destructive, or even a complete rejection of the liberal model.
A rational basis for conflicts between people will remain: the limited nature of resources, for example, energy sources. Most likely, commodity-money relations will also persist, although their role may not be as all-encompassing as in modern society, since many modern incentives for enrichment will disappear. In addition to resource distribution, money could be used to encourage socially useful NMs and to attract people to significant long-term projects. However, a radical increase in life expectancy will in itself create additional incentives for this. Beyond that, one should expect society to become more individualistic, since, all other things being equal, it is more rational to program oneself for actions whose results depend only on oneself. One of the first victims of APN will be today’s archaic system of relations between the sexes, in which the happiness of one person depends strongly on the actions of another (actions that are, moreover, sometimes irrationally motivated). Most likely, the practice of uniting people into social groups and families will persist in the future, but mainly in cases where it is useful for survival. This, of course, does not mean that a person will “calculate” the consequences of this or that social action in each specific case; they will simply act in accordance with their tastes and preferences, determined by programming. In the process of natural historical development, knowledge about the consequences of using a particular NM will accumulate, and most people will choose the ones most effective for survival and for utility maximization in general. It is possible that in the long term, the number of subjects will decrease while the capabilities of each expand. It can be assumed that in the distant future, people will be able to have several auxiliary terminal bodies, remotely controlled from one main one (biomarion technology [2:2]).
Perhaps a technology for merging subjects into one will appear in such a way that it does not mean the death of any of them.
The creative interests of people will undergo a significant reorientation. Scientific and technical creativity will be preserved and increase in importance; however, “pure” art, which has only aesthetic value, may cease to be in demand as APN technologies develop (if a person can enjoy the ordinary everyday world around them — natural landscapes, the smell of grass, the rustle of leaves — as much as the most ingenious work of art). This will affect not only the so-called “mass,” “entertainment” culture, but also everything whose sole purpose is to evoke certain feelings and emotions (which is, ultimately, an indirect means of controlling comfort). Art containing cognitive or developmental elements may survive, but even its future will be in serious question once the human intellect is fundamentally restructured in the process of cyborgization. On the other hand, fundamentally new directions in art may appear, related to the development of APN methods, which could have both sensory (e.g., the mentioned “music of the feelings” in CAS systems) and intellectual significance.
Global cataclysms will remain a long-term factor limiting the “unlimited happy life.” Earthquakes, floods, tsunamis, and the like may in time cease to be a problem, but a broader task will remain: the indefinite preservation of a habitat suitable for human life. This means not only protecting nature but also providing humans with energy sources and protection from cosmic dangers (meteorites, asteroids, nearby supernova explosions). In a few billion years, humanity will need to be saved from the death of the Sun. Perhaps this will require resettlement to another planetary system; perhaps humans will be able to prolong the Sun’s existence indefinitely; perhaps planetary systems will no longer be needed for life at all. In any case, the life of humanity and of individual people cannot be infinite — someday all conceivable sources of free energy in the universe will simply run out. Nevertheless, the above reasoning gives a vivid illustration of the development potential that a total reorientation towards life extension could give humanity. Consideration of the life expectancy factor shows that APN is unlikely to cause the degradation of humanity. On the contrary, in the long term, it could accelerate progress by freeing humans from expending effort on ensuring momentary comfort.
3.2. The Totalitarian Model of APN and Its Interaction with the Liberal Model
The totalitarian model has a number of well-known advantages over the liberal one. It allows the immediate exclusion from consideration of obviously unwise NMs, as well as a number of AS variants. The totalitarian model makes it easy to organize the joint work of people on significant projects (e.g., research in the field of life extension) and to effectively ensure security.
At the same time, the negative sides of the totalitarian model of artificial programming of needs are equally obvious. The main problem is associated with a certain degree of arbitrariness in the declared goals of societal development. These goals may correspond to the survival tasks of each individual or the state as a whole, but they may also directly contradict them. In the totalitarian model, the question of the mechanism for selecting the applied needs matrices (NMs) is, in fact, a question of power. Mandatory NMs may be chosen democratically, by specialized expert councils, by limited power circles, or even solely by heads of state, who in this case receive virtually unlimited power over people. Theoretically, the totalitarian model of APN combined with authoritarian power could lead to the most horrific consequences. Such power enables the organization of any destructive projects, detrimental both to individuals and to states and humanity as a whole; it allows programming people as obedient slaves who derive pleasure from carrying out orders and suffer from not carrying them out. One can also imagine a situation where a leader, driven by certain ethical, aesthetic, religious, or general philosophical convictions, would program people exclusively for suffering. Any idea, even the most insane one, could be implemented under the totalitarian model of APN far more effectively than is possible in modern or historical totalitarian states.
To make assumptions about actual historical development, we should consider scenarios where the division into separate states persists on Earth and where the state is unified. In the first case, competition between states will inevitably persist, driving a selection of more viable models. As now, the level of scientific and technological development will remain an extremely important factor in the self-preservation of a state and its political system. States that set as their main goal not self-preservation but some other goal not conducive to it (building an “ideal” society, ethnic purity of the nation, dominance of one religion, etc.) will, all else being equal, find themselves at a disadvantage and in the long term will be unviable. Totalitarian states oriented towards survival will have to find an NM whose forced use will ensure a high rate of scientific and technological development and internal stability of the political system. In principle, the task of state self-preservation may conflict with the task of maximizing utility, both for individuals and for all citizens collectively. Such states are to some extent analogous to colonies of highly developed eusocial insects, in which socially useful behavior is ensured not by violent coercion but by chemical programming of needs using pheromones from the dominant individual (the so-called “benevolent despotism” model) [32]. Such programming effectively ensures the survival of the population, but is not always favorable for individuals. It is known, for example, that a side effect of such an organization in honeybees is the extremely short lifespan of workers. But the peculiarity of humans, as already mentioned, is that they are capable of rationally realizing their desire to maximize utility. And if the chosen scheme of mandatory programming turns out to be far from optimal in terms of maximizing utility, citizens will not be loyal to the regime.
A person can be programmed to derive pleasure from obviously dangerous or harmful activities, but on an intellectual level, they will still be interested in issues of life extension and obtaining pleasure from a wider range of stimuli. The authorities can fight discontent with the most sophisticated methods — program people to suffer from such thoughts, keep the fundamental possibility of alternative NMs secret, artificially deprive people of the ability to critically assess reality — but all this will inevitably encounter strong resistance and slow down scientific and technological development (if only because the state will spend a lot of effort on maintaining the regime). Therefore, the most viable states should be those in which the mandatory NM does not conflict too much with the task of maximizing utility. Then the totalitarian model of APN, optimized for the interests of the state, could have an advantage over the liberal one, optimized for the interests of individuals. On the other hand, any totalitarian model, even the most reasonable and humane, has a significant weakness compared to the liberal one — the imperfection of the mechanism for choosing the NM. The liberal model allows people to experiment on themselves and thus accumulate experience for the whole society. The totalitarian model, however, if not preceded by a liberal period, will rely only on theoretical assessments. It may be good for “canonizing” the best schemes developed by the liberal model, but it would still be a brake on development.
In historical development, competition between APN models will apparently occur, and it is difficult to predict in advance the advantage of one or the other. It can be assumed that even the most radical and unstable forms will appear from time to time in individual states. It is also possible that, on the contrary, neither a purely totalitarian nor a purely liberal model will exist anywhere in practice in its full sense. In the liberal model, as already mentioned, there may be a list of prohibited NMs. One can also imagine a moderately totalitarian model in which a person is given the right to choose between several legally approved permissible matrices. In the historical perspective, the models will apparently converge.
If there remains only one unified state on Earth, the factor of interstate competitiveness disappears. Scientific and technological development will cease to be necessary for the preservation of the state, will cease to be an end in itself, remaining only a means to prolong human life. The only conflict left will be the conflict between the survival of the individual and the survival of humanity. But these tasks are interdependent, and such a conflict is much milder than the conflict between individual survival and the strengthening of the state (to which, as historical experience shows, millions of lives can be sacrificed). Therefore, if the goals of a unified state are indeed limited to the survival of humanity (and, possibly, the elimination of conflicts between people), it is not so important whether it implements a liberal or totalitarian model of APN — in any case, the tasks of the state will almost always correspond to the interests of each individual person. Both models could prove sufficiently stable. Development may slow down, but the long-term forecast for the totalitarian model is similar to that for the liberal model.
At the same time, with the disappearance of competition between individual states, an important factor of instability for totalitarian political systems with “crazy” goals will also disappear. In conditions of rivalry, each state must develop, be strong, and be competitive. For a unified state, the only restraining factor will be internal tension associated with citizen dissatisfaction, which, in the absence of competition between systems, may be suppressed much more easily. And yet, in the long term, the most odious scenarios seem unrealistic. The ruling elite will still pursue the goal of maximizing utility and, in particular, life expectancy, and for this, the development of science and technology is necessary.
The main features of the totalitarian model of APN in a unified state are well reflected in A. Huxley’s novel Brave New World [9:2]. The dominant NMs are optimized for the interests of state survival, while minimally conflicting with people’s interests (people are kept satisfied). Due to the lack of competition with other states, scientific and technological development has slowed; like a number of other phenomena, it has been sacrificed for stability. The author shows how the choice of the totalitarian path was predetermined by the specific available set of APN technologies: programming is carried out before birth and in early childhood, and further reprogramming is practically impossible.
At the same time, from a modern perspective, Huxley’s novel contains a number of obvious errors, owing to which the described world largely acquired a dystopian tinge. These include, in particular, the absurdly exaggerated role of consumption. In modern society, the orientation towards consumption is a way of creating incentives for people to work and, consequently, of securing higher rates of development and state competitiveness. In Brave New World, however, the factor of development speed is no longer decisive, and the strategy of orienting people towards consumption is suboptimal, if only because they can be programmed to work directly. Moreover, the author does not consider factors such as automation of production, artificial intelligence, and, most importantly, people’s desire to prolong life. All people are programmed to obediently accept the inevitability of death, which is impossible while maintaining high comfort and sufficiently high intellectual abilities (at least in part of the population). The desire to prolong people’s lives would not allow society to slow down development too much.
Conclusion
The natural course of evolutionary development of the organic world established a close connection between the striving of individuals for comfort and the survival of species. Natural selection put all animal activity at the service of the survival task, although the behavior of the organisms themselves was motivated only by the pursuit of relatively short-term, or even entirely momentary, comfort.
Humans were able to realize the task of survival explicitly and thus separate it from the task of maximizing instantaneous and short-term comfort. For the vast majority of people in the past, survival remained the main task, since it posed a greater problem than maintaining comfort at the moderate level provided by traditional life.
Modern society, in comparison with traditional society or the animal world, finds itself in a unique state where most activity (at least in developed countries) has shifted from survival to increasing comfort. The prerequisites for this were, firstly, an increase in average life expectancy almost to the natural biological limit (not yet overcome), and secondly, the emergence of new, widely available methods for increasing comfort. Thus, life expectancy became much less dependent on human actions than comfort. The established hedonistic orientation of modern civilization is often condemned from ethical positions, but at the current level of development of productive forces, it has in practice shown its stability and greater competitiveness compared to alternative models (attempting to artificially limit the possibilities of maximizing comfort).
The emergence of methods for artificial programming of needs, along with a radical increase in life expectancy, may create in the foreseeable future prerequisites for another change in the value paradigm – a reorientation of all activity back towards survival. This will, in a certain sense, be a return “to nature,” to the animal state, but already at a qualitatively new stage of evolutionary development. This work aims to show that, contrary to popular opinion, this will not lead to the degradation of society but, on the contrary, will increase its desire for development. Society may become simpler, more one-sided from the point of view of modern humans, but, at the same time, paradoxically combine the ideals of hedonism (easily accessible maximum enjoyment of life), Marxism (happy labor for the common good), scientism (orientation towards scientific and technological development), and even religious-ethical concepts (maximum relief of human suffering, resolution of conflicts, liberation from “animal” passions). People may radically change their bodies, become cyborgs, learn to arbitrarily change their form, and use any conceivable energy sources. But whatever fantastic transformations happen to humans in the distant future, the key question will remain about needs, about the motivation of activity. And it is this that will determine the course of further human development.
Lem S. Summa Technologiae. Translated from Polish by A. G. Gromova, D. I. Iordanskii, R. I. Nudelman, B. N. Panovkin, L. R. Pliner, R. A. Trofimov, Yu. A. Yaroshevskii. Moscow: Mir, 1968. 607 p. (in Russian) [Original: Lem S. Summa Technologiae. Krakow: Wydawnictwo literackie, 1967]
Lazarevich A. Generator zhelanii [The Desire Generator]. [Electronic resource]. 1995. Available here (in Russian)
Rossiiskoe transgumanisticheskoe dvizhenie [Russian Transhumanist Movement]. [Electronic resource]. Available here (in Russian)
Bessmertie. [Electronic resource]. Available here (in Russian)
Immortality. [Electronic resource]. Currently unavailable: http://immortality.ru
Deering M. Rassvet singulyarnosti. [Electronic resource]. Translated from English by P. Vasilev. Available here (in Russian) // Original: Deering M. S. The Dawn of Singularity. Available here
Kosarev V. V. Kto budet zhit’ na zemle v XXI veke? [Who will live on Earth in the 21st century?]. Neva. 1997. No. 10. p. 135–149. Available here (in Russian)
Vechnyi razum [Eternal Mind]. [Electronic resource]. Available here (in Russian)
Huxley A. O, divnyi novyi mir. Translated from English by O. Soroka, V. Babkov. St. Petersburg: Amfora, 1999. 541 p. (in Russian) [Original: Huxley A. Brave new world. London: Chatto & Windus, 1932. 306 p.]
Strugatskii A. N., Strugatskii B. N. Khishchnye veshchi veka [The Final Circle of Paradise]. In: Strugatskii A. N., Strugatskii B. N. Sobranie sochinenii [Collected Works]. Moscow: Tekst, 1992. Vol. 3. p. 413. (in Russian)
Pearce D. Hedonic Imperative. [Electronic resource]. 2005. Available here
Bolonkin A. A. Nauka, dusha, rai i vysshii razum [Science, Soul, Paradise and the Supreme Intelligence]. [Electronic resource]. 2001. Available here and here (in Russian)
Simonov P. V. Chto takoe emotsiya? [What is Emotion?]. Moscow: Nauka, 1966. 93 p. (in Russian)
Simonov P. V. Emotsional’nyi mozg [The Emotional Brain]. Moscow: Nauka, 1981. 215 p. (in Russian)
Freud S. Po tu storonu printsipa udovol’stviya. Moscow: Progress, 1992. 545 p. (in Russian) [Original: Freud S. Beyond the Pleasure Principle. London: Hogarth, 1920]
Dynnik M. A., ed. Materialisty Drevnei Gretsii. Sobranie tekstov Geraklita, Demokrita i Epikura [The Materialists of Ancient Greece. A Collection of Texts by Heraclitus, Democritus, and Epicurus]. Moscow: Gospolitizdat, 1955. 238 p. (in Russian)
Bentham J. Vvedenie v osnovaniya nravstvennosti i zakonodatel’stva. Moscow: Rosspen, 1998. 415 p. (in Russian) [Original: Bentham J. An introduction to the principles of morals and legislation. London, 1789]
Mill J. S. Utilitarianizm. O svobode. 3rd ed. St. Petersburg: Perevoznikov, 1900. 236 p. (in Russian) [Original: Mill J. S. Utilitarianism. London, 1863]
Gossen H. H. Entwickelung der Gesetze des menschlichen Verkehrs, und der daraus fliessenden Regeln für menschliches Handeln [Development of the Laws of Human Intercourse and the Consequent Rules for Human Action]. Berlin: R. L. Prager, 1889. (in German) [Original: Gossen H. H. Entwickelung der Gesetze des menschlichen Verkehrs, und der daraus fliessenden Regeln für menschliches Handeln. Braunschweig: Vieweg, 1854]
Jevons W. S. Politicheskaya ekonomiya. St. Petersburg: Narodnaya pol’za, 1905. (in Russian) [Original: Jevons W. S. Theory of Political Economy. London: Macmillan, 1871]
Menger K. Osnovaniya politicheskoi ekonomii [Principles of Economics]. In: Avstriiskaya shkola v politicheskoi ekonomii: K. Menger, E. Böhm-Bawerk, F. Wieser [The Austrian School of Economics: K. Menger, E. Böhm-Bawerk, F. Wieser]. Moscow: Ekonomika, 1992. (in Russian) [Original: Menger C. Grundsätze der Volkswirtschaftslehre. Vienna: W. Braumüller, 1871]
Walras L. Elementy chistoi politicheskoi ekonomii ili Teoriya obshchestvennogo bogatstva [Elements of Pure Economics: or, The Theory of Social Wealth]. Translated from French by I. A. Egorov, A. V. Belyanin. Moscow: Izograf, 2000. 421 p. (in Russian) [Original: Walras L. Éléments d’économie politique pure; ou, Théorie de la richesse sociale. Lausanne: Corbaz, 1874]
Gruter M. Law and the Mind. Biological Origins of Human Behavior. Newbury Park, London & New Delhi: SAGE Publ. 1991.
Danielli J. E. Altruism and the internal reward or the opium of the people. Journal of Social and Biological Structures. 1980. Vol. 3. p. 87–94.
Simon H. A. A mechanism for social selection and successful altruism. Science. 1990. Vol. 250. p. 1665–1668.
Milner P. Fiziologicheskaya psikhologiya. Translated from English by O. S. Vinogradova. Moscow: Mir, 1973. 647 p. (in Russian) [Original: Milner P. M. Physiological psychology. New York: Holt, Rinehart & Winston Inc., 1970]
Beer S. Mozg firmy. Moscow: Radio i svyaz’, 1993. 192 p. (in Russian) [Original: Beer S. Brain of the Firm: a development in management cybernetics. New York: Herder and Herder, 1972. 319 p.]
Korshunov A. M., Preobrazhenskaya I. S. Programmirovannaya smert’ kletok (apoptoz) [Programmed cell death (apoptosis)]. Nevrologicheskii zhurnal [Neurological Journal]. 1998. Vol. 3, No. 1. p. 40–47. (in Russian)
Hayflick L. The limited in vitro lifetime of human diploid cell strains. Experimental Cell Research. 1965. Vol. 37. p. 614–636.
Olovnikov A. M. Printsip marginotomii v matrichnom sinteze polinukleotidov [The principle of marginotomy in template synthesis of polynucleotides]. Doklady Akademii Nauk SSSR [Proceedings of the USSR Academy of Sciences]. 1971. Vol. 201, No. 6. p. 1496–1499. (in Russian)
Bodnar A. G., Ouellette M., Frolkis M., Holt S. E., Chiu C. P., Morin G. B., Harley C. B., Shay J. W., Lichtsteiner S., Wright W. E. Extension of life-span by introduction of telomerase into normal human cells. Science. 1998. Vol. 279. p. 349–352.
Kipyatkov V. E. Mir obshchestvennykh nasekomykh [The World of Social Insects]. Leningrad: Leningrad University Press, 1991. 408 p. (in Russian)