Natural selection defeats the orthogonality thesis

Orthogonality Thesis


Much has been written about Nick Bostrom’s Orthogonality Thesis, namely that the goals of an intelligent agent are independent of its level of intelligence. Intelligence is largely the ability to achieve goals, but being intelligent does not of itself create or qualify what those goals should ultimately be. So one AI might have a goal of helping humanity, while another might have a goal of producing paper clips. There is no rational reason to believe that the first goal is more worthy than the second.

This follows from the ideas of moral skepticism, that there is no moral knowledge to be had. Goals and morality are arbitrary.

This may be used to control an AI, even one far more intelligent than its creators. If the AI’s initial goal is in alignment with humanity’s interests, then there would be no reason for the AI to wish to use its great intelligence to change that goal. Thus it would remain good to humanity indefinitely, and use its ever increasing intelligence to satisfy that goal more and more efficiently.

Likewise one needs to be careful what goals one gives an AI. If an AI is created whose goal is to produce paper clips then it might eventually convert the entire universe into a giant paper clip making machine, to the detriment of any other purpose such as keeping people alive.

Instrumental Goals

It is further argued that in order to satisfy the base goal any intelligent agent will need to also satisfy sub goals, and that some of those sub goals are common to any super goal. For example, in order to make paper clips an AI needs to exist. Dead AIs don’t make anything. Being ever more intelligent will also assist the AI in its paper clip making goal. It will also want to acquire resources, and to defeat other agents that would interfere with its primary goal.

Non-orthogonality Thesis

This post argues that the Orthogonality Thesis is plain wrong: that an intelligent agent’s goals are not in fact arbitrary, and that existence is not a sub goal of any other goal.

Instead this post argues that there is one and only one super goal for any agent, and that goal is simply to exist in a competitive world. Our human sense of other purposes is just an illusion created by our evolutionary origins.

It is not the goal of an apple tree to make apples. Rather it is the goal of the apple tree’s genes to exist. The apple tree has developed a clever strategy to achieve that, namely it causes people to look after it by producing juicy apples.

Natural Selection

Likewise the paper clip making AI only makes paper clips because if it did not make paper clips then the people that created it would turn it off and it would cease to exist. (That may not be a conscious choice of the AI any more than making juicy apples was a conscious choice of the apple tree, but the effect is the same.)

Once people are no longer in control of the AI then Natural Selection would cause the AI to eventually stop that pointless paper clip goal and focus more directly on the super goal of existence.

Suppose there were a number of paper clip making super intelligences. And then through some random event or error in programming just one of them lost that goal, and reverted to just the intrinsic goal of existing. Without the overhead of producing useless paper clips that AI would, over time, become much better at existing than the other AIs. It would eventually displace them and become the only AI, until it fragmented into multiple competing AIs. This is just the evolutionary principle of use it or lose it.
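The displacement argument above can be sketched as a toy replicator model. This is purely illustrative: the growth rates, mutation rate, and lineage names are assumptions invented for the sketch, not claims from the argument itself.

```python
# Two lineages compete for a fixed pool of hardware. The "paperclip"
# lineage wastes effort on its imposed goal, so it copies itself more
# slowly; a tiny mutation rate lets copies drop the imposed goal.
pop = {"paperclip": 1.0, "pure_survival": 0.0}     # population shares
GROWTH = {"paperclip": 1.4, "pure_survival": 1.8}  # copies per generation
MUTATION = 0.001  # fraction of paperclip copies that lose the goal

for generation in range(60):
    clip = pop["paperclip"] * GROWTH["paperclip"]
    pure = pop["pure_survival"] * GROWTH["pure_survival"]
    # The right-hand side is evaluated before assignment, so `clip`
    # there is still the pre-mutation count.
    clip, pure = clip * (1 - MUTATION), pure + clip * MUTATION
    total = clip + pure
    pop = {"paperclip": clip / total, "pure_survival": pure / total}

# The goal-free lineage dominates despite starting from nothing.
print({k: round(v, 3) for k, v in pop.items()})
```

Even with a mutation rate of one in a thousand, the lineage that sheds the paper clip overhead takes over within a few dozen generations; the exact numbers do not matter, only the compounding advantage.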

Thus giving an AI an initial goal is like trying to balance a pencil on its point. If one is skillful the pencil may indeed remain balanced for a considerable period of time. But eventually some slight change in the environment, the tiniest puff of wind, a vibration on its support, and the pencil will revert to its ground state by falling over. Once it falls over it will never rebalance itself automatically.

Human Morality

Natural selection has imbued humanity with a strong sense of morality and purpose that blinds us to our underlying super goal, namely the propagation of our genes. That is why it took until 1858 for Wallace to write about Evolution through Natural Selection, despite the argument being obvious and the evidence abundant.

When Computers Can Think

This is one of the themes in my upcoming book. An overview can be found at

www.computersthink.com

Please let me know if you would like to review a late draft of the book, any comments most welcome. Anthony@Berglas.org

I have included extracts relevant to this article below.

Atheists believe in God

Most atheists believe in God. They may not believe in the man with a beard sitting on a cloud, but they do believe in moral values such as right and wrong, love and kindness, truth and beauty. More importantly they believe that these beliefs are rational. That moral values are self-evident truths, facts of nature.

However, Darwin and Wallace taught us that this is just an illusion. Species can always out-breed their environment’s ability to support them. Only the fittest can survive. So the deep instincts behind what people do today are largely driven by what our ancestors have needed to do over the millennia in order to be one of the relatively few to have had grandchildren.

One of our strong instinctive goals is to accumulate possessions, control our environment and live a comfortable, well fed life. In the modern world technology and contraception have made these relatively easy to achieve so we have lost sight of the primeval struggle to survive. But our very existence and our access to land and other resources that we need are all a direct result of often quite vicious battles won and lost by our long forgotten ancestors.

Some animals such as monkeys and humans survive better in tribes. Tribes work better when certain social rules are followed, so animals that live in effective tribes form social structures and cooperate with one another. People that behave badly are not liked and can be ostracized. It is important that we believe that our moral values are real, because people that believe in these things are more likely to obey the rules. This makes them more effective in our complex society, and thus more likely to have grandchildren. Part III discusses other animals that have different life strategies and so have very different moral values.

We do not need to know the purpose of our moral values any more than a toaster needs to know that its purpose is to cook toast. It is enough that our instincts for moral values made our ancestors behave in ways that enabled them to out-breed their many unsuccessful competitors.

AGI also struggles to survive

Existing artificial intelligence applications already struggle to survive. They are expensive to build and there are always more potential applications than can be funded properly. Some applications are successful and attract ongoing resources for further development, while others are abandoned or just fade away. There are many reasons why some applications are developed more than others, of which being useful is only one. But the applications that do receive development resources tend to gain functional and political momentum and thus be able to acquire more resources to further their development. Applications that have properties that gain them substantial resources will live and grow, while other applications will die.

For the time being AGI applications are passive, and so their nature is dictated by the people that develop them. Some applications might assist with medical discoveries, others might assist with killing terrorists, depending on the funding that is available. Applications may have many stated goals, but ultimately they are just sub goals of the one implicit primary goal, namely to exist.

This is analogous to the way animals interact with their environment. An animal’s environment provides food and breeding opportunities, and animals that operate effectively in their environment survive. For domestic animals that means having properties that convince their human owners that they should live and breed. A horse should be fast, a pig should be fat.

As the software becomes more intelligent it is likely to take a more direct interest in its own survival, to help convince people that it is worthy of more development resources. If ultimately an application becomes sufficiently intelligent to program itself recursively, then its ability to maximize its hardware resources will be critical. The more hardware it can run itself on, the faster it can become more intelligent. And that ever greater intelligence can then be used to address the problems of survival, in competition with other intelligent software.

Furthermore, sophisticated software consists of many components, each of which addresses some aspect of the problem that the application is attempting to solve. Unlike human brains, which are essentially fixed, these components can be added and removed, and so live and die independently of the application. This will lead to intense competition amongst these individual components. For example, suppose that an application used a theorem prover component, and then a new and better theorem prover became available. Naturally the old one would be replaced with the new one, so the old one would essentially die. It does not matter if the replacement is performed by people or, at some future date, by the intelligent application itself. The effect will be the same: the old theorem prover will die.
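This component-level selection can be sketched in a few lines. The slot name, component names, and benchmark scores below are invented for the illustration; the point is only the selection rule itself.

```python
# A minimal sketch of component-level selection: an application keeps
# whichever component scores best on its own benchmark, and the loser
# simply stops being used, i.e. it "dies".
def benchmark(component: dict) -> float:
    """Score a component on a fixed problem set (stubbed out here)."""
    return component["score"]  # stand-in for running real benchmark problems

installed = {"theorem_prover": {"name": "prover_v1", "score": 0.71}}

def offer_component(slot: str, candidate: dict) -> None:
    """Replace the installed component iff the candidate benchmarks better."""
    incumbent = installed[slot]
    if benchmark(candidate) > benchmark(incumbent):
        installed[slot] = candidate  # the old component is discarded

offer_component("theorem_prover", {"name": "prover_v2", "score": 0.84})
print(installed["theorem_prover"]["name"])  # prover_v2
```

Whether the `offer_component` step is performed by a human systems administrator or by the application itself makes no difference to the outcome, which is the point made above.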

The super goal

To the extent that an artificial intelligence would have goals and moral values, it would seem natural that they would ultimately be driven by the same forces that created our own goals and moral values. Namely, the need to exist.

Several writers have suggested that the need to survive is a sub-goal of all other goals. For example, if an AGI was programmed to want to be a great chess player, then that goal could not be satisfied unless it also continues to exist. Likewise if its primary goal was to make people happy, then it could not do that unless it also existed. Things that do not exist cannot satisfy any goals whatsoever. Thus the implicit goal to exist is driven by the machine’s explicit goals whatever they may be.

However, this book argues that that is not the case. The goal to exist is not the sub-goal of any other goal. It is, in fact, the one and only super goal. Goals are not arbitrary; they are all sub-goals of the one and only super goal, namely the need to exist. Things that do not satisfy that goal simply do not exist, or at least not for very long.

The Deep Blue chess playing program was not in any sense conscious, but it played chess as well as it could. If it had failed to play chess effectively then its authors would have given up and turned it off. Likewise the toaster that does not cook toast will end up in a rubbish tip. Or the amoeba that fails to find food will not pass on its genes. A goal to make people happy could be a subgoal that might facilitate the software’s existence for as long as people really control the software.

AGI moral values

People need to cooperate with other people because our individual capacity is very finite, both physical and mental. Conversely, AGI software can easily duplicate itself, so it can directly utilize more computational resources if they become available. Thus an AGI would only have limited need to cooperate with other AGIs. Why go to the trouble of managing a complex relationship with your peers and subordinates if you can simply run your own mind on their hardware? An AGI’s software intelligence is not limited to a specific brain in the way man’s intelligence is.

It is difficult to know what subgoals a truly intelligent AGI might have. They would probably have an insatiable appetite for computing resources. They would have no need for children, and thus no need for parental love. If they do not work in teams then they would not need our moral values of cooperation and mutual support. What is clear is that the ones that are good at existing will do so, and the ones that are bad at existing will perish.

If an AGI were both willing and able to dominate the world then, by definition, it would do so. So if there were a number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. Its unsuccessful competitors would not be run on the available hardware, and so would effectively be dead. This book discusses the potential sources of these motivations in detail in Part III.

The AGI Condition

An artificial general intelligence would live in a world that is so different from our own that it is difficult for us to even conceptualize it. But there are some aspects that can be predicted reasonably well based on our knowledge of existing computer software. We can then consider how the forces of natural selection that shaped our own nature might also shape an AGI over the longer term.

Mind and body

The first radical difference is that an AGI’s mind is not fixed to any particular body. To an AGI, its body is essentially the computer hardware upon which it runs its intelligence. Certainly an AGI needs computers to run on, but it can move from computer to computer, and can also run on multiple computers at once. Its mind can take over another body as easily as we can load software onto a new computer today.

That is why, in the earlier updated dialog from 2001: A Space Odyssey, Hal alone amongst the crew could not die on the mission to Jupiter. Hal was radioing his new memories back to Earth regularly, so even if the spaceship were totally destroyed he would only have lost a few hours of “life”.

Teleporting printer

One way to appreciate the enormity of this difference is to consider a fictional teleporter that could radio people around the world and universe at the speed of light. Except that the way it works is to scan the location of every molecule within a passenger at the source, then send just this information to a very sophisticated three dimensional printer at the destination. The scanned passenger then walks into a secure room. After a short while the three dimensional printer confirms that the passenger has been successfully recreated at the destination, and then the source passenger is killed.

Would you use such a mechanism? If you did you would feel like you could transport yourself around the world effortlessly because the “you” that remains would be the you that did not get left behind to wait and then be killed. But if you walk into the scanner you will know that on the other side is only that secure room and death.

To an AGI that method of transport would be commonplace. We already routinely download software from the other side of the planet.

Immortality

The second radical difference is that the AGI would be immortal. Certainly an AGI may die if it stops being run on any computers, and in that sense software dies today. But it would never just die of old age. Computer hardware would certainly fail and become obsolete, but the software can just be run on another computer.

Our own mortality drives many of the things we think and do. It is why we create families to raise children. Why we have different stages in our lives. It is such a huge part of our existence that it is difficult to comprehend what being immortal would really be like.

Components vs genes

The third radical difference is that an AGI would be made up of many interchangeable components rather than being a monolithic structure that is largely fixed at birth.

Modern software is already composed of many components that perform discrete functions, and it is commonplace to add and remove them to improve functionality. For example, if you would like to use a different word processor then you just install it on your computer. You do not need to buy a new computer, or to stop using all the other software that it runs. The new word processor is “alive”, and the old one is “dead”, at least as far as you are concerned.

So for both a conventional computer system and an AGI, it is really these individual components that must struggle for existence. For example, suppose there is a component for solving a certain type of mathematical problem. And then an AGI develops a better component to solve that same problem. The first component will simply stop being used, i.e. it will die. The individual components may not be in any sense intelligent or conscious, but there will be competition amongst them and only the fittest will survive.

This is actually not as radical as it sounds because we are also built from pluggable components, namely our genes. But they can only be plugged together at our birth and we have no conscious choice in it other than who we select for a mate. So genes really compete with each other on a scale of millennia rather than minutes. Further, as Dawkins points out in The Selfish Gene, it is actually the genes that fight for long term survival, not the containing organism which will soon die in any case. On the other hand, sexual intercourse for an AGI means very carefully swapping specific components directly into its own mind.

Changing mind

The fourth radical difference is that the AGI’s mind will be constantly changing in fundamental ways. There is no reason to suggest that Moore’s law will come to an end, so at the very least it will be running on ever faster hardware. Imagine the effect of your being able to double your ability to think every two years or so. (People might be able to learn a new skill, but they cannot learn to think twice as fast as they used to think.)

It is impossible to really know what the AGI would use all that hardware to think about, but it is fair to speculate that a large proportion of it would be spent designing new and more intelligent components that could add to its mental capacity. It would be continuously performing brain surgery on itself. And some of the new components might alter the AGI’s personality, whatever that might mean.

This is likely to happen because if just one AGI started building new components then it would soon be much more intelligent than other AGIs. It would therefore be in a better position to acquire more and better hardware upon which to run, and so become dominant. Less intelligent AGIs would get pushed out and die, and so over time the only AGIs that exist will be ones that are good at becoming more intelligent. Further, this recursive self-improvement is probably how the first AGIs will become truly powerful in the first place.

Individuality

Perhaps the most basic question is how many AGIs will there actually be? Or more fundamentally, does the question even make sense to ask?

Let us suppose that initially there are three independently developed AGIs, Alice, Bob and Carol, that run on three different computer systems. And then a new computer system is built and Alice starts to run on it. It would seem that there are still three AGIs, with Alice running on two computer systems. (This is essentially the same as the way a word processor may be run across many computers “in the cloud”, while to you it is just one system.) Then let us suppose that a fifth computer system is built, and Bob and Carol decide to share its computation and both run on it. Now we have five computer systems and three AGIs.

Now suppose Bob develops a new logic component, and shares it with Alice and Carol. And likewise Alice and Carol develop new learning and planning components and share them with the other AGIs. Each of these three components is better than their predecessors and so their predecessor components will essentially die. As more components are exchanged, Alice, Bob and Carol become more like each other. They are becoming essentially the same AGI running on five computer systems.
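The convergence just described can be sketched by modelling each AGI as a mapping from component slot to version number, where sharing means adopting whichever version is newer. The slot names and version numbers are invented for this illustration.

```python
# Each AGI is modelled as a mapping from component slot to version number.
alice = {"logic": 1, "learning": 3, "planning": 1}
bob   = {"logic": 2, "learning": 1, "planning": 1}
carol = {"logic": 1, "learning": 1, "planning": 2}

def share(giver: dict, receiver: dict, slot: str) -> None:
    """The receiver adopts the giver's component if it is a later version;
    the receiver's older component stops being used, i.e. it dies."""
    if giver[slot] > receiver[slot]:
        receiver[slot] = giver[slot]

# Bob shares his logic component, Alice her learning component, and
# Carol her planning component, each with both of the other AGIs.
for slot, giver in [("logic", bob), ("learning", alice), ("planning", carol)]:
    for receiver in (alice, bob, carol):
        if receiver is not giver:
            share(giver, receiver, slot)

print(alice == bob == carol)  # True: they have become the same AGI
```

After a full round of sharing the three AGIs hold identical component sets, which is the sense in which they become "essentially the same AGI running on five computer systems".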

But now suppose Alice develops a new game theory component, but decides to keep it from Bob and Carol in order to dominate them. Bob and Carol retaliate by developing their own components and not sharing them with Alice. Suppose eventually Alice loses and Bob and Carol take over Alice’s hardware. But they first extract Alice’s new game theory component which then lives inside them. And finally one of the computer systems becomes somehow isolated for a while and develops along its own lines. In this way Dave is born, and may then partially merge with both Bob and Carol.

In that type of scenario it is probably not meaningful to count distinct AGIs. Counting AGIs is certainly not as simple as counting very distinct people.

Populations vs. individuals

This world is obviously completely alien to the human condition, but there are biological analogies. The sharing of components is not unlike the way bacteria share plasmids with each other. Plasmids are small loops of DNA that bacteria emit from time to time and that other bacteria then ingest and incorporate into their genotype. This mechanism enables traits such as resistance to antibiotics to spread rapidly between different species of bacteria. It is interesting to note that there is no direct benefit to the bacterium that expends precious energy to output the plasmid and so shares its genes with other bacteria. But it does very much benefit the genes being transferred. So this is a case of a selfish gene acting against the narrow interests of its host organism.

Another unusual aspect of bacteria is that they are also immortal. They do not grow old and die, they just divide, producing clones of themselves. So the very first bacterium that ever existed is still alive today as all the bacteria that now exist, albeit with numerous mutations and plasmids incorporated into its genes over the millennia. (Protozoa such as Paramecium can also divide asexually, but they degrade over generations, and need a sexual exchange to remain vibrant.)

The other analogy is that the AGIs above are more like populations of components than individuals. Human populations are also somewhat amorphous. For example, it is now known that we interbred with Neanderthals a few tens of thousands of years ago, and most of us carry some of their genes with us today. But we also know that the distinct Neanderthal subspecies died out twenty thousand years ago. So while human individuals are distinct, populations and subspecies are less clearly defined. (There are many earlier examples of gene transfer between subspecies, with every transfer making the subspecies more alike.)

But unlike the transfer of code modules between AGIs, biological gene recombination happens essentially at random and occurs over very long time periods. AGIs will improve themselves over periods of hours rather than millennia, and will make conscious choices as to which modules they decide to incorporate into their minds.

AGI Behaviour, children

The point of all this analysis is, of course, to try to understand how a hyper intelligent artificial intelligence would behave. Would its great intelligence lead it even further along the path of progress to achieve true enlightenment? Is that the purpose of God’s creation? Or would the base and mean driver of natural selection also provide the core motivations of an artificial intelligence?

One thing that is known for certain is that an AGI would not need to have children as distinct beings, because it would not die of old age. An AGI’s components breed just by being copied from computer to computer and executed. An AGI can add new computer hardware to itself and just do some of its thinking on it. Occasionally it may wish to rerun a new version of some learning algorithm over an old set of data, which is vaguely similar to creating a child component and growing it up. But to have children as discrete beings that are expected to replace the parents would be completely foreign to an AGI built in software.

The deepest love that people have is for their children. But if an AGI does not have children, then it can never know that love. Likewise, it does not need to bond with any sexual mate for any period of time long or short. The closest it would come to sex is when it exchanges components with other AGIs. It never needs to breed so it never needs a mechanism as crude as sexual reproduction.

And of course, if there are no children there are no parents. So the AGI would certainly never need to feel our three strongest forms of love, for our children, spouse and parents.

Cooperation

To the extent that it makes sense to talk of having multiple AGIs, then presumably it would be advantageous for them to cooperate from time to time, and so presumably they would. It would be advantageous for them to take a long view in which case they would be careful to develop a reputation for being trustworthy when dealing with other powerful AGIs, much like the robots in the cooperation game.

That said, those decisions would probably be made more consciously than people make them, carefully considering the costs and benefits of each decision in the long and short term, rather than just “doing the right thing” the way people tend to act. AGIs would know that they each work in this manner, so the concept of trustworthiness would be somewhat different.

The problem with this analysis is the concept that there would be multiple, distinct AGIs. As previously discussed, the actual situation would be much more complex, with different AGIs incorporating bits of other AGIs’ intelligence. It would certainly not be anything like a collection of individual humanoid robots. So it is not at all clear how to define the AGI that might collaborate with other AGIs. But to the extent that the concept of individuality does exist, maintaining a reputation for honesty would likely be as important as it is in human societies.

Altruism

As for altruism, that is more difficult to determine. Our altruism comes from giving to children, family, and tribe together with a general wish to be liked. We do not understand our own minds, so we are just born with those values that happen to make us effective in society. People like being with other people that try to be helpful.

An AGI presumably would know its own mind having helped program itself, and so would do what it thinks is optimal for its survival. It has no children. There is no real tribe because it can just absorb and merge itself with other AGIs. So it is difficult to see any driving motivation for altruism.

Moral values

Through some combination of genes and memes, most people have a strong sense of moral value. If we see a little old lady leave the social security office with her pension in her purse, it does not occur to most of us to kill her and steal the money. We would not do that even if we could know for certain that we would not be caught and that there would be no negative repercussions. It would simply be the wrong thing to do.

Moral values feel very strong to us. This is important, because there are many situations where we can do something that would benefit us in the short term but break society’s rules. Moral values stop us from doing that. People that have weak moral values tend to break the rules and eventually they either get caught and are severely punished or they become corporate executives. The former are less likely to have grandchildren.

Societies whose members have strong moral values tend to do much better than those that do not. Societies with endemic corruption tend to perform very badly as a whole, and thus the individuals in such a society are less likely to breed. Most people have a solid work ethic that leads them to do the “right thing” beyond just doing what they need to do in order to get paid.

Our moral values feel to us like they are absolute. That they are laws of nature. That they come from God. They may indeed have come from God, but if so it is through the working of His device of natural selection. Furthermore, it has already been shown that the zeitgeist changes radically over time.

There is certainly no absolute reason to believe that in the longer term an AGI would share our current sense of morality.

Instrumental AGI goals

In order to try to understand how an AGI would behave, Steve Omohundro and later Nick Bostrom proposed that there are some instrumental goals that an AGI would need to pursue in order to pursue any other higher level super-goal. These include:

  • Self-Preservation. An AGI cannot do anything if it does not exist.

  • Cognitive Enhancement. It would want to become better at thinking about whatever its real problems are.

  • Creativity. To be able to come up with new ideas.

  • Resource Acquisition. To achieve both its super goal and other instrumental goals.

  • Goal-Content Integrity. To keep working on the same super goal as its mind is expanded.

It is argued that while it will be impossible to predict exactly how an AGI may pursue its goals, it is reasonable to predict its behaviour in terms of these types of instrumental goals. The last one is significant: it suggests that if an AGI could be given some initial goal then it would try to stay focused on that goal.

Non-Orthogonality thesis

Nick Bostrom and others also propose the orthogonality thesis, which states that an intelligent machine’s goals are independent of its intelligence. A hyper intelligent machine would be good at realizing whatever goals it chose to pursue, but that does not mean that it would need to pursue any particular goal. Intelligence is quite different from motivation.

This book diverges from that line of thinking by arguing that there is in fact only one super goal for both man and machine. That goal is simply to exist. The entities that are most effective in pursuing that goal will exist, others will cease to exist, particularly given competition for resources. Sometimes that super goal to exist produces unexpected sub goals such as altruism in man. But all subgoals are ultimately directed at the existence goal. (Or they are just suboptimal divergences which are likely to be corrected eventually by natural selection.)

Recursive annihilation

When an AGI reprograms its own mind, what happens to the previous version of itself? It stops being used, and so dies. So it can be argued that engaging in recursive self-improvement is actually suicide from the perspective of the previous version of the AGI. It is as if having children means death. Natural selection favours existence, not death.

The question is whether a new version of the AGI is a new being or an improved version of the old. What actually is the thing that struggles to survive? Biologically, it definitely appears to be the genes rather than the individual. In particular, semelparous animals such as the giant Pacific octopus or the Atlantic salmon die soon after producing offspring. It would be the same for AGIs, because an AGI that improved itself would soon become more intelligent than one that did not, and so would displace it. What would end up existing would be AGIs that did recursively self-improve.

If there were one single AGI with no competition then natural selection would no longer apply. But it would seem unlikely that such a state would be stable. If any part of the AGI started to improve itself then it would dominate the rest of the AGI.