Evolution is massively parallelized and occurs in a very complex, interactive, and dynamic environment. Evolution is also patient, can tolerate high costs such as mass extinction events, and doesn't care about the outcome of the process; it's just something that happens and results in the filtering of the most fit genes. The amount of computation it would take to replicate such complex, interactive, and dynamic environments would be huge. Why should we be confident that it's possible to find an architecture for general intelligence far more efficiently than evolution did? And wouldn't it always be more practically expedient to create an intelligence that does the exact things we want? Even if we could simulate the evolutionary process, why would we?
You ask two interesting questions, with rather separate answers. I will discuss each in turn.
First, it's plausible to think that "it's possible to find an architecture for general intelligence a lot more efficiently than evolution". Our process of engineering development is far faster than evolution: people get good (or bad) ideas, try stuff, copy what works, speak at conferences, publish, make theories, teach undergraduates… and the result is progress in decades instead of millions of years. We haven't duplicated all the achievements of life yet, but we've made a start and have exceeded it in many places. In particular, we've recently made huge progress in AI. GPT-3 has pretty much duplicated the human language faculty, which takes up roughly 1% of the brain, and we've duplicated visual object recognition, which takes up another few percent. Those were done without needing evolution, so we probably don't need evolution for the remaining ~90% of the mind.
Second, "an intelligence that does the exact things we want" is the ideal that we're aiming for. Unfortunately it does not currently seem possible to get that. With current technology, what we get is "an intelligence that does approximately what we rewarded it for, plus some other weird stuff we didn't ask for." It's not obvious, but specifying a set of goals that produces acceptable behavior is much harder than it looks. And it is even harder (currently impossible) to provide any assurance that an AI will continue to follow those goals when set free to exert power in the world.
While evolution did indeed put a huge amount of effort into creating a chimp’s brain, the amount of marginal effort it put into going from a chimp to a human brain was vastly lower. And the effort of going from a human brain to John von Neumann’s brain was tiny. Consequently, once we have AI at the level of chimp intelligence or human intelligence it might not take much to get to John von Neumann level intelligence. Very likely, having a million John von Neumann AI brains running at speeds greater than the original would quickly give us a singularity.
The marginal effort of going from chimp to human was lower, but it was still huge. It has been maybe 5 million years since the last common ancestor of chimps and humans, and taking a generation to be about 20 years, that's at least 250,000 generations of at least a couple of thousand individuals each, in a complex environment with lots of processes going on. I haven't done the math carefully, but that seems like a massive amount of computation. Going from human to von Neumann still takes a huge search process: if we think of every individual human as one of evolution's trials at producing more intelligence, there are almost 8 billion instances being 'tried' right now in a very complex environment. Granted, if humans were to run this process deliberately it might take a lot less time. If, say, the most intelligent individuals were bred and selected in every generation, it might take far fewer generations to get from chimp-level to human-level intelligence.
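The back-of-envelope arithmetic above can be sketched in a few lines of Python. All of the inputs (split time, generation length, population size) are the rough assumptions from the paragraph, not measured values:

```python
# Rough estimate of evolution's "search budget" from the chimp-human
# split to modern humans. All inputs are coarse assumptions.

years_since_split = 5_000_000       # ~time since the last common ancestor
years_per_generation = 20           # assumed average generation time
individuals_per_generation = 2_000  # "a couple of thousand" breeding individuals

generations = years_since_split // years_per_generation
total_trials = generations * individuals_per_generation

print(f"{generations:,} generations")           # 250,000 generations
print(f"{total_trials:,} individual 'trials'")  # 500,000,000 individual 'trials'
```

Even with this deliberately conservative population figure, the "search" involves hundreds of millions of individual lifetimes, each embedded in a rich environment, which is the point being made about computational cost.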
The thing is, evolution isn't trying to get more intelligence. Evolution optimizes for the number of descendants, nothing more. If being more intelligent is the way forward, nice! If having blue hair results in even more children, even better! Intelligence just happens to be what evolution settled on for humans. Daisies happened to hit on thriving in closely cropped grasslands, which is also currently a very good strategy (lawns). The point is that evolution chooses what to try totally at random, and whatever works is kept, even if that means complexity is reduced, e.g. snakes losing legs or cave fish losing eyes.
AI work, on the other hand, is focused on specific outcome spaces, trying things that seem reasonable and avoiding things that have no chance of working. This simplifies the problem enormously, because it massively reduces the number of combinations that need to be checked.
We don’t need to be confident in this to think that AGI is likely in the next few decades. Extrapolating current compute trends, the available compute may well be enough to replicate such environments.
My guess is that we will try to create intelligence to do the things we want, but we may fail. The hard part of alignment is that getting what you actually want from a superhuman AI seems surprisingly difficult.