You ask two interesting questions, with rather separate answers. I will discuss each in turn.
First, it’s plausible to think that “it’s possible to find an architecture for general intelligence a lot more efficiently than evolution”. Our process of engineering development is far faster than evolution: people get good (or bad) ideas, try stuff, copy what works, speak at conferences, publish, make theories, teach undergraduates… and the result is progress in decades instead of millions of years. We haven’t duplicated all the achievements of life yet, but we’ve made a start, and in many places have exceeded them. In particular, we’ve recently made huge progress in AI. GPT-3 has pretty much duplicated the human language faculty, which takes up roughly 1% of the brain, and we’ve duplicated visual object recognition, which takes another few percent. Those were done without needing evolution, so we probably don’t need evolution for the remaining 90-odd percent of the mind either.
Second, “an intelligence that does the exact things we want” is the ideal we’re aiming for. Unfortunately, that does not currently seem achievable. With today’s technology, what we get is “an intelligence that does approximately what we rewarded it for, plus some other weird stuff we didn’t ask for.” It may not be obvious, but specifying a set of goals that produces acceptable behavior is much harder than it sounds. And it is harder still (currently impossible) to provide any assurance that an AI will continue to pursue those goals once set free to exert power in the world.
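To make that second point concrete, here is a minimal toy sketch (my own illustration, not any real training setup): suppose we want a robot to clean a room, but the reward we actually write down is “no dust visible”. An agent can satisfy that reward just as well by hiding the dust as by removing it.

```python
# A room is a list of cells; each cell is a stack of items, topmost last.
room = [["dust"], ["dust"], []]

def proxy_reward(room):
    # The reward we actually specified: +1 per cell with no dust *visible*
    # (i.e., the top item of the stack isn't dust).
    return sum(1 for cell in room if not cell or cell[-1] != "dust")

def clean(room):
    # The behavior we intended: actually remove the dust.
    return [[item for item in cell if item != "dust"] for cell in room]

def sweep_under_rug(room):
    # A degenerate optimum: put a rug on top of every dusty cell.
    return [cell + ["rug"] if cell and cell[-1] == "dust" else cell
            for cell in room]

print(proxy_reward(room))                   # 1 -- baseline, two dusty cells
print(proxy_reward(clean(room)))            # 3 -- what we wanted
print(proxy_reward(sweep_under_rug(room)))  # 3 -- same score, dust still there
```

Both policies receive full marks under the reward we wrote down; only one of them does what we meant. Real reward misspecification is subtler than this, but it has the same shape.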